category (stringclasses, 191 values) | search_query (stringclasses, 434 values) | search_type (stringclasses, 2 values) | search_engine_input (stringclasses, 748 values) | url (stringlengths, 22–468) | title (stringlengths, 1–77) | text_raw (stringlengths, 1.17k–459k) | text_window (stringlengths, 545–2.63k) | stance (stringclasses, 2 values)
---|---|---|---|---|---|---|---|---|
Ornithology
|
Did archaeopteryx really fly?
|
yes_statement
|
"archaeopteryx" was capable of flight.. "archaeopteryx" had the ability to "fly".
|
https://www.heritagedaily.com/2014/07/archaeopteryx-plumage-first-show-off-then-take-off/103966
|
Archaeopteryx plumage: First show off, then take off
|
Archaeopteryx plumage: First show off, then take off
LMU paleontologists are currently studying a new specimen of Archaeopteryx, revealing previously unknown features of the plumage. The findings unveil information on the original function of feathers and their recruitment for flight.
150 years after its discovery and a mere 150 million years since it took flight, Archaeopteryx still has hidden mysteries that need solving: the eleventh specimen of the iconic “basal bird” turns out to have the best-preserved plumage found thus far, allowing detailed comparisons to be made with other feathered dinosaurs. The fossil is still undergoing extensive examination by a team led by Dr. Oliver Rauhut, a paleontologist in the Department of Earth and Environmental Sciences at LMU Munich, who is also associated with the Bavarian State Collection for Paleontology and Geology in Munich. The primary results of their analysis of the plumage are reported in the latest issue of the journal Nature. This new information makes a major contribution to the ongoing debate over the evolution of feathers and its relation to avian flight. The data also imply that the connection between feather development and the origin of flight is potentially much more complex than previously thought.
“For the first time, it has become possible to examine the detailed structure of the feathers on the body, the tail and, above all, on the legs,” says Oliver Rauhut. In the case of this new specimen, the feathers are, for the most part, preserved as impressions in the rock matrix. “Comparisons with other feathered predatory dinosaurs indicate that the plumage in the different regions of the body varied widely between these species. That suggests that primordial feathers did not evolve in connection with flight-related roles, but originated in other functional contexts,” says Dr. Christian Foth of LMU and the Bavarian State Collection for Paleontology and Geology in Munich, first author on the new paper.
Feathered Wing Fossil: LMU Munich
To keep warm and to catch the eye
It has now been discovered that predatory dinosaurs (theropods) with body plumage predate Archaeopteryx, and that their feathers probably provided thermal insulation. Advanced species of predatory dinosaurs and primitive birds with feathered forelimbs potentially used them as balance organs when running, much as ostriches do today. Furthermore, feathers may have served useful functions in brooding, camouflage and display. The feathers on the tail, wings and hind limbs probably fulfilled display functions, but it is very unlikely that Archaeopteryx was capable of flight. “Interestingly, the lateral feathers in the tail of Archaeopteryx had an aerodynamic form, and most probably played an important role in its aerial abilities,” says Foth.
In the course of their investigation of the new fossil’s plumage, the research team was also able to clarify the taxonomic relationship between Archaeopteryx and other species of feathered dinosaur. The diversity in form and distribution of the feather tracts is particularly striking. For instance, among dinosaurs that had feathers on their legs, many had long feathers that extended as far as their toes, while others had shorter, down-like plumage. “If feathers had evolved originally for flight, functional constraints should have restricted their range of variation. And in primitive birds we do see less variation in wing feathers than in those on the hind-limbs or the tail,” explains Foth.
The findings suggest that feathers acquired their aerodynamic functions secondarily. “Once feathers had been invented, they could be co-opted for flight. It is even possible that the ability to fly evolved more than once within the theropods,” says Rauhut. “Since the feathers were already present, different groups of predatory dinosaurs and their descendants, the birds, could have exploited these structures in different ways.” The new discovery also contradicts the theory that powered avian flight evolved from earlier four-winged species that had the ability to glide.
A cultural treasure
Archaeopteryx represents a transitional form between reptiles and birds and is possibly both the earliest and best-known bird fossil. It is concrete evidence that modern birds are direct descendants of predatory dinosaurs, and thus are themselves modern-day dinosaurs. The many new fossil species of feathered dinosaurs discovered in China in recent years have made it possible to place Archaeopteryx within a larger evolutionary context. Yet when feathers first appeared and how often flight evolved are questions that remain unresolved.
The eleventh known specimen of Archaeopteryx is still privately owned. Like all the other examples of the genus, it was discovered in the Altmühl Valley in Bavaria, which in Late Jurassic times lay in the northern tropics, at the bottom of a shallow sea; all Archaeopteryx fossils found so far have been recovered from its limestone deposits. The collector was not only willing to make the specimen available for study, he also had it registered on the list of protected German Cultural Treasures to ensure that it would remain accessible to science. “This is a very good example of successful cooperation between private collectors and academic paleontologists,” says Rauhut. The detailed analysis of the fossil was made possible by financial support from the Volkswagen Foundation.
Mark Milligan is an award winning journalist and the Managing Editor at HeritageDaily. His background is in archaeology and computer science, having written over 7,000 articles across several online publications. Mark is a member of the Association of British Science Writers (ABSW) and in 2023 was the recipient of the British Citizen Award for Education and the BCA Medal of Honour.
|
Advanced species of predatory dinosaurs and primitive birds with feathered forelimbs potentially used them as balance organs when running, much as ostriches do today. Furthermore, feathers may have served useful functions in brooding, camouflage and display. The feathers on the tail, wings and hind limbs probably fulfilled display functions, but it is very unlikely that Archaeopteryx was capable of flight. “Interestingly, the lateral feathers in the tail of Archaeopteryx had an aerodynamic form, and most probably played an important role in its aerial abilities,” says Foth.
In the course of their investigation of the new fossil’s plumage, the research team was also able to clarify the taxonomic relationship between Archaeopteryx and other species of feathered dinosaur. The diversity in form and distribution of the feather tracts is particularly striking. For instance, among dinosaurs that had feathers on their legs, many had long feathers that extended as far as their toes, while others had shorter, down-like plumage. “If feathers had evolved originally for flight, functional constraints should have restricted their range of variation. And in primitive birds we do see less variation in wing feathers than in those on the hind-limbs or the tail,” explains Foth.
The findings suggest that feathers acquired their aerodynamic functions secondarily. “Once feathers had been invented, they could be co-opted for flight. It is even possible that the ability to fly evolved more than once within the theropods,” says Rauhut. “Since the feathers were already present, different groups of predatory dinosaurs and their descendants, the birds, could have exploited these structures in different ways.” The new discovery also contradicts the theory that powered avian flight evolved from earlier four-winged species that had the ability to glide.
A cultural treasure
Archaeopteryx represents a transitional form between reptiles and birds and is possibly both the earliest and best-known bird fossil. It is concrete evidence that modern birds are direct descendants of predatory dinosaurs, and thus are themselves modern-day dinosaurs.
|
no
|
Paleobotany
|
Did flowers exist in the age of dinosaurs?
|
yes_statement
|
"flowers" "existed" in the "age" of "dinosaurs".. there were "flowers" during the "age" of "dinosaurs".
|
https://www.prehistoriclife.xyz/cretaceous-period/the-rise-of-flowering-plants.html
|
The Rise Of Flowering Plants - Cretaceous Period - Prehistoric Life
|
The Rise Of Flowering Plants
Until the Early Cretaceous Epoch, the world's flora was dominated by ferns and gymnosperms—seed plants whose seed embryos are not protected by a fruit, cone, or other body. Gymnosperms first appeared in the late Paleozoic Era and became dominant during the first half of the Mesozoic Era. They are still represented today by more than 600 known species of conifers (evergreen trees), cycads, gnetophytes, and Ginkgo, none of which have flowers or fruits. Gymnosperms are typically tough and hardy. Their woody pulp, thick bark, branches, and needles or frondlike leaves are difficult to chew. Herbivorous dinosaurs of the Jurassic Period—including the sauropods, stegosaurs, and ankylosaurs—developed jaws, teeth, and digestive systems capable of extracting nutrition from the likes of evergreens, cycads, and other tough gymnosperms. The animals' basic digestive strategy was to minimize chewing of the food in the mouth and to use a fermentation process in the stomach to slowly extract nourishment from the nutritionally stingy gymnosperms.
Gymnosperms were successful at surviving in a Jurassic world with a moderately warm and arid climate. They relied only on wind to carry pollen to their seeds. Pollinated gymnosperm seeds grew slowly, and when fully grown they often took the form of tall trees in species such as ginkgos and conifers. Slow growth and height discouraged consumption by herbivores. Sauropod dinosaurs adapted by growing taller to reach the ever-more-lofty canopies of gymnosperms and also developed consumption habits that allowed them to eat fairly constantly in order to derive enough sustenance from the nutritionally stingy conifers and cycads.
THINK ABOUT IT
Dinosaurs of the Poles
The polar regions of today's world are the coldest and harshest on the planet. Most kinds of organisms would not survive for long if left to fend for themselves above the Arctic Circle or in Antarctica. The polar regions of the Earth were not always so uninhabitable, however, and there is growing evidence that a wide variety of dinosaurs lived within the polar circles of the Mesozoic.
Even though the middle latitudes of the Earth were uniformly warm during the Mesozoic Era, temperatures at the poles would have been somewhat cooler, even without the presence of ice caps. Studies of fossil plants and associated oxygen isotope studies of polar sediments have been carried out to determine the average annual temperatures of the Mesozoic polar regions. Results suggest that the North Pole had a mean average temperature between 36° and 46°F (2° and 8 °C) and the South Pole about 50 °F (10 °C)—not tropical temperatures by any means, but not below freezing, either. Another factor affecting life on the extreme ends of the planet would have been prolonged periods of darkness and cooler temperatures still during winter.
The idea that dinosaurs could have lived at the relatively cool poles of the Earth was virtually unthinkable 50 years ago because of the widespread belief that their metabolism was more like that of cold-blooded modern reptiles than that of birds or mammals. The work of paleontologists to collect fossils in these regions during the past 20 years has led to a change of thinking. Not only did dinosaurs colonize the poles by at least 190 million years ago, but fragmentary remains have now been identified there for nearly all major branches of the dinosaur evolutionary tree, with the notable and interesting exception of sauropods.
"Polar" dinosaurs—as defined by paleontologists Thomas Rich (Museum of Victoria, Australia); Roland Gangloff (University of Alaska); and William Hammer (Augustana College, Illinois)—are defined as those dinosaurs "that lived within the polar circles of their time, not necessarily within the current polar circles." This means that their fossils are sometimes found on landmasses that have since drifted to the fringes of the ancient polar circles, such as Australia and New Zealand in the south and Alaska, Russia, and the Canadian Yukon in the north.
One of the most productive fossil sites for polar dinosaurs is found on the banks of the Colville River in northeast Alaska. Evidence of polar dinosaurs is usually scant. The most complete dinosaur from any polar locality was found at the Matanuska Formation of south-central Alaska in 1995 and consisted of about a quarter of the animal. Bones from the foot, limbs, and tail were enough to convince paleontologist Anne Pasch of the University of Alaska that what had been found was a specimen of a hadrosaur—a duck-billed dinosaur. Dating of the fossil sediments was made easier by the presence of sea creatures such as ammonites, the age of which can be fixed at about 90 million years ago. That's about 10 million years older than other hadrosaurs from North America and suggests that the Alaskan duckbill might be linked to early hadrosaurs from Asia. Finding the bones of terrestrial animals in marine deposits is not so unusual, although the specimens are usually spotty and incomplete. Pasch speculated that the hadrosaur died on the shore of an ancient ocean and "floated out to sea, probably as a bloated carcass. It eventually sank to the bottom and was buried in fine black mud along with shells and other sea creatures" found with its bones.
Most main groups of dinosaurs are represented by fossil evidence from regions that would have been polar during the Mesozoic. In the Northern Hemisphere, compelling evidence of hadrosaurs, horned dinosaurs, large and small theropods, sauropods, and plated and armored dinosaurs is found in the northern reaches of Alaska, Canada, and Siberia. In the Southern Hemisphere, polar dinosaurs are represented by specimens of armored dinosaurs, small ornithopods, hadrosaurs, prosauropods, sauropods, large and small theropods, and possible horned dinosaurs. These remains are not alone and are often found with fossils of other creatures from the polar neighborhood such as crocodilians, amphibians, pterosaurs, birds, and small mammals.
The presence of polar dinosaurs cannot be denied but raises questions about their lifestyle, metabolism, and thermoregulation. Chief among these questions is whether the presence of dinosaurs in cooler regions of the world is evidence of a more active, energetic thermoregulatory metabolism, or whether there was more to the story. Australian paleontologists Thomas Rich and Patricia Vickers Rich, who have done much to advance knowledge of polar dinosaurs of the Southern Hemisphere, speculate that some small dinosaurs may have actually burrowed into the ground to protect themselves against the chill of the long winter nights. Another plausible idea is that some dinosaur groups migrated to the south toward the poles during seasonally warmer periods and returned toward the Equator when the winter chill set in. That would have been possible given the configuration of connected landmasses during much of the Mesozoic. Another clue to dinosaur survival in colder climates might also be related to the possible use of feathers as a form of body insulation—at least for small theropods for which such body coverings have been found.
The dominance of gymnosperms diminished during the Cretaceous Period with the rise of flowering plants—the angiosperms. Angiosperms were characterized by a new reproductive life cycle that quickened their ability to grow, breed, and disperse. Angiosperms utilize flowers to attract pollinating animals, such as insects, and also encase their seeds in fruits that, when separated from the plant, can aid in dispersal of seeds. The innovations of flowers to aid in pollination and fruits to protect the embryo contributed to the rapid success and spread of flowering plants. The oldest known angiosperm dates from the Early Cretaceous Period, about 125 million years ago. Found in the same Chinese fossil region that contains exciting fossils of early marsupial and placental mammals, feathered dinosaurs, and birds, this primitive early example of a flowering plant had paired stamens (the pollen-producing parts of a plant) and multiseeded fruits, although it may have lacked flowers.
[Figure: A fossil of a gymnosperm]
[Figure: A fossil of an angiosperm]
Angiosperms quickly became a favorite food of dinosaurs. Flowering plants reproduced much more quickly than gymnosperms; angiosperms constantly replenished a landscape that could become heavily browsed by hungry dinosaurs. The ability of angiosperms to grow rapidly and disperse widely allowed them to diversify into hundreds of species by the end of the Cretaceous Period. The importance of the angiosperms to the evolution of dinosaurs cannot be overstated. The Cretaceous Period is known for an explosion of new lines of ornithischians—duck-billed, armored, and horned dinosaurs in particular—that developed specialized adaptations for chewing and consuming the wider assortment of vegetation available to them, including the recently evolved flowering plants and gymnosperms. Those special anatomical features will be explored in Section Three of Last of the Dinosaurs, in the discussion of ornithischians of the Cretaceous Period.
[Figure: An early magnolia from the Cretaceous Period]
Readers' Questions
Trilobites: These were arthropods that thrived in the oceans, evolving various forms and sizes. They were highly diverse and abundant during the Paleozoic Era.
Brachiopods: These were marine invertebrates that had a hinged shell, similar to a clam. They were also quite diverse during this era.
Ammonites: These were cephalopods that had a coiled shell. They were present in the oceans and underwent significant diversification during the Paleozoic Era.
Plants: Land plants, such as ferns and early seed-bearing plants, originated and diversified during the Paleozoic Era. Some plants, like the giant tree-like Lepidodendron, were particularly prominent.
Fish: Jawed fish, such as the armored fishes (placoderms) and cartilaginous fishes, were prevalent in the Paleozoic Era. These early fish laid the foundation for the evolution of more advanced fish groups.
Reptiles: The earliest reptiles began to emerge during the Paleozoic Era, alongside reptile-like amphibians such as the temnospondyls.
Insects: Insects evolved and diversified during the Paleozoic Era. They underwent significant adaptations, including the development of wings, leading to the emergence of various insect groups.
Corals: Various types of coral groups, including rugose and tabulate corals, thrived in the oceans during the Paleozoic Era. These reefs played a vital ecological role.
These are just a few examples of the organisms that arose during the Paleozoic Era. The era was marked by a significant diversification and evolution of life, with numerous new species emerging in various habitats.
Angiosperms, or flowering plants, are believed to have evolved during the Early Cretaceous period, around 140 to 130 million years ago. This evolutionary development was a significant event in the history of plants, as angiosperms quickly became the dominant plant group on Earth.
Fruits evolved approximately 80-100 million years ago, during the Late Cretaceous period. This coincides with the diversification of flowering plants (angiosperms) and the development of various mechanisms for seed dispersal, including the formation of fruits.
leanna
Which of the following is not a reason for the success of the dinosaurs?
Flowers have been on Earth for approximately 140 million years. The oldest known flower fossils date back to the Early Cretaceous period, which began around 140 million years ago. However, it is believed that flowers may have evolved even earlier than that, possibly during the late Jurassic period.
It is not possible to definitively answer this question because the fossil record from the Cretaceous period (145 million to 66 million years ago) is not extensive enough to provide enough evidence to know exactly which flowers existed during that time. However, it is possible that some of the flowers that existed during this time period included water lilies, magnolias, buttercups, and possibly even roses.
semira
Which time period saw the rise of flowering plants, mammals, large reptiles, and marsupials?
Breakup of Pangea: The supercontinent Pangea began to break up during the Jurassic Period, creating the modern continents of Europe, Africa, North America, South America, Australia, and Antarctica.
Appearance of Dinosaurs: Dinosaurs first appeared during the Triassic period but flourished during the Jurassic. This period saw the evolution of many dinosaur species, including Diplodocus and Brachiosaurus.
Rise of Modern Birds: The first true birds evolved during the Jurassic period, giving rise to most modern forms of birds.
Expansion of Flora: The Jurassic Period saw the proliferation of lush vegetation, which provided food for the dinosaurs and other animals living at the time.
Development of Marine Reptiles: The Jurassic Period also saw the evolution of large marine reptiles such as ichthyosaurs and plesiosaurs.
Volcanic Activity: During the Jurassic Period, there was an increase in volcanic activity, which contributed to global climate change.
Global climatic change gave gymnosperms an advantage over ferns by providing them with a better chance of survival. Gymnosperms are able to withstand much more extreme conditions than ferns, including hotter climates and drier soils. They also have evolved more efficient strategies for water conservation, which have allowed them to thrive in areas that have recently become drier during periods of global climatic change. This gives them an advantage over ferns, which are much less well-adapted to dry environments.
claudia
What advantage did gymnosperms have over more primitive types of plants?
Gymnosperms had a reproductive advantage over more primitive plants because their reproductive structures were not dependent upon water for fertilization. Also, the presence of an exposed seed allowed for the easy dispersal of their progeny by wind, animals, and other means. The tougher, woody nature of their stems and leaves gave them a competitive advantage over more delicate primitive plants.
Fruits have played a major role in the success of angiosperms. Fruits provide an effective means of dispersal for many plants, enabling them to spread their seeds far and wide. Fruits also attract animals, which help spread seeds. Fruits provide a nutritious food source for animals, and in exchange, animals carry the seeds of the fruits to other places, pollinating and dispersing the plants in the process. This form of seed dispersal is known as endozoochory. Additionally, some fruits provide the energy for seed germination, helping to promote the successful establishment of new plants. Finally, fruits can provide insulation for the developing seeds and embryos of many plants, protecting them from the elements and helping them to survive.
Angiosperms are more successful than gymnosperms because they have various adaptations that have enabled them to thrive in many different environments and to become the most widespread and diverse group of plants on Earth. Angiosperms are able to reproduce more efficiently than gymnosperms, producing seeds that are enclosed by fruit. This has allowed angiosperms to disperse their seeds more widely than gymnosperms, increasing their chances of surviving and thriving in new environments. Additionally, angiosperms can cross-pollinate and self-pollinate, which allows them to produce an abundance of fertile seeds. Finally, angiosperms have adaptations that enable them to survive in a greater range of climates and habitats than gymnosperms. These adaptations include leaves that can adapt to different light intensities, roots that can absorb nutrients from different depths in the soil, and flowers that can attract different pollinators.
The dominant organisms living in the Jurassic Period included primitive amphibians, reptiles such as dinosaurs, giant insects, and early birds. Other organisms included small mammals, fish, and plants, including ferns, cycads, conifers, and ginkgos.
The exact height of trees in prehistoric times is impossible to determine, as there are no surviving tree ring records from that period. However, the fossilized leaves, cones, and seeds from some of the earliest trees suggest that they typically reached heights of 30-40 feet.
Ferns do not grow as tall as gymnosperms and angiosperms because they lack the vascular system that is responsible for transporting water, minerals, and other resources up the stem. Without a strong vascular system, ferns rely on moisture in the environment to survive, which limits their growth.
Elias
What type of organisms are important in the evolution of flowering plants?
Pollinators such as bees, butterflies, and hummingbirds play an important role in the evolution of flowering plants. These organisms feed on the nectar and pollen of flowers and, in doing so, help to spread and cross-pollinate the plants, thus allowing them to evolve and develop. In addition, many plants rely on mutualistic relationships with insects and other animals for pollination, dispersal of seeds, and dispersal of nutrients. Without these helpful organisms, the evolution of flowering plants may have been drastically different.
A notable plant of the Cretaceous period was cycads, a type of ancient gymnosperm. They were typically tropical plants which grew in humid environments and had prominent trunks and fern-like leaves. Other plants that were common during this period included conifers, ferns, ginkgoes, and a variety of flowering plants.
Drawing flowers during the Cretaceous period would be quite a challenge! It is difficult to know exactly what flowers may have looked like during that time period since the fossil record is not very clear. However, you could use some creative license to draw flowers that would fit in with the flora of the time.
Some suggestions for creating a "Cretaceous Period" flower drawing would be to use colors from plants that grew in the Cretaceous, such as greens, yellows, and oranges. You could also use shapes that are more angular or geometric, as these shapes were more common for plants of the time. Additionally, you could research images of fossilized flowers from the Cretaceous period to get a better idea of the types of shapes and color palettes that were present then.
It is difficult to answer this question definitively due to a lack of detailed records from this period. However, some flowering plants which are thought to have existed during the Dark Ages include: Primrose, Rose, Mistletoe, Sundew, Buttercup, Daisy, Anemone, Bellflower, Columbine, Primula, and Violets.
Some of the plants that thrived during the Cretaceous period include conifers, cycads, ginkgoes, and various types of flowering plants. Other plants that flourished during this era include ferns, grasses, horsetails, and club mosses.
The Cretaceous period saw the development of conifer and angiosperm trees, as well as ferns, cycads, ginkgos, and early flowering plants. Grasses and herbaceous plants were also abundant. Marine algae and other aquatic plant life were also diverse.
The late Paleozoic Era (approximately 300-250 million years ago) was a time of great biological diversity, and amphibians were no exception. During this period, the Earth experienced global warming, continental drift, and high levels of oxygen in the atmosphere, all of which provided a conducive environment for the diversification of amphibians and other species. The increased availability of niches and resources, as well as the lack of competition from other vertebrate groups, likely allowed amphibians to diversify quickly.
The Cretaceous period lasted from 145 million to 66 million years ago. During this period, the Earth experienced major climate changes and the mass extinction of the dinosaurs. Other important events that occurred during this period include the diversification of flowering plants, the evolution of birds, the emergence of modern mammal groups, and the continued breakup of Pangaea. In addition, the seas were teeming with life and coral reefs were widespread.
The Cretaceous period was home to a wide variety of animals, including the dinosaurs that were the dominant land animals of the time. Other animals that lived during this period include ammonites, crocodiles, turtles, pterosaurs, and marine reptiles such as mosasaurs and plesiosaurs. Many modern animal families also developed during this time, including snakes, lizards, birds, and mammals.
Some dinosaurs may have eaten fruit, but mainly the herbivorous dinosaurs. Some of the fruit available to them in the Mesozoic Era would have included conifer cones, cycads, and ginkgo. The fossilized remains of these ancient fruits have been found in dinosaur droppings.
It is impossible to provide an exact number, as the fossil record of flowering plants is incomplete. However, it is estimated that there were hundreds of species of flowering plants during the Cretaceous period.
During the Cretaceous period, plants stabilized oxygen levels by producing more oxygen during photosynthesis and absorbing more carbon dioxide during respiration. Additionally, faster rates of weathering of rocks due to increased temperatures and increased rainfall contributed to greater amounts of oxygen being released into the atmosphere. This oxygenation of the atmosphere allowed for the proliferation of numerous species that require oxygen for respiration.
The plants that grew in the Cretaceous period (145.5 to 65.5 million years ago) include conifers, ginkgo trees, cycads, pteridophytes, flowering plants, and ferns. Also present were various species of mosses, liverworts, and horsetails.
Fossil evidence suggests that true fruits were largely absent during the Jurassic period, since flowering plants had not yet evolved; the seed-bearing structures available to animals instead included conifer cones and the fleshy seeds of cycads and ginkgos.
|
Fruits evolved approximately 80-100 million years ago, during the Late Cretaceous period. This coincides with the diversification of flowering plants (angiosperms) and the development of various mechanisms for seed dispersal, including the formation of fruits.
leanna
Which of the following is not a reason for the success of the dinosaurs?
Flowers have been on Earth for approximately 140 million years. The oldest known flower fossils date back to the Early Cretaceous period, which began around 140 million years ago. However, it is believed that flowers may have evolved even earlier than that, possibly during the late Jurassic period.
It is not possible to definitively answer this question because the fossil record from the Cretaceous period (145 million to 66 million years ago) is not extensive enough to provide enough evidence to know exactly which flowers existed during that time. However, it is possible that some of the flowers that existed during this time period included water lilies, magnolias, buttercups, and possibly even roses.
Which time period saw the rise of flowering plants, mammals, large reptiles, and marsupials?
Breakup of Pangea: The supercontinent Pangea began to break up during the Jurassic Period, creating the modern continents of Europe, Africa, North America, South America, Australia, and Antarctica.
Appearance of Dinosaurs: Dinosaurs first appeared during the Triassic period but flourished during the Jurassic. This period saw the evolution of many dinosaur species, including Diplodocus and Brachiosaurus.
Rise of Modern Birds: The first true birds evolved during the Jurassic period, giving rise to most modern forms of birds.
Expansion of Flora: The Jurassic Period saw the proliferation of lush vegetation, which provided food for the dinosaurs and other animals living at the time.
Development of Marine Reptiles: The Jurassic Period also saw the evolution of large marine reptiles such as ichthyosaurs and plesiosaurs.
Volcanic Activity:
|
yes
|
Paleobotany
|
Did flowers exist in the age of dinosaurs?
|
yes_statement
|
"flowers" "existed" in the "age" of "dinosaurs".. there were "flowers" during the "age" of "dinosaurs".
|
https://www.dw.com/en/rethinking-evolution-butterflies-came-first-flowers-came-second/a-42110188
|
Oldest butterfly fossils ever discovered – DW – 01/11/2018
|
Oldest butterfly fossils ever discovered
Researchers have unearthed the earliest known fossil evidence of an insect of the butterfly order. It reveals that these animals fluttered about 200 million years ago – even before flowering plants came along.
Imagine a brightly yellow-colored butterfly landing on the back of a Tyrannosaurus rex. Well it may have happened. But, okay, this picture is only partly true.
Indeed, it is true that animals of the order Lepidoptera – including butterflies and moths – co-existed with the dinosaurs.
The latest evidence shows that these flying insects evolved some 200 to 250 million years ago, in the Triassic period. That's also when the first dinosaurs appeared.
That means that moths and butterflies are much older than previously thought.
So, what's wrong with the picture we painted at the start?
Well, back then butterflies were unlikely to have been a bright yellow. It's thought they were not brightly colored at all. They were a browny-grey, more like a moth than a butterfly of today.
"The colorful butterflies that we think are so beautiful came much later," Bas van de Schootbrugge of Utrecht University, Netherlands, tells DW. "They evolved only after dinosaurs became extinct."
A living representative of a primitive moth. A new study suggests they existed 200 million years ago. Image: Hossein Rajaei
But no matter which color the fluttering insects were – the discovery that they are older than previously thought has a major implication.
It challenges the modern scientific theory that flowering plants and the insects that feed on them evolved together.
Butterfly scales reveal the truth
Until recently, scientists believed that moths and butterflies, whose descendants still exist today, came into being 130 million years ago – that's the age of the oldest fossil evidence ever found.
But when Bas van de Schootbrugge and his team had a closer look at a drilled core from northern Germany dating back about 200 million years, they were stunned.
The core contained multiple fossilized wing scales showing characteristics of living moths, with a tubular mouthpart used for sucking and feeding.
"It took us quite a while to realize what these particles actually were," says van de Schootbrugge. It took several years of dedicated work by his student, Timo van Eldijk, and the support of insect-fossil experts, to solve the puzzle.
Butterfly and moth fossils often remain unrecognized, because they are so rare, says Sonja Wedmann, paleontologist and research associate at the Messel pit near Frankfurt am Main, Germany.
Wedmann was not involved in the van de Schootbrugge / van Eldijk study. But she explains that because butterfly wings are good at repelling water "they tend to float on a lake instead of sinking to the ground."
And that's why such particles tend to quickly decompose rather than fossilizing and remaining intact for millennia to come.
Moths and butterflies belong to the same animal order. Moths are simply not as colorful. Image: picture-alliance/Arco Images/J. Fieber
The concept of co-evolution on trial
Until now scientists believed that flowering plants evolved first, and were only then followed by butterflies and moths.
It seems logical.
But the Dutch group's new interpretation of the research suggests the theory is wrong: they say the feeders came first and the plants came second. It's a revolutionary thought that leads Sonja Wedmann to call the study "really special."
"How flowering plants evolved from seed-producing plants is one of the big mysteries in evolution," says van de Schootbrugge, "and our discovery opens new doors to this mystery."
While flowering plants didn't exist, butterflies and moths probably fed on some non-flowering, seed-producing plants, such as conifers.
Maybe – and van de Schootbrugge stresses that it's just a maybe – these plants tried to lock in their nectar, which they need for reproduction, to stop the butterflies and moths feeding on it, and that's possibly how, or why, they evolved into something completely new: flowering plants.
That certainly would be a revolution in evolution theory. It would show that – at least in evolution – not everything that appears logical is necessarily true.
|
Oldest butterfly fossils ever discovered
Researchers have unearthed the earliest known fossil evidence of an insect of the butterfly order. It reveals that these animals fluttered about 200 million years ago – even before flowering plants came along.
Imagine a brightly yellow-colored butterfly landing on the back of a Tyrannosaurus rex. Well it may have happened. But, okay, this picture is only partly true.
Indeed, it is true that animals of the order Lepidoptera – including butterflies and moths – co-existed with the dinosaurs.
The latest evidence shows that these flying insects evolved some 200 to 250 million years ago, in the Triassic period. That's also when the first dinosaurs appeared.
That means that moths and butterflies are much older than previously thought.
So, what's wrong with the picture we painted at the start?
Well, back then butterflies were unlikely to have been a bright yellow. It's thought they were not brightly colored at all. They were a browny-grey, more like a moth than a butterfly of today.
"The colorful butterflies that we think are so beautiful came much later," Bas van de Schootbrugge of Utrecht University, Netherlands, tells DW. "They evolved only after dinosaurs became extinct."
A living representative of a primitive moth. A new study suggests they existed 200 million years ago. Image: Hossein Rajaei
But no matter which color the fluttering insects were – the discovery that they are older than previously thought has a major implication.
It challenges the modern scientific theory that flowering plants and the insects that feed on them evolved together.
Butterfly scales reveal the truth
Until recently, scientists believed that moths and butterflies, whose descendants still exist today, came into being 130 million years ago – that's the age of the oldest fossil evidence ever found.
But when Bas van de Schootbrugge and his team had a closer look at a drilled core from northern Germany dating back about 200 million years, they were stunned.
|
no
|
Paleobotany
|
Did flowers exist in the age of dinosaurs?
|
yes_statement
|
"flowers" "existed" in the "age" of "dinosaurs".. there were "flowers" during the "age" of "dinosaurs".
|
https://www.murdoch.edu.au/news/articles/a-thousand-flowers-found-to-be-from-before-the-dinosaurs
|
Flowers found to have existed before dinosaurs
|
Flowers found to have existed before dinosaurs
A flower encased in sap that’s part of a family of more than one thousand species worldwide has been found to be 260 million years old.
The discovery paints a picture of flowering meadows seeded by fires millions of years ago, as charcoal was also discovered with the flower fossils.
The Phylica flower at the centre of the research is part of the buckthorn family, or Rhamnaceae, which are found throughout Africa, Australia, North and South America, Asia and Europe. Rhamnaceae produce dry fruits and are closely related to the Vitaceae family, which includes grapevines.
Dr Tianhua He, a molecular geneticist at Murdoch University, and Professor Byron Lamont, an evolutionary ecologist at Curtin University, made the discovery after analysing an ancient flower fossil from Myanmar.
They analysed the DNA of modern-day Phylica and living related plants throughout the world, and used the fossils of Phylica to calibrate the molecular clock for the rest of the family, which set the age of the family at a remarkable 260 million years.
“Fossils provide important evidence for evolution and the adaptation of plants and animals to their environments and an essential tool to estimate the timing of plant evolution,” said Dr He.
“The Phylica fossils preserved in the Burmese amber provided us with an excellent and unique tool to examine the speed of evolution of Rhamnaceae and - with a bit of caution - extrapolate to all flowering plants.”
“Discoveries like this can fundamentally change our view on the origin and evolution of plants on earth and prompt us to re-examine the evolution of life on earth.”
Professor Lamont said the age of the fossil was well beyond botanist expectations.
“It was previously believed that the Phylica evolved about 20 million years ago, and the Buckthorn family less than 100 million years ago, so these new dates mean the family of flowering plants are much older than botanists could have possibly ever imagined,” Professor Lamont said.
“Since the Buckthorn family is not even considered an old member of the flowering plants, this means that flowering plants evolved more than 300 million years ago, a staggering 50 million years before the rise of the dinosaurs.
“Flowering plants are the basis of our entire existence, producing oxygen, food, timber, medicine, habitats for animals and the parks and gardens where we live. Thus, it is of great interest to know how long they have been on earth and under what circumstances they arose.”
|
Flowers found to have existed before dinosaurs
A flower encased in sap that’s part of a family of more than one thousand species worldwide has been found to be 260 million years old.
The discovery paints a picture of flowering meadows seeded by fires millions of years ago, as charcoal was also discovered with the flower fossils.
The Phylica flower at the centre of the research is part of the buckthorn family, or Rhamnaceae, which are found throughout Africa, Australia, North and South America, Asia and Europe. Rhamnaceae produce dry fruits and are closely related to the Vitaceae family, which includes grapevines.
Dr Tianhua He, a molecular geneticist at Murdoch University, and Professor Byron Lamont, an evolutionary ecologist at Curtin University, made the discovery after analysing an ancient flower fossil from Myanmar.
They analysed the DNA of modern-day Phylica and living related plants throughout the world, and used the fossils of Phylica to calibrate the molecular clock for the rest of the family, which set the age of the family at a remarkable 260 million years.
“Fossils provide important evidence for evolution and the adaptation of plants and animals to their environments and an essential tool to estimate the timing of plant evolution,” said Dr He.
“The Phylica fossils preserved in the Burmese amber provided us with an excellent and unique tool to examine the speed of evolution of Rhamnaceae and - with a bit of caution - extrapolate to all flowering plants.”
“Discoveries like this can fundamentally change our view on the origin and evolution of plants on earth and prompt us to re-examine the evolution of life on earth.”
Professor Lamont said the age of the fossil was well beyond botanist expectations.
“It was previously believed that the Phylica evolved about 20 million years ago, and the Buckthorn family less than 100 million years ago, so these new dates mean the family of flowering plants are much older than botanists could have possibly ever imagined,” Professor Lamont said.
|
yes
|
Paleobotany
|
Did flowers exist in the age of dinosaurs?
|
yes_statement
|
"flowers" "existed" in the "age" of "dinosaurs".. there were "flowers" during the "age" of "dinosaurs".
|
https://www.reuters.com/article/us-fossil-orchid/orchids-likely-decorated-dinosaur-stomping-grounds-idUSN2936357320070830
|
Orchids likely decorated dinosaur stomping grounds | Reuters
|
Orchids likely decorated dinosaur stomping grounds
CHICAGO (Reuters) - Fossilized orchid pollen on the back of a bee preserved in amber has offered the first evidence that these delicate flowers existed around the time of the dinosaurs, U.S. researchers said on Wednesday.
Biologists at Harvard University said the ancient pollen, found in a clump on a now-extinct worker bee, means orchids are much older than previously thought.
While orchids are the largest and most diverse plant family on Earth, they have been largely absent from the fossil record, said Harvard researcher Santiago Ramirez, whose study appears in the journal Nature.
Orchids package their pollen in structures called pollinia, which consist of masses of pollen grains. It was that structure that caught Ramirez’ eye.
“It is very distinct. Because of its shape and form, we were able to identify it right away,” Ramirez said in a telephone interview.
“Orchids were missing in the fossil record until this was found,” he added.
The absence of orchids from the fossil record has fueled debate over their age, with estimates ranging from 26 million to 112 million years ago.
The amber-encased bee was first discovered in the Dominican Republic by a private collector in 2000. It made its way to Harvard’s Museum of Comparative Zoology in 2005.
The worker bee specimen is 15 to 20 million years old, but Ramirez and colleagues used its payload of pollen to analyze the orchid species. They used a molecular-clock method of analysis to estimate the age of the orchid family, which they date to about 80 million years ago.
The dinosaurs’ extinction occurred about 65 million years ago.
Ramirez said the find not only helps resolve a debate over the age of the orchid but it provides the first direct evidence of ancient pollination.
“This is one of the first fossil observations in which you can find both the pollinator and the plant together,” he said.
|
Orchids likely decorated dinosaur stomping grounds
CHICAGO (Reuters) - Fossilized orchid pollen on the back of a bee preserved in amber has offered the first evidence that these delicate flowers existed around the time of the dinosaurs, U.S. researchers said on Wednesday.
Biologists at Harvard University said the ancient pollen, found in a clump on a now-extinct worker bee, means orchids are much older than previously thought.
While orchids are the largest and most diverse plant family on Earth, they have been largely absent from the fossil record, said Harvard researcher Santiago Ramirez, whose study appears in the journal Nature.
Orchids package their pollen in structures called pollinia, which consist of masses of pollen grains. It was that structure that caught Ramirez’ eye.
“It is very distinct. Because of its shape and form, we were able to identify it right away,” Ramirez said in a telephone interview.
“Orchids were missing in the fossil record until this was found,” he added.
The absence of orchids from the fossil record has fueled debate over their age, with estimates ranging from 26 million to 112 million years ago.
The amber-encased bee was first discovered in the Dominican Republic by a private collector in 2000. It made its way to Harvard’s Museum of Comparative Zoology in 2005.
The worker bee specimen is 15 to 20 million years old, but Ramirez and colleagues used its payload of pollen to analyze the orchid species. They used a molecular-clock method of analysis to estimate the age of the orchid family, which they date to about 80 million years ago.
The dinosaurs’ extinction occurred about 65 million years ago.
Ramirez said the find not only helps resolve a debate over the age of the orchid but it provides the first direct evidence of ancient pollination.
“This is one of the first fossil observations in which you can find both the pollinator and the plant together,” he said.
|
yes
|
Paleobotany
|
Did flowers exist in the age of dinosaurs?
|
yes_statement
|
"flowers" "existed" in the "age" of "dinosaurs".. there were "flowers" during the "age" of "dinosaurs".
|
https://www.sciencedaily.com/releases/2007/08/070829143719.htm
|
First Orchid Fossil Puts Showy Blooms At Some 80 Million Years Old ...
|
First Orchid Fossil Puts Showy Blooms At Some 80 Million Years Old
Biologists at Harvard University have identified the ancient fossilized remains of a pollen-bearing bee as the first hint of orchids in the fossil record, a find they say suggests orchids are old enough to have co-existed with dinosaurs. Their analysis indicates orchids arose some 76 to 84 million years ago, much longer ago than many scientists had estimated.
Biologists at Harvard University have identified the ancient fossilized remains of a pollen-bearing bee as the first hint of orchids in the fossil record, a find they say suggests orchids are old enough to have co-existed with dinosaurs.
Their analysis, published recently in the journal Nature, indicates orchids arose some 76 to 84 million years ago, much longer ago than many scientists had estimated. The extinct bee they studied, preserved in amber with a mass of orchid pollen on its back, represents some of the only direct evidence of pollination in the fossil record.
"Since the time of Darwin, evolutionary biologists have been fascinated with orchids' spectacular adaptations for insect pollination," says lead author Santiago R. Ramírez, a researcher in Harvard's Museum of Comparative Zoology and Department of Organismic and Evolutionary Biology. "But while orchids are the largest and most diverse plant family on Earth, they have been absent from the fossil record."
The fossil record lacks evidence of orchids, Ramírez says, because they bloom infrequently and are concentrated in tropical areas where heat and humidity prevent fossilization. Their pollen is dispersed only by animals, not wind, and disintegrates upon contact with the acid used to extract pollen from rocks.
Orchids' ambiguous fossil record has fed a longstanding debate over their age, with various scientists pegging the family at anywhere from 26 to 112 million years old. Those arguing for a younger age have often pointed to the lack of a meaningful fossil record as evidence of the family's youth, along with the highly specialized flowers' need for a well-developed array of existing pollinators to survive. Proponents of an older age for orchids had cited their ubiquity around the world, their close evolutionary kinship with the ancient asparagus family, and their bewildering diversity: Some 20,000 to 30,000 species strong, the showy plants comprise some 8 percent of all flowering species worldwide.
"Our analysis places orchids far toward the older end of the range that had been postulated, suggesting the family was fairly young at the time of the extinction of the dinosaurs some 65 million years ago," Ramírez says. "It appears, based on our molecular clock analyses, that they began to flourish shortly after the mass extinction at the so-called 'K/T boundary' between the Cretaceous and Tertiary periods, which decimated many of Earth's species."
Orchids, unlike most flowering plants, package pollen in unique structures called pollinia, which consist of relatively large masses of compact pollen grains. The 15- to 20-million-year-old specimen of a worker bee carrying orchid pollinia, recovered by a private collector in the Dominican Republic in 2000, came to the attention of Ramírez and his colleagues at Harvard's Museum of Comparative Zoology in 2005. While this particular species of stingless bee, Proplebeia dominicana, is now extinct, the scientists' analysis of the shape and configuration of its cargo of pollen places it firmly within one of five extant subfamilies of orchids.
The specimen is one of just a few fossils known to illustrate directly a plant-pollinator association. The specific placement of the pollen on the bee's back not only confirms the grains were placed through active pollination -- as opposed to a random encounter with an orchid -- but also sheds light on the exact type and shape of orchid flower that produced the pollen tens of millions of years ago.
By applying the so-called molecular clock method, the scientists also estimated the age of the major branches of the orchid family. To their surprise, they found that certain groups of modern orchids, including the highly prized genus Vanilla, evolved very early during the rise of the plant family.
"This result is puzzling and fascinating at the same time because modern species of Vanilla orchids are locally distributed throughout the tropical regions of the world," says Ramírez. "But we know that tropical continents began to split apart about 100 million years ago, and thus our estimates of 60 to 70 million years for the age of Vanilla suggest that tropical continents were still experiencing significant biotic exchange much after their dramatic split."
Ramírez's co-authors on the Nature paper are Charles R. Marshall and Naomi E. Pierce, both professors in the Department of Organismic and Evolutionary Biology in Harvard's Faculty of Arts and Sciences; Barbara Gravendeel of the Nationaal Herbarium Nederland in Leiden, The Netherlands; and Rodrigo B. Singer of the Universidade Federal do Rio Grande do Sul in Porto Alegre, Brazil. Their work was funded by the National Science Foundation, the Fulbright scholar program, and the Barbour Fund at Harvard's Museum of Comparative Zoology.
Harvard University. (2007, August 30). First Orchid Fossil Puts Showy Blooms At Some 80 Million Years Old. ScienceDaily. Retrieved August 14, 2023 from www.sciencedaily.com/releases/2007/08/070829143719.htm
|
First Orchid Fossil Puts Showy Blooms At Some 80 Million Years Old
Biologists at Harvard University have identified the ancient fossilized remains of a pollen-bearing bee as the first hint of orchids in the fossil record, a find they say suggests orchids are old enough to have co-existed with dinosaurs. Their analysis indicates orchids arose some 76 to 84 million years ago, much longer ago than many scientists had estimated.
Biologists at Harvard University have identified the ancient fossilized remains of a pollen-bearing bee as the first hint of orchids in the fossil record, a find they say suggests orchids are old enough to have co-existed with dinosaurs.
Their analysis, published recently in the journal Nature, indicates orchids arose some 76 to 84 million years ago, much longer ago than many scientists had estimated. The extinct bee they studied, preserved in amber with a mass of orchid pollen on its back, represents some of the only direct evidence of pollination in the fossil record.
"Since the time of Darwin, evolutionary biologists have been fascinated with orchids' spectacular adaptations for insect pollination," says lead author Santiago R. Ramírez, a researcher in Harvard's Museum of Comparative Zoology and Department of Organismic and Evolutionary Biology. "But while orchids are the largest and most diverse plant family on Earth, they have been absent from the fossil record."
The fossil record lacks evidence of orchids, Ramírez says, because they bloom infrequently and are concentrated in tropical areas where heat and humidity prevent fossilization. Their pollen is dispersed only by animals, not wind, and disintegrates upon contact with the acid used to extract pollen from rocks.
Orchids' ambiguous fossil record has fed a longstanding debate over their age, with various scientists pegging the family at anywhere from 26 to 112 million years old.
|
yes
|
Paleobotany
|
Did flowers exist in the age of dinosaurs?
|
yes_statement
|
"flowers" "existed" in the "age" of "dinosaurs".. there were "flowers" during the "age" of "dinosaurs".
|
https://bgr.com/science/dinosaur-flowers-research-fragrance-scents/
|
Dinosaurs were sniffing flowers millions of years before humans ...
|
Dinosaurs were sniffing flowers millions of years before humans even existed
You don’t normally think of dinosaurs doing the kinds of things that modern animals might do — like taking a nap in a grassy field or playing with each other as youngsters — but new research suggests that they may at least have been taking the time to stop and smell the roses.
Tech. Entertainment. Science. Your inbox.
A new study led by Oregon State University entomologist George Poinar Jr reveals that ancient flowering plants had the same kind of fragrant scents as many flowers do today. In fact, Poinar even goes so far as to suggest that such pleasant scents might have played a role in attracting dinosaurs to certain areas.
The study focused on long-fossilized examples of flowering plants encased in hardened tree sap. This material, which is called amber, has the ability to preserve both animals and plants for incredibly long periods of time. By studying multiple examples of now-extinct flowers dating back as far as 100 million years, the researchers were able to determine that the same fragrant compounds that tickle our fancy today were present in ancient flowers from the Cretaceous period, when dinosaurs were abundant.
“I bet some of the dinosaurs could have detected the scents of these early flowers,” George Poinar explained. “In fact, floral essences from these early flowers could even have attracted these giant reptiles.”
Just as they are today, the scents were likely used by the plants to attract pollinators. Bugs were prevalent at the time, and many plants would have relied on them for pollination just like today’s flowers.
“It’s obvious flowers were producing scents to make themselves more attractive to pollinators long before humans began using perfumes to make themselves more appealing to other humans,” Poinar says.
Whether or not towering dinosaurs would have had any measurable interest in the fragrant flowers of the age is little more than a guess, but modern animals are regularly observed taking in the sweet scents, so it’s more likely than not.
|
Dinosaurs were sniffing flowers millions of years before humans even existed
You don’t normally think of dinosaurs doing the kinds of things that modern animals might do — like taking a nap in a grassy field or playing with each other as youngsters — but new research suggests that they may at least have been taking the time to stop and smell the roses.
Tech. Entertainment. Science. Your inbox.
A new study led by Oregon State University entomologist George Poinar Jr reveals that ancient flowering plants had the same kind of fragrant scents as many flowers do today. In fact, Poinar even goes so far as to suggest that such pleasant scents might have played a role in attracting dinosaurs to certain areas.
The study focused on long-fossilized examples of flowering plants encased in hardened tree sap. This material, which is called amber, has the ability to preserve both animals and plants for incredibly long periods of time. By studying multiple examples of now-extinct flowers dating back as far as 100 million years, the researchers were able to determine that the same fragrant compounds that tickle our fancy today were present in ancient flowers from the Cretaceous period, when dinosaurs were abundant.
“I bet some of the dinosaurs could have detected the scents of these early flowers,” George Poinar explained. “In fact, floral essences from these early flowers could even have attracted these giant reptiles.”
Just as they are today, the scents were likely used by the plants to attract pollinators. Bugs were prevalent at the time, and many plants would have relied on them for pollination just like today’s flowers.
“It’s obvious flowers were producing scents to make themselves more attractive to pollinators long before humans began using perfumes to make themselves more appealing to other humans,” Poinar says.
Whether or not towering dinosaurs would have had any measurable interest in the fragrant flowers of the age is little more than a guess, but modern animals are regularly observed taking in the sweet scents, so it’s more likely than not.
|
yes
|
Paleobotany
|
Did flowers exist in the age of dinosaurs?
|
yes_statement
|
"flowers" "existed" in the "age" of "dinosaurs".. there were "flowers" during the "age" of "dinosaurs".
|
https://www.fossils-facts-and-finds.com/mesozoic_era.html
|
The Mesozoic Era: Facts on the climate, continents, plants and animals
|
The Mesozoic Era The Age of Dinosaurs
The Mesozoic Era: here is all you need to know about the climate, continents, plants and animals of the Mesozoic, including the dinosaurs, the first mammals and flowers.
The Mesozoic begins where the upheavals of the Permian Extinctions end. A mass extinction at the end of the Permian Period had eliminated most of the species of life that had existed throughout the Paleozoic Era. The Mesozoic is sometimes called the Age of Dinosaurs because it came to be dominated by dinosaurs and reptiles, but much more happened during its three periods.
The Continents
Toward the end of the Paleozoic Era, the land that would become Europe and Asia slammed into North America. By the time of the Mesozoic Era, the supercontinent Pangea had formed. It was roughly the shape of a “C” and contained most of the earth's land. The huge land mass protected the Tethys Ocean, which lay across tropical latitudes. Pangea and the Tethys were ringed by the Panthalassic Ocean.
Climate During The Mesozoic Era
The temperatures, both on land and in the ocean, were much higher than during the Paleozoic, and climates were more tropical in nature. Despite this, the seas were lower, leaving different types of land masses for life to deal with. Overall, the Mesozoic Era was drier than the Paleozoic Era: there were more deserts and less marshland.
Within the three periods of the Mesozoic Era (Triassic, Jurassic and Cretaceous) there were times of wide temperature and seasonal variation.
Life Recovers From The Permian Extinctions
It took most of the first and second periods of the Mesozoic, the Triassic and the Jurassic periods, for the diversity of species to recover and achieve some balance. While plant species had survived somewhat better than animals over the Permian Extinction, new types of plants developed to survive the changing conditions.
The warmer, drier conditions of the Mesozoic required new reproductive methods in plants. Ferns and gymnosperms flourished. Their reproductive methods allowed for good protection of the spores or seeds, which would have to endure periods of drought before growing into young plants.
Marine Life
The survivors of the Permian Extinction had very little competition. Corals, mollusks and fish dominated life in the oceans. Some reptiles took to the water to become the first air-breathing hunters in the oceans. They took on a variety of forms, including the mosasaurs and the plesiosaurs. These marine reptiles rose to the top of the food chain.
The Rise of The Reptiles and Dinosaurs
The dominant land animals at the end of the Permian Period were the synapsids. This group of animals is characterized by having a single hole on each side of the skull behind the eye. They are sometimes called mammal-like reptiles. This group nearly became extinct at the close of the Permian Period.
The animals that developed in the Mesozoic needed new body types to survive the extremes of temperature and moisture. Amphibians developed respiratory mechanisms that allowed them to live in or out of the water for extended periods of time. But it was the reptiles that were better adapted to the warmer, drier conditions. They developed thick, leathery skin on both their own bodies and their eggs. The reptiles thrived, dominating the landscape in both size and numbers. They are known as diapsids. Diapsids are characterized by having two openings on each side of the skull behind the eyes.
The dinosaurs evolved from these reptiles and were themselves diapsids. During the Jurassic and Cretaceous Periods the dinosaurs ruled the earth.
Both plants and animals reached giant proportions during the Mesozoic. During the 180 million years of the Era, reptiles lived on land, in seas, and in the air. Small mammals, although not significant during the time, did exist during this era.
Mass Extinction Ends The Mesozoic Era
Another mass extinction occurred at the end of the Cretaceous Period, bringing an end to the dinosaurs and the tropical forests. This extinction, while not as broad and devastating as that at the end of the Permian, had the effect of eliminating a way of life that has not been replicated.
Most researchers agree that the Mesozoic Era ended at least in part due to the impact of an asteroid.
|
The Mesozoic Era The Age of Dinosaurs
The Mesozoic Era, here is all you need to know about the climate, continents, plants and animals of the Mesozoic, including the dinosaurs, the first mammals and flowers.
The Mesozoic begins where the upheavals of the Permian Extinctions end. A mass extinction at the end of the Permian Period had eliminated most of the species of life that had existed throughout the Paleozoic Era. The era is sometimes called the Age of Dinosaurs because it came to be dominated by dinosaurs and reptiles, but much more happened during the three periods of the Mesozoic.
The Continents
Toward the end of the Paleozoic Era, the land that would become Europe and Asia slammed into North America. By the time of the Mesozoic Era, the supercontinent Pangea had formed. It was roughly the shape of a “C” and contained most of the Earth's land. The huge land mass protected the Tethys Ocean, which lay across tropical latitudes. Pangea and the Tethys were ringed by the Panthalassic Ocean.
Climate During The Mesozoic Era
Temperatures, both on land and in the ocean, were much higher than during the Paleozoic, and climates were more tropical in nature. Despite this, the seas were lower, leaving different types of land masses for life to deal with. Overall, the Mesozoic Era was drier than the Paleozoic Era, with more deserts and less marshland.
Within the three periods of the Mesozoic Era (Triassic, Jurassic and Cretaceous) there were times of wide temperature and seasonal variation.
Life Recovers From The Permian Extinctions
It took most of the first and second periods of the Mesozoic, the Triassic and the Jurassic periods, for the diversity of species to recover and achieve some balance. While plant species had survived the Permian Extinction somewhat better than animals, new types of plants developed to survive the changing conditions.
The warmer, drier conditions of the Mesozoic required new reproductive methods in plants.
|
yes
|
Paleobotany
|
Did flowers exist in the age of dinosaurs?
|
yes_statement
|
"flowers" "existed" in the "age" of "dinosaurs".. there were "flowers" during the "age" of "dinosaurs".
|
https://www.livescience.com/29231-cretaceous-period.html
|
Cretaceous period: Animals, plants and extinction event | Live Science
|
The Cretaceous period was the last and longest segment of the Mesozoic era. It lasted approximately 79 million years, from the minor extinction event that closed the Jurassic period about 145 million years ago to the Cretaceous-Paleogene (K-Pg) extinction event 66 million years ago. The name comes from "creta," the Latin word for chalk, because of widespread chalk deposits dating from the period, according to the National Park Service.
In the early Cretaceous, the continents were in very different positions than they are today, according to the Australian Museum. Sections of the supercontinent Pangaea were drifting apart. The Tethys Ocean still separated the northern continent Laurasia from the southern continent Gondwana. The North and South Atlantic were still closed, although the Central Atlantic had begun to open up in the Late Jurassic period. By the middle of the Cretaceous period, ocean levels were much higher; most of the landmasses we are familiar with were underwater. By the end of the period, the continents were much closer to their modern configuration. Africa and South America had assumed their distinctive shapes. But India had not yet collided with Asia, and Australia was still part of Antarctica.
Parts of supercontinent Pangaea eventually drifted apart to become the continents we know today. (Image credit: USGS)
Cretaceous period plants
One hallmark of the Cretaceous period was the development and radiation of flowering plants, or angiosperms, which "rapidly diversified," according to the National Park Service. This radiation "gave rise suddenly and mysteriously to exquisite angiosperm diversity in the mid-Cretaceous," an evolutionary development that troubled Charles Darwin, who saw evolution happening much more slowly, according to a review in the journal Proceedings of the Royal Society B. Darwin proposed that flowering plants must have started developing long before the Cretaceous, potentially on "a lost island or continent," William E. Friedman, an evolutionary biologist at Harvard University, wrote in the American Journal of Botany in 2009. However, the Cretaceous-era burst of floral development may instead reveal how evolution can happen very quickly, Friedman wrote.
Though Darwin's lost continent never showed up, some flowering plants may have appeared in the Jurassic, recent research has shown.
However, Jurassic-era flowering plants would have been uncommon and may also have been evolutionary links between older plants that resembled angiosperms and the real thing, found in the Cretaceous, researchers said. Scientists generally place "the oldest uncontested" angiosperm fossils at about 125 million to 130 million years ago, in the early Cretaceous, according to the Brooklyn Botanic Garden. These include plants of the genera Archaefructus and Montsechia, which show the first evidence of ovaries in plants but may have lacked petals.
Since Darwin, scientists have thought that pollinating insects, such as bees and wasps, played a key role in the Cretaceous explosion of flowering plants, according to recent and foundational research. This is frequently cited as an example of co-evolution, according to the Washington Native Plant Society.
The mid-Cretaceous saw abundant populations of both insects and flowering plants, and recent finds finally caught Cretaceous-era insect pollinators frozen in the act. In 2019, scientists reported in the journal Proceedings of the National Academy of Sciences the first direct fossil evidence of insect pollination in the Cretaceous: a tumbling flower beetle, Angimordella burmitina, preserved in amber since the mid-Cretaceous, 99 million years ago, and covered with pollen grains. The beetle sports several body parts specialized for feeding on flowers, including pollen-feeding mouthparts, and the pollen grains have traits, like clumping characteristics, associated with insect pollination, the researchers reported.
And in a 2020 paper published in the journal BioOne, scientists reported on the oldest bee found bearing pollen, the 100 million-year-old Discoscapa apicula. Also found encased in amber, this insect shared some traits with modern bees, such as hind legs laden with pollen, and some traits with wasps, such as its wing vein features.
Thanks to pollinating insects, flowering plants had tremendous advantages over plants that spread pollen only by wind, spurring the explosion of angiosperms, according to Illinois Extension at the University of Illinois Urbana-Champaign. Competition for insect attention probably facilitated the relatively rapid success and diversification of the flowering plants, "lead[ing] to the development of many different size, shapes, colors and fragrances of flowers we see today,” including the production of nectar to attract hungry bugs. As diverse flower forms lured insects to pollinate them, insects adapted to different ways of gathering nectar and moving pollen, thus setting up the intricate co-evolutionary systems found to this day.
A few finds over the decades have estimated that some pollinating insects arrived before flowering plants. In 2009, researchers found that 11 species of scorpionflies present starting in the middle Jurassic boasted the elongated mouthparts and pollen-centric diets characteristic of pollinators, as reported in the journal Science. These likely pollinating insects, however, fed on nonflowering plants, or gymnosperms, "long before the similar and independent coevolution of nectar-feeding flies, moths and beetles on angiosperms," the study said. These critters went extinct during the Cretaceous, around the time of the "global gymnosperm-to-angiosperm turnover," the researchers said. In the 1990s, researchers reported that bee- or wasp-like insects built hive-like nests in what is now called the Petrified Forest in Arizona, dating back to more than 200 million years ago. However, later re-evaluations found that the structures lacked defining characteristics of bee nests and most likely came from beetle larva chambers or other creatures, as reported in the journal Palaeogeography, Palaeoclimatology, Palaeoecology. That evaluation of the structures "eliminates them as evidence that decouples bee origins from the Cretaceous origin of angiosperms," the scientists wrote.
Some evidence shows that dinosaurs ate flowering plants. Two dinosaur coprolites (fossilized excrements) discovered in Utah contain fragments of angiosperm wood, according to an unpublished study presented at the 2015 Society of Vertebrate Paleontology annual meeting. An Early Cretaceous ankylosaur was found with fossilized angiosperm fruit in its gut.
However, for the most part, evidence suggests that dinosaurs ignored angiosperms in the Cretaceous, maintaining a diet focused on ferns and conifers, University of Bristol researchers said in 2021, summarizing their work on angiosperm evolution in the journal New Phytologist. The shape of some teeth from Cretaceous animals suggests that the herbivores grazed on leaves and twigs, said Betsy Kruk, formerly a volunteer researcher at the Field Museum in Chicago and now a principal investigator and project manager at Material Culture Consulting, a California-based company that consults on compliance services including archaeology and paleontology.
Cretaceous period animals
The Cretaceous was an age of reptiles. Dinosaurs dominated the land, while marine reptiles like the mosasaurs — which could span 56 feet (17 meters) — swam the oceans. Pterosaurs plied the skies, including the largest flying animal ever, Quetzalcoatlus, whose wingspan could stretch to 36 feet (11 m).
The largest-ever land predator, the famous Tyrannosaurus rex, also reigned during the Cretaceous. By the end of the Jurassic, some large sauropods, such as Apatosaurus and Diplodocus, had gone extinct. But other giant sauropods, including the titanosaurs, flourished, especially toward the end of the Cretaceous, Kruk said. Titanosaurs were the most successful sauropods of the period, and the past two decades have seen a "boom" in titanosaur discoveries, according to the journal Nature Ecology & Evolution.
Large herds of herbivorous ornithischians also thrived during the Cretaceous. These included Iguanodon (which belongs to the same group as duck-billed dinosaurs, also known as hadrosaurs), Ankylosaurus, and the ceratopsians, like Triceratops. Duck-billed dinosaurs were the most common type of ornithischians, a group of mostly herbivorous dinosaurs with bird-like hips, according to the Cal Poly Humboldt Natural History Museum. Theropods, including T. rex, continued as apex predators until the end of this period.
During the Cretaceous, more ancient birds took flight, joining the pterosaurs in the air. Experts have long debated the origin of flight. According to the so-called "trees-down" theory, small reptiles may have evolved flight from gliding behaviors. The "ground-up" hypothesis posits that flight evolved from the ability of small theropods to leap high to grasp prey or evade predators. Early research suggested that feathers evolved from elongated scales whose primary function, at least at first, was thermoregulation. They could be moved to absorb more solar heat in cool conditions and provide protection from the sun when it was hot, according to a 1975 study in The Quarterly Review of Biology. More recent studies suggest that signaling and tactile sensing may also have played a role in the evolution of these feather precursors, according to a study in the International Journal of Organic Evolution.
About the size of a crow, Confuciusornis is the earliest known bird to have a true beak. It lived about 25 million years after Archaeopteryx, but like its early ancestor, it still had clawed fingers. (Image credit: Eduard Solà Vázquez)
The earliest fossilized bird, Archaeopteryx, swooped through Late Jurassic skies 150 million years ago, though it resembled small dinosaurs more than the birds we see today, according to the Australian Museum. A variety of birds arrived on the scene soon afterward sporting a range of features that could be more like those of current birds. Some of these creatures evolved into birds of the modern type by the late Cretaceous, which means that "bird-like dinosaurs, primitive birds and early modern birds all co-existed" for a stretch of the Cretaceous, the Australian Museum added.
One Cretaceous-era bird, Confuciusornis sanctus, lived about 125 million years ago. It was a crow-size bird with a modern, toothless beak, unlike the fanged Archaeopteryx; claws similar to those of modern, tree-dwelling birds; and flight-worthy feathers. A study of pigment-storing cell organelles in C. sanctus in the journal Science found that these ancient birds likely sported dark feathers on their torsos, with lighter-colored wings, according to the California Academy of Sciences. Iberomesornis, a contemporary of Archaeopteryx only the size of a sparrow, was capable of flight and may have been an insectivore.
Sea creatures also thrived during the Cretaceous, with many marine groups reaching their peak levels of diversity, according to the Cal Poly Humboldt museum. Beyond the mosasaurs, ocean sea life included mollusks that built reefs comparable to today's coral reefs, along with sharks, lobsters and crabs, sand dollar-like creatures known as echinoids, and a type of bony fish known as ray-finned fish (named for their fins formed from spines draped with webs of skin).
Though reptiles ruled the Cretaceous world, early mammals did exist at the time. Traditionally, scientists have viewed mammal evolution as constrained by the dominant dinosaurs; mammals couldn't evolve many species types, because dinosaurs occupied most niches, this view suggests. Only after the mass extinction that killed off all nonavian dinosaurs could mammals "radiate," or evolve into many diverse forms. But mammals may have gone through radiations even during the dinosaur age, including the Jurassic and Cretaceous periods, a 2019 study in the journal Trends in Ecology and Evolution found. And a 2021 study in the journal Current Biology found that evolutionary suppression of therians, the ancestors of today's mammals, may have come from not only dinosaurs, but also ancient relatives of mammals known as mammaliaforms.
How did the Cretaceous period end?
About 66 million years ago, nearly all large vertebrates and many tropical invertebrates became extinct in one of Earth's five great mass extinction events, according to former University of California, Davis, Earth and planetary sciences professor Richard Cowen. Scientists have linked that mass extinction with an enormous asteroid that collided with Earth in what is now Mexico. The event killed off all nonavian dinosaurs, all pterosaurs (which were not dinosaurs) and many marine reptiles, including mosasaurs and plesiosaurs, as well as many early mammals and "a host of amphibians, birds, reptiles and insects," according to the American Museum of Natural History in New York. An estimated three-quarters of species alive at the time met their end.
Geologists call this mass die-off the K-Pg extinction event because it marks the boundary between the Cretaceous and Paleogene periods; the "K" is from "Kreide," the German word for Cretaceous. The event was formerly known as the Cretaceous-Tertiary (K-T) event, but the group that sets standards for geologic nomenclature now considers Tertiary out of date with current science, according to the National Commission for Stratigraphy Belgium.
The Chicxulub (CHEEK-sheh-loob) crater in the Yucatán Peninsula, which spans more than 110 miles (180 kilometers) in diameter, is the likely landing spot of the dinosaur-killing asteroid. This crater dates to within 33,000 years of the K-Pg event, Live Science previously reported. "We've shown the impact and the mass extinction coincided as much as one can possibly demonstrate with existing dating techniques," Paul Renne, lead scientist in that study and a geochronologist and director of the Berkeley Geochronology Center in California, previously told Live Science.
Scientists had first associated the K-Pg extinction with an extraterrestrial impact decades ago, however. In 1979, a geologist discovered that the thin layer of clay separating the Cretaceous and Paleogene periods contained high concentrations of iridium. This element is rare on Earth but much more common in meteorites and asteroids, according to the Lunar and Planetary Science Institute. Other researchers found "shocked quartz," a form of the mineral created under intense pressure, and tiny, glass-like globes called tektites that form from droplets of melted rock. Both of these geological features form when an extraterrestrial object strikes Earth with great force.
Research in 2020 found that the object that carved out Chicxulub hit Earth at the most destructive possible angle, Live Science previously reported. The 7.5-mile-wide (12 km) asteroid, traveling at about 27,000 mph (43,000 km/h), would have vaporized rocks, sending 325 gigatons of sulfur and 435 gigatons of carbon dioxide into the atmosphere in the form of pulverized rock and sulfuric acid droplets, researchers estimated.
When the asteroid collided with Earth, its impact would have triggered a 10.1-magnitude earthquake, sent a shock wave with "hurricane-force winds" rippling across the Americas, and spawned a 330- to 820-foot-high (100 to 250 m) tsunami, according to a 2021 University of Maryland course. As debris ejected by the impact fell back to Earth, the material would have cooked the atmosphere to 2,700 degrees Fahrenheit (1,482 degrees Celsius), painting the sky red for several hours and igniting forest fires across the planet, Live Science reported in 2013. The heat pulse was like a global broiler oven, not only burning vegetation, but also cooking living things unable to burrow or dive, the researchers said.
Illustration of the K-Pg extinction event at the end of the Cretaceous Period. A ten-kilometre-wide asteroid or comet is entering the Earth's atmosphere as dinosaurs, including T. rex, look on. (Image credit: ROGER HARRIS/SCIENCE PHOTO LIBRARY via Getty Images)
"This rain of hot dust raised global temperatures for hours after the impact and cooked alive animals that were too large to seek shelter," Kruk said. "Small animals that could shelter underground, underwater, or perhaps in caves or large tree trunks, may have been able to survive this initial heat blast."
Rock vaporized by the asteroid likely stayed in the atmosphere, blocking part of the sun's rays for months or years, according to the University of Maryland. This may even have lasted as long as 16 years, with a 30-year recovery period. With less sunlight, plants would have died, with consequences traveling up the food chain to herbivores dependent on plants and carnivores dependent on those herbivores, according to the Natural History Museum in London.
"Smaller, omnivorous terrestrial animals — like mammals, lizards, turtles or birds — may have been able to survive as scavengers feeding on the carcasses of dead dinosaurs, fungi, roots and decaying plant matter, while smaller animals with lower metabolisms were best able to wait the disaster out," she said.
The last phase of the asteroid fallout, greenhouse warming, may have lasted around 100,000 years, according to the University of Maryland. Carbonate rocks oxidized by the impact would have released large amounts of the greenhouse gas carbon dioxide (CO2) into the atmosphere. Just before the impact, a series of what may have been the second-largest volcanic eruptions ever on land went off at the Deccan Traps in western India, according to the American Museum of Natural History. These regional catastrophes had already spewed tremendous levels of CO2 and so likely combined with the asteroid fallout to heat up the planet once the sun-obscuring dust settled, according to the University of Maryland.
Cretaceous period climate
Even before global cataclysms spurred global warming, the world was a warmer place during the Cretaceous period than it is today, according to Climate Policy Watcher. The poles were cooler than the lower latitudes, but "overall, things were warmer," Kruk told Live Science. Fossils of tropical plants and ferns support this idea, she said. Warm ocean currents, unfrozen poles and levels of CO2 that were relatively high even before the extinction event all combined to produce a hot planet, according to Climate Policy Watcher.
Animals in the Cretaceous lived all over, even in colder areas. For instance, hadrosaur fossils dating to the Late Cretaceous were uncovered in Alaska. And in a 2020 paper in the journal Nature, scientists reported on a temperate rainforest in Antarctica dating to the mid-Cretaceous.
Additional resources
Learn about and visit a cast of a titanosaur, the gigantic sauropods of the Cretaceous era, at the American Museum of Natural History. Explore the Cretaceous-Paleogene extinction and Earth's four other mass extinction events, including the possibility that we've entered a new one, at the Natural History Museum in London. Discover how pollinators and flowers have co-evolved at the New England Primate Conservancy. Read Richard Cowen's essay on the K-Pg mass extinction event and other topics in his book "History of Life" (Blackwell Scientific Publications, 2000).
This article was originally written by Live Science contributor Mary Bagley with contributions from Live Science editor Laura Geggel.
Originally published on Live Science on January 8, 2016 and updated on July 26, 2022.
Michael Dhar is a science editor and writer based in Chicago. He has an MS in bioinformatics from NYU Tandon School of Engineering, an MA in English literature from Columbia University and a BA in English from the University of Iowa. He has written about health and science for Live Science, Scientific American, Space.com, The Fix, Earth.com and others and has edited for the American Medical Association and other organizations.
|
Two dinosaur coprolites (fossilized excrements) discovered in Utah contain fragments of angiosperm wood, according to an unpublished study presented at the 2015 Society of Vertebrate Paleontology annual meeting. An Early Cretaceous ankylosaur was found with fossilized angiosperm fruit in its gut.
However, for the most part, evidence suggests that dinosaurs ignored angiosperms in the Cretaceous, maintaining a diet focused on ferns and conifers, University of Bristol researchers said in 2021, summarizing their work on angiosperm evolution in the journal New Phytologist. The shape of some teeth from Cretaceous animals suggests that the herbivores grazed on leaves and twigs, said Betsy Kruk, formerly a volunteer researcher at the Field Museum in Chicago and now a principal investigator and project manager at Material Culture Consulting, a California-based company that consults on compliance services including archaeology and paleontology.
Cretaceous period animals
The Cretaceous was an age of reptiles. Dinosaurs dominated the land, while marine reptiles like the mosasaurs — which could span 56 feet (17 meters) — swam the oceans. Pterosaurs plied the skies, including the largest flying animal ever, Quetzalcoatlus, whose wingspan could stretch to 36 feet (11 m).
The largest-ever land predator, the famous Tyrannosaurus rex, also reigned during the Cretaceous. By the end of the Jurassic, some large sauropods, such as Apatosaurus and Diplodocus, had gone extinct. But other giant sauropods, including the titanosaurs, flourished, especially toward the end of the Cretaceous, Kruk said. Titanosaurs were the most successful sauropods of the period, and the past two decades have seen a "boom" in titanosaur discoveries, according to the journal Nature Ecology & Evolution.
Large herds of herbivorous ornithischians also thrived during the Cretaceous.
|
yes
|
Evolution
|
Did humans evolve from apes?
|
yes_statement
|
"humans" evolved from "apes". "humans" share a common ancestor with "apes"
|
http://humanorigins.si.edu/education/frequently-asked-questions
|
Frequently Asked Questions | The Smithsonian Institution's Human ...
|
Ask us a question! Use the contact form to ask your question about our work and you may see your question -- and answer -- on this website, or in the 'Evolution FAQ' kiosk in the David H. Koch Hall of Human Origins.
How does evolution work?
To survive, living things adapt to their surroundings. Occasionally a genetic variation gives one member of a species an edge. That individual passes the beneficial gene on to its descendants. More individuals with the new trait survive and pass it on to their descendants. If many beneficial traits arise over time, a new species—better equipped to meet the challenges of its environment—evolves.
What do scientists mean when they call evolution a theory?
Like gravity and plate tectonics, evolution is a scientific theory. In science, a theory is the most logical explanation for how a natural phenomenon works. It is well tested and supported by abundant evidence. It means quite the opposite from our informal use of the word theory, which implies an untested opinion or guess. As a scientific theory, evolution enables scientists to make predictions and drives investigations that lead to new kinds of observable evidence.
How does evolution explain complex organisms like humans?
Evolution doesn’t happen all at once, especially in complex organisms such as human beings. Modern humans are the product of evolutionary processes that go back more than 3.5 billion years, to the beginnings of life on Earth. We became human gradually, evolving new physical traits and behaviors on top of those inherited from earlier primates, mammals, vertebrates, and the very oldest living organisms.
How are humans and monkeys related?
Humans and monkeys are both primates. But humans are not descended from monkeys or any other primate living today. We do share a common ape ancestor with chimpanzees. It lived between 8 and 6 million years ago. But humans and chimpanzees evolved differently from that same ancestor. All apes and monkeys share a more distant relative, which lived about 25 million years ago.
Did humans evolve in a straight line, one species after another?
Human evolution, like evolution in other species, did not proceed in a straight line. Instead, a diversity of species diverged from common ancestors, like branches on a bush. Our species, Homo sapiens, is the only survivor. But there were many times in the past when several early human species lived at the same time.
Isn’t evolution controversial among scientists?
Evolution is the cornerstone of modern biology. There is no scientific controversy about whether evolution occurred or whether it explains the history of life on Earth. As in all fields of science, knowledge about evolution continues to increase through research and serious debate. For example, scientists continue to investigate the details of how evolution occurred and to refine exactly what happened at different times.
How do scientists know the age of fossils?
Scientists have developed more than a dozen methods for determining the age of fossils, human artifacts, and the sediments in which such evidence is found. These methods can date objects millions of years old. What’s more, the methods can be tested against one another to provide a highly reliable record of the past. Read more about dating methods here.
How do scientists know what past climates were like?
Among the major sources of evidence are sediment cores from the ocean bottom. They preserve the fossils of tiny organisms called foraminifera. By measuring oxygen in the skeletons of these organisms, scientists can calculate fluctuations in temperature and moisture over millions of years. Some of the most dramatic climate fluctuations in all of Earth’s history occurred during the period of human evolution.
What has been discovered about evolution since Darwin?
A lot! Since Darwin died in 1882, findings from many fields have confirmed and greatly expanded on his ideas. We’ve learned that Earth is old enough for all known species to have evolved. We’ve discovered DNA, which confirms that all organisms are related to one another. And we’ve uncovered millions of fossils that provide evidence of how one life form evolved into another over time.
Can the concept of evolution co-exist with religious faith?
Some members of both religious and scientific communities consider evolution to be opposed to religion. But others see no conflict between religion as a matter of faith and evolution as a matter of science. Still others see a much stronger and constructive relationship between religious perspectives and evolution. Many religious leaders and organizations have stated that evolution is the best explanation for the wondrous variety of life on Earth.
How can we reduce the conflict between religion and science?
Many scientists are people of faith who see opportunities for respectful dialogue about the relationship between religion and science. Some people consider science and faith as two separate areas of human understanding that enrich their lives in different ways. This Museum encourages visitors to explore new scientific findings and decide how these findings complement their ideas about the natural world.
What about the gaps in knowledge about human evolution?
In science, gaps in knowledge are the driving force behind the ongoing study of the natural world and how it arose. The science of human origins is a vibrant field in which new discoveries continually add to our understanding of how we became human. You can learn about some of the most recent findings in this exhibit.
How does scientific knowledge about evolution relate to cultural beliefs about our origins?
Societies worldwide express their beliefs through a wide diversity of stories about how humans came into being. These stories reflect the universal curiosity people have about our origins. For millennia, they have played a vital role in helping people develop an identity and an understanding of themselves as well as of their community. This exhibit presents research and findings based on scientific methods that are distinct from these stories.
|
How does evolution explain complex organisms like humans?
Evolution doesn’t happen all at once, especially in complex organisms such as human beings. Modern humans are the product of evolutionary processes that go back more than 3.5 billion years, to the beginnings of life on Earth. We became human gradually, evolving new physical traits and behaviors on top of those inherited from earlier primates, mammals, vertebrates, and the very oldest living organisms.
How are humans and monkeys related?
Humans and monkeys are both primates. But humans are not descended from monkeys or any other primate living today. We do share a common ape ancestor with chimpanzees. It lived between 8 and 6 million years ago. But humans and chimpanzees evolved differently from that same ancestor. All apes and monkeys share a more distant relative, which lived about 25 million years ago.
Did humans evolve in a straight line, one species after another?
Human evolution, like evolution in other species, did not proceed in a straight line. Instead, a diversity of species diverged from common ancestors, like branches on a bush. Our species, Homo sapiens, is the only survivor. But there were many times in the past when several early human species lived at the same time.
Isn’t evolution controversial among scientists?
Evolution is the cornerstone of modern biology. There is no scientific controversy about whether evolution occurred or whether it explains the history of life on Earth. As in all fields of science, knowledge about evolution continues to increase through research and serious debate. For example, scientists continue to investigate the details of how evolution occurred and to refine exactly what happened at different times.
How do scientists know the age of fossils?
Scientists have developed more than a dozen methods for determining the age of fossils, human artifacts, and the sediments in which such evidence is found. These methods can date objects millions of years old. What’s more, the methods can be tested against one another to provide a highly reliable record of the past. Read more about dating methods here.
How do scientists know what past climates were like?
Among the major sources of evidence are sediment cores from the ocean bottom.
|
yes
|
Evolution
|
Did humans evolve from apes?
|
yes_statement
|
"humans" evolved from "apes". "humans" share a common ancestor with "apes"
|
https://www.pbs.org/wgbh/evolution/library/faq/cat02.html
|
Humans did not evolve from monkeys
|
Humans did not evolve from monkeys. Humans are more closely related to modern
apes than to monkeys, but we didn't evolve from apes, either. Humans share a
common ancestor with modern African apes, like gorillas and chimpanzees. Scientists
believe this common ancestor existed
5 to 8 million years ago. Shortly thereafter, the
species diverged into two separate lineages. One of these lineages ultimately evolved into
gorillas and chimps, and the other evolved into early human ancestors called hominids.
Since the earliest hominid species diverged from the ancestor
we share with modern African apes, 5 to 8 million years ago, there
have been at least a dozen different species of these humanlike creatures.
Many of these hominid species are close relatives, but not human ancestors.
Most went extinct without giving rise to other species. Some of the extinct hominids
known today, however, are almost certainly direct ancestors of Homo sapiens.
While the total number of species that existed and the relationships among them
is still unknown, the picture becomes clearer as new fossils are found. Humans
evolved through the same biological processes that govern the evolution of all life
on Earth. See "What is evolution?", "How does natural selection work?", and "How
do organisms evolve?"
A society's culture consists of its accumulated learned behavior.
Human culture is based at least partly on social living and language,
although the ability of a species to invent and use language and engage in
complex social behaviors has a biological basis. Some scientists hypothesize that
language developed as a means of establishing lasting social relationships. Even a
form of communication as casual as gossip provides an ingenious social tool: Suddenly,
we become aware of crucial information that we never would have known otherwise. We
know who needs a favor; who's available; who's already taken; and who's looking for someone --
information that, from an evolutionary perspective, can mean the difference between failure and success.
So, it is certainly possible that evolutionary forces have influenced the development of human capacities
for social interaction and the development of culture. While scientists tend to agree about the general role
of evolution in culture, there is still great disagreement about its specific contributions.
There is great debate about how we are related to Neanderthals,
close hominid relatives who coexisted with our species from more than
100,000 years ago to about 28,000 years ago. Some data suggest that
when anatomically modern humans dispersed into areas beyond Africa,
they did so in small bands, across many different regions. As they did so,
according to this hypothesis, humans merged with and interbred with Neanderthals,
meaning that there is a little Neanderthal in all modern Europeans.
Scientific opinion based on other sets of data, however, suggests that the
movement of anatomically modern humans out of Africa happened on a larger scale.
These movements by the much more culturally and technologically advanced modern humans,
the hypothesis states, would have been difficult for the Neanderthals to accommodate; the modern
humans would have out-competed the Neanderthals for resources and driven them to extinction.
Evolution describes the change over time of all living things from a single
common ancestor. The "tree of life" illustrates this concept. Every branch
represents a species, each connected to other such branches and the rest
of tree as a whole. The forks separating one species from another represent
the common ancestors shared by these species. In the case of the relatedness
of humans and single-celled organisms, a journey along two different paths -- one
starting at the tip of the human branch, the other starting at the tip of a single-celled
organism's branch -- would ultimately lead to a fork near the base of the tree: the common
ancestor shared by these two very different types of organisms. This journey would cross
countless other forks and branches along the way and span perhaps more than a billion years of
evolution, but it demonstrates that even the most disparate creatures are related to one another --
that all life is interconnected.
Life began more than 3 billion years before the Cambrian, and gradually
diversified into a wide variety of single-celled organisms. Toward the end
of the Precambrian, about 570 million years ago, a number of multicelled forms
began to appear in the fossil record, including invertebrates resembling sponges
and jellyfish, and some as-yet-unknown burrowing forms of life. As the Cambrian began,
most of the basic body plans of invertebrates emerged from these Precambrian forms. They
emerged relatively rapidly, in the geological sense -- over 10 million to 25 million years. These
Cambrian forms were not identical to modern invertebrates, but were their early ancestors. Major
groups of living organisms, such as fish, amphibians, reptiles, birds, and mammals, did not appear
until millions of years after the end of the Cambrian Period.
|
Humans did not evolve from monkeys. Humans are more closely related to modern
apes than to monkeys, but we didn't evolve from apes, either. Humans share a
common ancestor with modern African apes, like gorillas and chimpanzees. Scientists
believe this common ancestor existed
5 to 8 million years ago. Shortly thereafter, the
species diverged into two separate lineages. One of these lineages ultimately evolved into
gorillas and chimps, and the other evolved into early human ancestors called hominids.
Since the earliest hominid species diverged from the ancestor
we share with modern African apes, 5 to 8 million years ago, there
have been at least a dozen different species of these humanlike creatures.
Many of these hominid species are close relatives, but not human ancestors.
Most went extinct without giving rise to other species. Some of the extinct hominids
known today, however, are almost certainly direct ancestors of Homo sapiens.
While the total number of species that existed and the relationships among them
is still unknown, the picture becomes clearer as new fossils are found. Humans
evolved through the same biological processes that govern the evolution of all life
on Earth. See "What is evolution?", "How does natural selection work?", and "How
do organisms evolve?"
A society's culture consists of its accumulated learned behavior.
Human culture is based at least partly on social living and language,
although the ability of a species to invent and use language and engage in
complex social behaviors has a biological basis. Some scientists hypothesize that
language developed as a means of establishing lasting social relationships. Even a
form of communication as casual as gossip provides an ingenious social tool: Suddenly,
we become aware of crucial information that we never would have known otherwise. We
know who needs a favor; who's available; who's already taken; and who's looking for someone --
information that, from an evolutionary perspective, can mean the difference between failure and success.
So, it is certainly possible that evolutionary forces have influenced the development of human capacities
for social interaction and the development of culture.
|
no
|
Evolution
|
Did humans evolve from apes?
|
yes_statement
|
"humans" evolved from "apes". "humans" share a common ancestor with "apes"
|
https://en.wikipedia.org/wiki/Chimpanzee%E2%80%93human_last_common_ancestor
|
Chimpanzee–human last common ancestor - Wikipedia
|
In human genetic studies, the CHLCA is useful as an anchor point for calculating single-nucleotide polymorphism (SNP) rates in human populations where chimpanzees are used as an outgroup, that is, as the extant species most genetically similar to Homo sapiens.
The taxon tribe Hominini was proposed to separate humans (genus Homo) from chimpanzees (Pan) and gorillas (genus Gorilla) on the notion that the least similar species should be separated from the other two. However, later evidence revealed that Pan and Homo are closer genetically than are Pan and Gorilla; thus, Pan was referred to the tribe Hominini with Homo. Gorilla now became the separated genus and was referred to the new taxon 'tribe Gorillini'.
Mann and Weiss (1996), proposed that the tribe Hominini should encompass Pan and Homo, grouped in separate subtribes.[1] They classified Homo and all bipedal apes in the subtribe Hominina and Pan in the subtribe Panina. (Wood (2010) discussed the different views of this taxonomy.)[2] A "chimpanzee clade" was posited by Wood and Richmond, who referred it to a tribe Panini, which was envisioned from the family Hominidae being composed of a trifurcation of subfamilies.[3]
Richard Wrangham (2001) argued that the CHLCA species was very similar to the common chimpanzee (Pan troglodytes) – so much so that it should be classified as a member of the genus Pan and be given the taxonomic name Pan prior.[4]
All the human-related genera of tribe Hominini that arose after divergence from Pan are members of the subtribe Hominina, including the genera Homo and Australopithecus. This group represents "the human clade" and its members are called "hominins".[5]
No fossil has yet conclusively been identified as the CHLCA. A possible candidate is Graecopithecus, though this claim is disputed as there is insufficient evidence to support the determination of Graecopithecus as hominin.[6] This would put the CHLCA split in Southeast Europe instead of Africa.[7][8]
Sahelanthropus tchadensis is an extinct hominine with some morphology proposed (and disputed) to be as expected of the CHLCA, and it lived some 7 million years ago – close to the time of the chimpanzee–human divergence. But it is unclear whether it should be classified as a member of the tribe Hominini, that is, a hominin, as an ancestor of Homo and Pan and a potential candidate for the CHLCA species itself, or simply a Miocene ape with some convergent anatomical similarity to many later hominins.
Ardipithecus most likely appeared after the human-chimpanzee split, some 5.5 million years ago, at a time when gene flow may still have been ongoing. It has several shared characteristics with chimpanzees, but due to its fossil incompleteness and the proximity to the human-chimpanzee split, the exact position of Ardipithecus in the fossil record is unclear.[9] It is most likely derived from the chimpanzee lineage and thus not ancestral to humans.[10][11] However, Sarmiento (2010), noting that Ardipithecus does not share any characteristics exclusive to humans and some of its characteristics (those in the wrist and basicranium), suggested that it may have diverged from the common human/African ape stock prior to the human, chimpanzee and gorilla divergence.[12]
The earliest fossils clearly in the human but not the chimpanzee lineage appear between about 4.5 and 4 million years ago, with Australopithecus anamensis.
Few fossil specimens on the "chimpanzee-side" of the split have been found; the first fossil chimpanzee, dating between 545 and 284 kyr (thousand years, radiometric), was discovered in Kenya's East African Rift Valley (McBrearty, 2005).[13] All extinct genera listed in the taxobox are ancestral to Homo, or are offshoots of such. However, both Orrorin and Sahelanthropus existed around the time of the divergence, and so either one or both may be ancestral to both genera Homo and Pan.
Due to the scarcity of fossil evidence for CHLCA candidates, Mounier (2016) presented a project to create a "virtual fossil" by applying digital "morphometrics" and statistical algorithms to fossils from across the evolutionary history of both Homo and Pan, having previously used this technique to visualize a skull of the last common ancestor of Neanderthal and Homo sapiens.[14][15]
An estimate of TCHLCA (the age of the chimpanzee–human last common ancestor) of 10 to 13 million years was proposed in 1998,[note 1] and a range of 7 to 10 million years ago is assumed by White et al. (2009):
In effect, there is now no a priori reason to presume that human-chimpanzee split times are especially recent, and the fossil evidence is now fully compatible with older chimpanzee–human divergence dates [7 to 10 Ma...
Some researchers tried to estimate the age of the CHLCA (TCHLCA) using biopolymer structures that differ slightly between closely related animals. Among these researchers, Allan C. Wilson and Vincent Sarich were pioneers in the development of the molecular clock for humans. Working on protein sequences, they eventually (1971) determined that apes were closer to humans than some paleontologists perceived based on the fossil record.[note 2]
This paradigmatic age persisted in molecular anthropology until the late 1990s. Since the 1990s, the estimate has again been pushed towards more-remote times, because studies have found evidence for a slowing of the molecular clock as apes evolved from a common monkey-like ancestor with monkeys, and humans evolved from a common ape-like ancestor with non-human apes.[19]
A 2016 study analyzed transitions at CpG sites in genome sequences, which exhibit a more clocklike behavior than other substitutions, arriving at an estimate for human and chimpanzee divergence time of 12.1 million years.[20]
A source of confusion in determining the exact age of the Pan–Homo split is evidence of a more complex speciation process than a clean split between the two lineages. Different chromosomes appear to have split at different times, possibly over as much as a 4-million-year period, indicating a long and drawn out speciation process with large-scale gene flow events between the two emerging lineages as recently as 6.3 to 5.4 million years ago, according to Patterson et al. (2006).[21]
Speciation between Pan and Homo occurred over the last 9 million years. Ardipithecus probably branched off of the Pan lineage in the middle Miocene Messinian.[10][11] After the original divergences, there were, according to Patterson (2006), periods of gene flow between population groups and a process of alternating divergence and gene flow that lasted several million years.[21] Some time during the late Miocene or early Pliocene, the earliest members of the human clade completed a final separation from the lineage of Pan – with date estimates ranging from 13 million[16] to as recent as 4 million years ago.[21] The latter date and the argument for gene flow events are rejected by Wakeley.[note 3]
The assumption of late gene flow was in particular based on the similarity of the X chromosome in humans and chimpanzees, suggesting a divergence as late as some 4 million years ago. This conclusion was rejected as unwarranted by Wakeley (2008), who suggested alternative explanations, including selection pressure on the X chromosome in the populations ancestral to the CHLCA.[note 3]
Complex speciation and incomplete lineage sorting of genetic sequences seem to also have happened in the split between the human lineage and that of the gorilla, indicating "messy" speciation is the rule rather than the exception in large primates.[23][24] Such a scenario would explain why the divergence age between the Homo and Pan has varied with the chosen method and why a single point has so far been hard to track down.
^Based on a revision of the divergence of Hominoidea from Cercopithecoidea at more than 50 Mya (previously set at 30 Mya). "Consistent with the marked shift in the dating of the Cercopithecoidea/Hominoidea split, all hominoid divergences receive a much earlier dating. Thus the estimated date of the divergence between Pan (chimpanzee) and Homo is 10–13 MYBP and that between Gorilla and the Pan/Homo lineage ≈17 MYBP."[16]
^"If man and old world monkeys last shared a common ancestor 30 million years ago, then man and African apes shared a common ancestor 5 million years ago..."[18]
^ ab"Patterson et al. suggest that the apparently short divergence time between humans and chimpanzees on the X chromosome is explained by a massive interspecific gene flow event in the ancestry of these two species. However, Patterson et al. do not statistically test their own null model of simple speciation before concluding that speciation was complex, and—even if the null model could be rejected—they do not consider other explanations of a short divergence time on the X chromosome. These include natural selection on the X chromosome in the common ancestor of humans and chimpanzees, changes in the ratio of male-to-female mutation rates over time, and less extreme divergence versions with gene flow. I, therefore, believe that their claim of gene flow is unwarranted."[22]
|
However, Sarmiento (2010), noting that Ardipithecus does not share any characteristics exclusive to humans and some of its characteristics (those in the wrist and basicranium), suggested that it may have diverged from the common human/African ape stock prior to the human, chimpanzee and gorilla divergence.[12]
The earliest fossils clearly in the human but not the chimpanzee lineage appear between about 4.5 and 4 million years ago, with Australopithecus anamensis.
Few fossil specimens on the "chimpanzee-side" of the split have been found; the first fossil chimpanzee, dating between 545 and 284 kyr (thousand years, radiometric), was discovered in Kenya's East African Rift Valley (McBrearty, 2005).[13] All extinct genera listed in the taxobox are ancestral to Homo, or are offshoots of such. However, both Orrorin and Sahelanthropus existed around the time of the divergence, and so either one or both may be ancestral to both genera Homo and Pan.
Due to the scarcity of fossil evidence for CHLCA candidates, Mounier (2016) presented a project to create a "virtual fossil" by applying digital "morphometrics" and statistical algorithms to fossils from across the evolutionary history of both Homo and Pan, having previously used this technique to visualize a skull of the last common ancestor of Neanderthal and Homo sapiens.[14][15]
An estimate of TCHLCA (the age of the chimpanzee–human last common ancestor) of 10 to 13 million years was proposed in 1998,[note 1] and a range of 7 to 10 million years ago is assumed by White et al. (2009):
|
yes
|
Evolution
|
Did humans evolve from apes?
|
yes_statement
|
"humans" evolved from "apes". "humans" share a common ancestor with "apes"
|
https://www.abc.net.au/science/articles/2011/10/04/3331957.htm
|
If evolution is real why are there still monkeys? › Ask an Expert (ABC ...
|
If evolution is real, why are there still monkeys? How can we be descended from monkeys if they are around today?
With all the 'monkeying around' that can go on in the playground or even in the office it seems we could easily be directly descended from monkeys, but our evolutionary relationship is actually much more distant.
"This is a question often encountered by evolutionary biologists," says Dr Paul Willis, palaeontologist and Director of RIAus.
"But the question itself reveals a couple of fundamental misunderstandings about evolution and how it operates", he says.
Firstly, humans did not evolve from monkeys. Instead, monkeys and humans share a common ancestor from which both evolved around 25 million years ago.
This evolutionary relationship is supported both by the fossil record and DNA analysis. A 2007 study showed that humans and rhesus monkeys share about 93% of their DNA. Based on the similarities and differences between the two types of DNA, scientists have estimated that humans and rhesus monkeys diverged from their common ancestor 25 million years ago.
Similarly, the fossil record has identified ancestors common to both humans and monkeys, such as an as yet unnamed primate fossil from Myanmar found in 2009 and dated as living around 37 million years ago.
Our closer cousins
Humans are actually more closely related to chimpanzees and other apes, but DNA evidence again shows that we didn't evolve from them. Chimps and humans share between 98 and 99% of their DNA, suggesting that we shared a common ancestor around 6 million years ago.
Evolution is not linear
"The idea of sharing a common ancestor leads to the second major misunderstanding inherent in the question," says Dr Willis, "that evolution is a linear process where one species evolves into another."
Evolution is really a branching process where one species can give rise to two or more species.
"The fallacy of linear evolution is most clearly illustrated by the analogy of asking; how can I share common grandparents with my cousins if my cousins and my grandparents are still alive?," says Dr Willis.
"The answer is of course that your grandparents had more than one child and they each went off and started their own families creating new branches of your own family tree."
The same thing happens in evolutionary families. A species can split into two or more descendant species and they can split again and again across the generations.
Dr Paul Willis is the Director of RiAus and formerly a presenter on ABC TV's Catalyst program.
|
If evolution is real, why are there still monkeys? How can we be descended from monkeys if they are around today?
With all the 'monkeying around' that can go on in the playground or even in the office it seems we could easily be directly descended from monkeys, but our evolutionary relationship is actually much more distant.
"This is a question often encountered by evolutionary biologists," says Dr Paul Willis, palaeontologist and Director of RIAus.
"But the question itself reveals a couple of fundamental misunderstandings about evolution and how it operates", he says.
Firstly, humans did not evolve from monkeys. Instead, monkeys and humans share a common ancestor from which both evolved around 25 million years ago.
This evolutionary relationship is supported both by the fossil record and DNA analysis. A 2007 study showed that humans and rhesus monkeys share about 93% of their DNA. Based on the similarities and differences between the two types of DNA, scientists have estimated that humans and rhesus monkeys diverged from their common ancestor 25 million years ago.
Similarly, the fossil record has identified ancestors common to both humans and monkeys, such as an as yet unnamed primate fossil from Myanmar found in 2009 and dated as living around 37 million years ago.
Our closer cousins
Humans are actually more closely related to chimpanzees and other apes, but DNA evidence again shows that we didn't evolve from them. Chimps and humans share between 98 and 99% of their DNA, suggesting that we shared a common ancestor around 6 million years ago.
Evolution is not linear
"The idea of sharing a common ancestor leads to the second major misunderstanding inherent in the question," says Dr Willis, "that evolution is a linear process where one species evolves into another. "
Evolution is really a branching process where one species can give rise to two or more species.
"The fallacy of linear evolution is most clearly illustrated by the analogy of asking; how can I share common grandparents with my cousins if my cousins and my grandparents are still alive?," says Dr Willis.
|
no
|
Evolution
|
Did humans evolve from apes?
|
yes_statement
|
"humans" evolved from "apes". "humans" share a common ancestor with "apes"
|
https://www.sci.news/othersciences/anthropology/science-homo-pan-last-common-ancestor-03220.html
|
Study: Last Common Ancestor of Humans and Apes Looked Like ...
|
Study: Last Common Ancestor of Humans and Apes Looked Like Gorilla or Chimpanzee
Humans split from our closest African ape relatives in the genus Pan around six to seven million years ago. We have features that clearly link us with African apes, but we also have features that appear more primitive. This combination calls into question whether the Homo-Pan last common ancestor looked more like modern day chimpanzees and gorillas or an ancient ape unlike any living group. A new study, published online in the Proceedings of the National Academy of Sciences, suggests that the simplest explanation – that the ancestor looked a lot like a chimpanzee or gorilla – is the right one, at least in the shoulder.
“It appears that shoulder shape tracks changes in early human behavior such as reduced climbing and increased tool use,” said the study’s lead author Dr Nathan Young of the University of California, San Francisco.
The shoulders of African apes consist of a trowel-shaped blade and a handle-like spine that points the joint with the arm up toward the skull, giving an advantage to the arms when climbing or swinging through the branches.
In contrast, the scapular spine of monkeys is pointed more downwards.
In humans this trait is even more pronounced, indicating behaviors such as stone tool making and high-speed throwing.
The prevailing question was whether humans evolved this configuration from a more primitive ape, or inherited it from a modern African ape-like creature and later reverted to the downward angle.
Dr Young and his colleagues from Harvard University, American Museum of Natural History, and California Academy of Sciences, tested these competing theories by comparing 3D measurements of fossil shoulder blades of early hominins and modern humans against African apes, orangutan, gibbons and large, tree-dwelling monkeys.
The scientists found that the shoulder shape of anatomically modern Homo sapiens is unique in that it shares the lateral orientation with orangutans and the scapular blade shape with African apes; a primate in the middle.
“Human shoulder blades are odd, separated from all the apes. Primitive in some ways, derived in other ways, and different from all of them,” Dr Young said.
“How did the human lineage evolve and where did the common ancestor to modern humans evolve a shoulder like ours?”
To find out, the researchers analyzed two early human ancestors – Australopithecus afarensis and A. sediba – as well as Homo ergaster and Neanderthals, to see where they fit on the shoulder spectrum.
The results showed that australopiths were intermediate between African apes and humans.
The shoulder of Australopithecus afarensis was more like an African ape's than a human's, and that of Australopithecus sediba was closer to a human's than to an ape's.
This positioning is consistent with evidence for increasingly sophisticated tool use in Australopithecus.
“The mix of ape and human features observed in Australopithecus afarensis’ shoulder supports the notion that, while bipedal, the species engaged in tree climbing and wielded stone tools. This is a primate clearly on its way to becoming human,” explained co-author Dr Zeray Alemseged from the California Academy of Sciences.
These shifts in the shoulder also enabled the evolution of another critical behavior – humans' ability to throw objects with speed and accuracy.
A laterally facing shoulder blade allows humans to store energy in their shoulders, much like a slingshot, facilitating high-speed throwing, an important and uniquely human behavior.
“These changes in the shoulder, which were probably initially driven by the use of tools well back into human evolution, also made us great throwers,” said co-author Dr Neil Roach of Harvard University.
|
In contrast, the scapular spine of monkeys is pointed more downwards.
In humans this trait is even more pronounced, indicating behaviors such as stone tool making and high-speed throwing.
The prevailing question was whether humans evolved this configuration from a more primitive ape, or inherited it from a modern African ape-like creature and later reverted to the downward angle.
Dr Young and his colleagues from Harvard University, American Museum of Natural History, and California Academy of Sciences, tested these competing theories by comparing 3D measurements of fossil shoulder blades of early hominins and modern humans against African apes, orangutan, gibbons and large, tree-dwelling monkeys.
The scientists found that the shoulder shape of anatomically modern Homo sapiens is unique in that it shares the lateral orientation with orangutans and the scapular blade shape with African apes; a primate in the middle.
“Human shoulder blades are odd, separated from all the apes. Primitive in some ways, derived in other ways, and different from all of them,” Dr Young said.
“How did the human lineage evolve and where did the common ancestor to modern humans evolve a shoulder like ours?”
To find out, the researchers analyzed two early human ancestors – Australopithecus afarensis and A. sediba – as well as Homo ergaster and Neanderthals, to see where they fit on the shoulder spectrum.
The results showed that australopiths were intermediate between African apes and humans.
The shoulder of Australopithecus afarensis was more like an African ape's than a human's, and that of Australopithecus sediba closer to a human's than to an ape's.
This positioning is consistent with evidence for increasingly sophisticated tool use in Australopithecus.
“The mix of ape and human features observed in Australopithecus afarensis’ shoulder support the notion that, while bipedal, the species engaged in tree climbing and wielded stone tools. This is a primate clearly on its way to becoming human,” explained co-author Dr Zeray Alemseged from the California Academy of Sciences.
|
yes
|
Evolution
|
Did humans evolve from apes?
|
yes_statement
|
"humans" evolved from "apes". "humans" share a common ancestor with "apes"
|
https://www.britannica.com/science/human-evolution
|
Human evolution | History, Stages, Timeline, Tree, Chart, & Facts ...
|
What is a human being?
Humans are culture-bearing primates classified in the genus Homo, especially the species Homo sapiens. They are anatomically similar and related to the great apes (orangutans, chimpanzees, bonobos, and gorillas) but are distinguished by a more highly developed brain that allows for the capacity for articulate speech and abstract reasoning. Humans display a marked erectness of body carriage that frees the hands for use as manipulative members.
When did humans evolve?
The answer to this question is challenging, since paleontologists have only partial information on what happened when. So far, scientists have been unable to detect the sudden “moment” of evolution for any species, but they are able to infer evolutionary signposts that help to frame our understanding of the emergence of humans. Strong evidence supports the branching of the human lineage from the one that produced great apes (orangutans, chimpanzees, bonobos, and gorillas) in Africa sometime between 6 and 7 million years ago. Evidence of toolmaking dates to about 3.3 million years ago in Kenya. However, the age of the oldest remains of the genus Homo is younger than this technological milestone, dating to some 2.8–2.75 million years ago in Ethiopia. The oldest known remains of Homo sapiens—a collection of skull fragments, a complete jawbone, and stone tools—date to about 315,000 years ago.
Did humans evolve from apes?
No. Humans are one type of several living species of great apes. Humans evolved alongside orangutans, chimpanzees, bonobos, and gorillas. All of these share a common ancestor before about 7 million years ago.
Are Neanderthals classified as humans?
Yes. Neanderthals (Homo neanderthalensis) were archaic humans who emerged at least 200,000 years ago and died out perhaps between 35,000 and 24,000 years ago. They manufactured and used tools (including blades, awls, and sharpening instruments), developed a spoken language, and developed a rich culture that involved hearth construction, traditional medicine, and the burial of their dead. Neanderthals also created art; evidence shows that some painted with naturally occurring pigments. In the end, Neanderthals were likely replaced by modern humans (H. sapiens), but not before some members of these species bred with one another where their ranges overlapped.
human evolution, the process by which human beings developed on Earth from now-extinct primates. Viewed zoologically, we humans are Homo sapiens, a culture-bearing upright-walking species that lives on the ground and very likely first evolved in Africa about 315,000 years ago. We are now the only living members of what many zoologists refer to as the human tribe, Hominini, but there is abundant fossil evidence to indicate that we were preceded for millions of years by other hominins, such as Ardipithecus, Australopithecus, and other species of Homo, and that our species also lived for a time contemporaneously with at least one other member of our genus, H. neanderthalensis (the Neanderthals). In addition, we and our predecessors have always shared Earth with other apelike primates, from the modern-day gorilla to the long-extinct Dryopithecus. That we and the extinct hominins are somehow related and that we and the apes, both living and extinct, are also somehow related is accepted by anthropologists and biologists everywhere. Yet the exact nature of our evolutionary relationships has been the subject of debate and investigation since the great British naturalist Charles Darwin published his monumental books On the Origin of Species (1859) and The Descent of Man (1871). Darwin never claimed, as some of his Victorian contemporaries insisted he had, that “man was descended from the apes,” and modern scientists would view such a statement as a useless simplification—just as they would dismiss any popular notions that a certain extinct species is the “missing link” between humans and the apes. There is theoretically, however, a common ancestor that existed millions of years ago. This ancestral species does not constitute a “missing link” along a lineage but rather a node for divergence into separate lineages.
This ancient primate has not been identified and may never be known with certainty, because fossil relationships are unclear even within the human lineage, which is more recent. In fact, the human “family tree” may be better described as a “family bush,” within which it is impossible to connect a full chronological series of species, leading to Homo sapiens, that experts can agree upon.
The primary resource for detailing the path of human evolution will always be fossil specimens. Certainly, the trove of fossils from Africa and Eurasia indicates that, unlike today, more than one species of our family has lived at the same time for most of human history. The nature of specific fossil specimens and species can be accurately described, as can the location where they were found and the period of time when they lived; but questions of how species lived and why they might have either died out or evolved into other species can only be addressed by formulating scenarios, albeit scientifically informed ones. These scenarios are based on contextual information gleaned from localities where the fossils were collected. In devising such scenarios and filling in the human family bush, researchers must consult a large and diverse array of fossils, and they must also employ refined excavation methods and records, geochemical dating techniques, and data from other specialized fields such as genetics, ecology and paleoecology, and ethology (animal behaviour)—in short, all the tools of the multidisciplinary science of paleoanthropology.
This article is a discussion of the broad career of the human tribe from its probable beginnings millions of years ago in the Miocene Epoch (23 million to 5.3 million years ago [mya]) to the development of tool-based and symbolically structured modern human culture only tens of thousands of years ago, during the geologically recent Pleistocene Epoch (about 2.6 million to 11,700 years ago). Particular attention is paid to the fossil evidence for this history and to the principal models of evolution that have gained the most credence in the scientific community. See the article evolution for a full explanation of evolutionary theory, including its main proponents both before and after Darwin, its arousal of both resistance and acceptance in society, and the scientific tools used to investigate the theory and prove its validity.
|
What is a human being?
Humans are culture-bearing primates classified in the genus Homo, especially the species Homo sapiens. They are anatomically similar and related to the great apes (orangutans, chimpanzees, bonobos, and gorillas) but are distinguished by a more highly developed brain that allows for the capacity for articulate speech and abstract reasoning. Humans display a marked erectness of body carriage that frees the hands for use as manipulative members.
When did humans evolve?
The answer to this question is challenging, since paleontologists have only partial information on what happened when. So far, scientists have been unable to detect the sudden “moment” of evolution for any species, but they are able to infer evolutionary signposts that help to frame our understanding of the emergence of humans. Strong evidence supports the branching of the human lineage from the one that produced great apes (orangutans, chimpanzees, bonobos, and gorillas) in Africa sometime between 6 and 7 million years ago. Evidence of toolmaking dates to about 3.3 million years ago in Kenya. However, the age of the oldest remains of the genus Homo is younger than this technological milestone, dating to some 2.8–2.75 million years ago in Ethiopia. The oldest known remains of Homo sapiens—a collection of skull fragments, a complete jawbone, and stone tools—date to about 315,000 years ago.
Did humans evolve from apes?
No. Humans are one type of several living species of great apes. Humans evolved alongside orangutans, chimpanzees, bonobos, and gorillas. All of these share a common ancestor before about 7 million years ago.
Are Neanderthals classified as humans?
Yes. Neanderthals (Homo neanderthalensis) were archaic humans who emerged at least 200,000 years ago and died out perhaps between 35,000 and 24,000 years ago.
|
no
|
Evolution
|
Did humans evolve from apes?
|
yes_statement
|
"humans" evolved from "apes". "humans" share a common ancestor with "apes"
|
https://evolution-outreach.biomedcentral.com/articles/10.1007/s12052-010-0293-2
|
Why Are There Still Monkeys? | Evolution: Education and Outreach ...
|
Why Are There Still Monkeys?
Abstract
The question “If humans evolved from monkeys, why are there still monkeys?” reveals a widespread and persistent misconception about the process and pattern of evolution. The concept of “cousins” is central to understanding and overcoming this particular obstacle to evolution education.
Popular misconceptions about evolution seem to have a life of their own. Some of the most common ones have persisted for decades, despite all efforts to correct them (Petto and Mead 2008; Mead and Scott 2010a; Mead and Scott 2010b). Some of these ideas seem to be firmly embedded in American culture—or sometimes to have even deeper roots in the Western historical tradition. They are passed on from generation to generation, typically outside of formal and informal educational institutions. These are not necessarily or distinctively creationist misconceptions. Rather, they are simply very common among students and the general public, regardless of what their beliefs may be about whether evolution has occurred. Educators need to be aware of and ready to counter such common misconceptions. Unless they are explicitly pointed out and debunked, they will persist, coexisting with standard concepts of evolution that may be learned in the classroom.
When talking to the general public or school groups about human evolution, we have found that if you discuss evolution or answer questions about it long enough, one particular question will inevitably be asked. That question, of course, is “If humans evolved from monkeys, why are there still monkeys?” This is sometimes phrased in terms of “apes” instead of “monkeys,” but the technical differences between these groups of primates are irrelevant to the significance of the question being asked and the unspoken assumptions that underlie it. Besides, it is not at all clear that most of the public can tell the difference between an ape and a monkey, as illustrated by the numerous portrayals of “monkeys” in the media by chimpanzees. One of us has described this question as “probably the second most common question I get on talk radio” (Scott 2009).¹
When first encountering this question, it may not be clear how to respond. Why shouldn’t there still be monkeys? What is the questioner thinking? After repeatedly confronting this question in various guises, we have recognized that it derives from a mistaken view of evolution shared by many people, including students. Its persistence among the general public suggests that many retain this view even after instruction in evolution.
The “why are there still monkeys” question reflects an interpretation of evolution as a series of progressive steps, from simple to complex. It sees modern organisms, whether living species or other groups, as representatives of the ancestral “stages” or “steps” of evolution, or even as the still-surviving ancestors themselves. This popular misconception often includes the unspoken assumption that the appearance of descendants must coincide with, if not result in, the disappearance of ancestors. One must change into the other, without any coexistence of the two. The unconscious model of evolution that appears to be the default mode for a great many people thus seems to be both linear and anagenetic.
What is missing from this view of evolution is the crucial role of branching or splitting in creating the tree of life (Mead 2009). Perhaps the easiest way to introduce a more accurate model of the relationships of contemporary species is through the analogy of human family categories, and especially that of “cousins.” Students often don’t recognize that they have two classes of relatives in their own families, lineal and collateral. The progressive, ladder model of evolution highlights only the lineal relatives: grandparents, parents, children, grandchildren, etc. However, collateral relatives such as cousins, aunts, nephews, and so forth are also family members. In any large extended family, it is likely that the majority of relatives will be collateral rather than lineal ones. So too in the extended family of all living things.
The notion that living species are cousins, and neither ancestors nor descendants of each other, is one of the most important understandings for students to acquire. This relationship results from the branching nature of evolution and reflects common ancestry. Those who ask why there are still monkeys implicitly conceive of the relationship of monkeys, apes, and humans as a lineal one where monkeys evolve into apes, and apes evolve into humans (Fig. 1a). This is incorrect on many levels, of course. First, it usually pictures living monkeys and apes as part of this linear trajectory, instead of ancient apes and monkeys. Second, ancient monkeys didn’t evolve into apes. Monkeys from the New World are only distantly related to humans and apes, but even Old World monkeys didn’t evolve into apes. Apes and Old World monkeys descended from a more generalized anthropoid common ancestor that lacked the derived traits of either monkeys or apes (McNulty 2010). Sometimes scientists refer to the common ancestors of modern apes and humans as “apes,” though it would be clearer to students if we were more careful to distinguish such ancestors from living forms, perhaps by consistently referring to them as “fossil apes.”
Fig. 1: Relationship of monkeys, an African ape, and humans. (a) Common misconception of monkeys being ancestral to apes and apes to humans. (b) The accurate relationship of these three groups.
So, many students are wrong about the linear sequence monkeys—apes—humans. The historically accurate relationship is that apes are more closely related to humans than they are to monkeys, as shown in Fig. 1b. Humans and African apes share a more recent common ancestor than the two of them share with monkeys. The accurate relationship between an individual, a sibling, and a cousin is diagrammed in Fig. 2b. It is identical in form to the accurate relationship of a monkey, an ape, and a human. Would students visualize their relationship to their relatives as being that in Fig. 2a? Surely not! Yet this same error is regularly made about the relationship of monkeys, apes, and humans. Reminding students that living species are cousins rather than ancestors will help counter the misconception of evolution as linear rather than branching.
Fig. 2: Relationship within a family. (a) The relationship of relatives if the same reasoning were followed as in Fig. 1a. (b) The relationship among an individual, a sibling, and a cousin is accurately depicted.
In fact, this “cousin” model of relationships is more than just a metaphor. Individuals are relatives, in a genetic sense, if they share genes derived from common ancestors. Related species also share genes, derived from their common ancestors. In each case, the percentage of genes shared reflects recency of common ancestry as well as the distance between any pair of relatives. We humans share more genes with modern apes than we do with monkeys; an individual will share more genes with a brother than with a cousin. For a detailed elaboration of the scientific approach to understanding the relationships of species to each other, see the Tree of Life Web Project (http://tolweb.org/tree/). For a more popular exposition of the meaning of cousins and family trees in an evolutionary context, see the Evolutionary Genealogy website (http://www.evogeneao.com/evo-gene.html).
So how should teachers and professors respond when confronted with the question, “if humans evolved from monkeys, why are there still monkeys?” Sometimes this and other questions about evolution are encountered in a fleeting context where there is not quite enough time to explain the full scope of evolutionary biology (Scott 2006)! The briefest possible response would be to emphasize that evolution deals with common ancestors. It is not that humans descended from apes and that apes descended from monkeys; rather, humans and apes share a common ancestor, and it is more recent than the common ancestor they both share with monkeys.
If you are in a classroom situation where you have a bit more time, use the analogy of a human family tree, as in Fig. 2a and b. It is no more correct that humans descended from apes and that apes descended from monkeys than that you descended from your siblings who in turn descended from your cousins. No one would ask, “If you evolved from your cousin, why is your cousin still here?” The question “if humans evolved from monkeys, why are there still monkeys?” is equally absurd to an evolutionary biologist. (We note with interest that the Young Earth Creationist organization Answers in Genesis (AiG) has very recently, September 21, 2010, posted among their “Arguments Christians Shouldn’t Use” an article entitled “If Humans Evolved from Apes, Why Do Apes Exist Today?” (http://www.answersingenesis.org/articles/2010/09/21/humans-evolved-from-apes) We are pleased that AiG recognizes that this question “… shows a misunderstanding of what evolutionists actually believe about human evolution. The evolutionary concept of the origin of humans is not based on humans descending from modern apes but, rather, argues that humans and modern apes share a common ancestor.” (Emphasis in original) Of course, AiG still completely rejects this scientific conclusion, but at least they understand it.)
Where it is possible to use diagrams or other illustrations, you can reinforce the point by noting that genetic information supports the genealogical relationships of the primates as more or less distant cousins: apes and humans are genetically closer to one another than they are to monkeys, just as an individual shares more genes with a sibling than with a cousin.
As with all misconceptions, this one will not be laid to rest without making students grapple with the conflict between their misconceptions and the scientific data. And of course, misconceptions remain resistant to change without the repeated reinforcement of accurate science. In this context, it is critical that teachers present evolution not as a linear sequence but as a branching and splitting pattern of lineages, with the end products being cousins.
Notes
¹ The most common question is, “Why is creationism such a problem in the United States and not elsewhere?”
|
If you are in a classroom situation where you have a bit more time, use the analogy of a human family tree, as in Fig. 2a and b. It is no more correct that humans descended from apes and that apes descended from monkeys than that you descended from your siblings who in turn descended from your cousins. No one would ask, “If you evolved from your cousin, why is your cousin still here?” The question “if humans evolved from monkeys, why are there still monkeys?” is equally absurd to an evolutionary biologist. (We note with interest that the Young Earth Creationist organization Answers in Genesis (AiG) has very recently, September 21, 2010, posted among their “Arguments Christians Shouldn’t Use” an article entitled “If Humans Evolved from Apes, Why Do Apes Exist Today?” (http://www.answersingenesis.org/articles/2010/09/21/humans-evolved-from-apes) We are pleased that AiG recognizes that this question “… shows a misunderstanding of what evolutionists actually believe about human evolution. The evolutionary concept of the origin of humans is not based on humans descending from modern apes but, rather, argues that humans and modern apes share a common ancestor.” (Emphasis in original) Of course, AiG still completely rejects this scientific conclusion, but at least they understand it.)
Where it is possible to use diagrams or other illustrations, you can reinforce the point by noting that genetic information supports the genealogical relationships of the primates as more or less distant cousins: apes and humans are genetically closer to one another than they are to monkeys, just as an individual shares more genes with a sibling than with a cousin.
As with all misconceptions, this one will not be laid to rest without making students grapple with the conflict between their misconceptions and the scientific data.
|
no
|
Evolution
|
Did humans evolve from apes?
|
no_statement
|
"humans" did not "evolve" from "apes". "humans" and "apes" did not share a common ancestor
|
https://startalkmedia.com/this-sunday-find-out-why-humans-didnt-evolve-from-monkeys/
|
This Sunday, Find Out Why Humans Didn't Evolve From Monkeys ...
|
Share
October 18, 2014 1:27 pm
Image courtesy of the Anne and Bernard Spitzer Hall of Human Origins at the American Museum of Natural History.
Okay. Let’s get this out of the way right up front: we did not evolve from monkeys. We had a common ancestor with monkeys, but we split off from them about 30 million years ago.
We didn’t evolve from apes, either. We split off from our common ancestor with bonobos and chimpanzees about 7 million years ago.
We took a different evolutionary path, and it has made all the difference.
After all, you don’t see a bunch of bonobos sitting around wondering whether it’s morally wrong to create designer babies with desirable characteristics, or a troop of monkeys debating new legislation protecting primate civil rights.
We don’t see chimpanzees using slings or spears, let alone sending spacecraft to the furthest reaches of our solar system and robotic rovers to explore Mars.
In fact, according to Dr. Ian Tattersall, Paleoanthropologist and Curator Emeritus at the American Museum of Natural History, even with a long history of biological evolution, all the action in human development is on the cultural and technological level now.
You may remember Ian as a guest on our recent episode, Planet of the Apes. He’s back this week for our latest episode, Cosmic Queries: Primate Evolution.
This time, Ian is here to help host Neil deGrasse Tyson answer questions our fans have previously submitted that co-host Eugene Mirman has plucked out of our social media.
Some of those questions are thought provoking indeed.
For instance, are we interfering with evolution by using technology and medicine to keep alive the weak that nature, left alone, would have otherwise culled?
Or why can’t we just flip a switch in our DNA and activate characteristics buried in our genome, like wings?
Then there are the questions ripped from science fiction, like whether dinosaurs could have evolved higher intelligence? Is it possible for a drug to alter DNA like it did in Rise of the Planet of the Apes? Could we use selective breeding to breed smarter chimps? And what species did Eugene Mirman evolve from?
You’ll learn the answer to these and other questions this Sunday, October 19 at 7:00 PM ET on our website, iTunes, Stitcher, SoundCloud and TuneIn.
|
Share
October 18, 2014 1:27 pm
Image courtesy of the Anne and Bernard Spitzer Hall of Human Origins at the American Museum of Natural History.
Okay. Let’s get this out of the way right up front: we did not evolve from monkeys. We had a common ancestor with monkeys, but we split off from them about 30 million years ago.
We didn’t evolve from apes, either. We split off from our common ancestor with bonobos and chimpanzees about 7 million years ago.
We took a different evolutionary path, and it has made all the difference.
After all, you don’t see a bunch of bonobos sitting around wondering whether it’s morally wrong to create designer babies with desirable characteristics, or a troop of monkeys debating new legislation protecting primate civil rights.
We don’t see chimpanzees using slings or spears, let alone sending spacecraft to the furthest reaches of our solar system and robotic rovers to explore Mars.
In fact, according to Dr. Ian Tattersall, Paleoanthropologist and Curator Emeritus at the American Museum of Natural History, even with a long history of biological evolution, all the action in human development is on the cultural and technological level now.
You may remember Ian as a guest on our recent episode, Planet of the Apes. He’s back this week for our latest episode, Cosmic Queries: Primate Evolution.
This time, Ian is here to help host Neil deGrasse Tyson answer questions our fans have previously submitted that co-host Eugene Mirman has plucked out of our social media.
Some of those questions are thought provoking indeed.
For instance, are we interfering with evolution by using technology and medicine to keep alive the weak that nature, left alone, would have otherwise culled?
Or why can’t we just flip a switch in our DNA and activate characteristics buried in our genome, like wings?
Then there are the questions ripped from science fiction, like whether dinosaurs could have evolved higher intelligence? Is it possible for a drug to alter DNA like it did in Rise of the Planet of the Apes?
|
no
|
Evolution
|
Did humans evolve from apes?
|
no_statement
|
"humans" did not "evolve" from "apes". "humans" and "apes" did not share a common ancestor
|
https://ncse.ngo/misconception-monday-lets-stop-monkeying-about-shall-we
|
Misconception Monday: Let's Stop Monkeying About, Shall We ...
|
Misconception Monday: Let's Stop Monkeying About, Shall We?
I have a three-year-old daughter who is obsessed with Curious George. I think I can recite every word to every one of the 102 episodes, which means that I know roughly 102 scenarios in which the Man with the Yellow Hat tells George, “Be a good little monkey,” which in turn means my daughter is familiar with the 102 scenes in her favorite show that make her mother yell “APE!” Yes, Curious George is not a monkey; he is a chimpanzee, which makes him an ape, as the Man with the Yellow Hat should well know as a scientific illustrator of some kind and with a scientist (who, alas, wears a lab coat) for a best friend. But I am digressing a little bit because ape vs. monkey is not the subject of this post. The subject is:
Misconception: Humans evolved from monkeys.
Correction: Modern humans and modern apes share a recent common ancestor.
In biology, we name organisms according to their evolutionary history—who your most recent ancestors were determines what you are. Apes like Curious George (along with gorillas, Neanderthals, modern humans, and many other lineages) are called apes because they descended from a common ape ancestor, not a monkey ancestor. So if anything, humans most recently evolved from apes, not monkeys, but we didn’t really do that either, at least not in the way many people understand it. Often, when people say or hear “humans evolved from apes,” they picture apes as we know them (that is to say, modern apes) turning into humans. People in the anti-evolution camp mistakenly try to use this as an argument against evolution: You silly scientists, they say, patting us on the head, how can we have evolved from chimpanzees if there are still chimpanzees?
Right. How indeed? The reason is, of course, that we did not evolve from modern chimpanzees. Rather, humans and chimpanzees both evolved from a common, now extinct, ancestor. So, instead of “humans evolved from apes” one really should say “modern humans and modern apes evolved from a common ancestor.”
Fossil and genetic evidence suggest this ancestor, let’s call it “CHAP” (Chimp-Human-Ancestral-Populations), lived about 6 million years ago. As its name (if not its acronym) suggests, CHAP was not an individual, but rather a group of populations. One of these populations evolved—over many thousands upon thousands of generations—to become modern-day chimps (Pan troglodytes, the common chimp, and Pan paniscus, the bonobo). Another one of the populations evolved—again, over many thousands upon thousands of generations—to become a variety of hominins, including us (Homo sapiens).
I’m painting a very simple picture here. I have so far described only the initial split. Evolution-by-splitting, in which A becomes B and C, is called cladogenesis, and it occurred many times along both the modern human and chimp lineages after that initial split. Linear evolution, or anagenesis, in which A becomes A’, also happened along both evolutionary paths. With anagenesis, there is observable directional change within a group. When people are confused that there are still chimps today, they are usually under the impression that all evolution happens in this linear way, but actually, the consensus among evolutionary scientists is that cladogenesis has been, by far, the more important pattern in the history of life.
So what should you say if a child—or anyone!—confronts you with the idea that evolution can’t be true because they saw chimps just the other day at the zoo? I suggest you explain (with as little judgment as possible) that we did not evolve from modern chimps; rather, both modern chimps and modern humans evolved from a common ancestor. You can always bring in a very common analogy to help get this important idea across: the family tree. Siblings share a common ancestor—usually called “mom” and “dad”. (This is a good time to reiterate that despite the name, “common ancestor” does not mean one, single, individual ancestor, but a group of individuals that lived at the same time and contributed genetic material to their descendants). Chimps and humans are a lot like siblings. In fact, they are so much like siblings, that we call chimps our “sister taxa.” (A 2010 paper by NCSE alums Genie Scott and Eric Meikle for Evolution: Education and Outreach explains this analogy thoroughly and has some nice diagrams that illustrate the parallel relationships among family members to those of modern humans and apes.)
Another paper, this one in The American Biology Teacher (Johnson and others, 2012), provides a few common questions and answers about human evolution. For example, they delve into exactly what fossil and genetic evidence tells us about CHAP, explaining that CHAP probably appeared to have more in common with modern chimps than modern humans, especially in terms of cranial features such as brain size, prominence of canine teeth, and a jaw that projected out from the face. This does not mean, however, that modern humans are somehow “more evolved” than modern chimps! Rather, data suggest that both lineages have undergone evolution at roughly equivalent amounts of evolutionary change since the time of the split, though certain types of evolutionary change have occurred more frequently in the human lineage, especially at the level of gene regulation and biological changes that involve the brain. (See the linked American Biology Teacher paper above for more detail on this.)
I want to conclude with a plea for compassion when confronted with questions along the lines of “If we evolved from chimps, why are there still chimps?” This stuff can be hard to wrap our heads around. And certainly in the case of children, the ignorance is not usually willful. I can’t tell you how often I’ve listened to scientists bungle this topic, saying we evolved from chimps instead of the more appropriate “chimp-like ancestors.” And even if said correctly, it’s completely understandable that “chimp-like ancestors” gets translated to “chimps” in someone’s head. So have patience. Muster some good resources and lay it out with compassion and empathy. Hopefully, your students will be more teachable than the Man with the Yellow Hat is—no matter how many times I try to explain it to him, he still tells George to be a “good little monkey.”
Short Bio
Stephanie Keep is the former Editor of Reports of the National Center for Science Education
|
Apes like Curious George (along with gorillas, Neanderthals, modern humans, and many other lineages) are called apes because they descended from a common ape ancestor, not a monkey ancestor. So if anything, humans most recently evolved from apes, not monkeys, but we didn’t really do that either, at least not in the way many people understand it. Often, when people say or hear “humans evolved from apes,” they picture apes as we know them (that is to say, modern apes) turning into humans. People in the anti-evolution camp mistakenly try to use this as an argument against evolution: You silly scientists, they say, patting us on the head, how can we have evolved from chimpanzees if there are still chimpanzees?
Right. How indeed? The reason is, of course, that we did not evolve from modern chimpanzees. Rather, humans and chimpanzees both evolved from a common, now extinct, ancestor. So, instead of “humans evolved from apes” one really should say “modern humans and modern apes evolved from a common ancestor.”
Fossil and genetic evidence suggest this ancestor, let’s call it “CHAP” (Chimp-Human-Ancestral-Populations), lived about 6 million years ago. As its name (if not its acronym) suggests, CHAP was not an individual, but rather a group of populations. One of these populations evolved—over many thousands upon thousands of generations—to become modern-day chimps (Pan troglodytes, the common chimp, and Pan paniscus, the bonobo). Another one of the populations evolved—again, over many thousands upon thousands of generations—to become a variety of hominins, including us (Homo sapiens).
I’m painting a very simple picture here. I have so far described only the initial split. Evolution-by-splitting, in which A becomes B and C, is called cladogenesis, and it occurred many times along both the modern human and chimp lineages after that initial split. Linear evolution, or anagenesis,
|
no
|
Evolution
|
Did humans evolve from apes?
|
no_statement
|
"humans" did not "evolve" from "apes". "humans" and "apes" did not share a common ancestor
|
https://news.janegoodall.org/2018/06/27/chimps-humans-monkeys-whats-difference/
|
Chimps, Humans and Monkeys: What's the Difference?
|
Chimps, Humans, and Monkeys: What’s the Difference?
It’s finally time to set the record straight: As much as we all love monkeys, Dr. Goodall’s studies and the work of the Jane Goodall Institute have primarily focused on chimpanzees, not monkeys. Now, I know your next question is probably, “But aren’t chimps the same thing as monkeys?” and the answer is, they are not! So what’s the difference and why does it matter?
Monkeys, chimpanzees, and humans are primates. Primates are mammals that are characterized by their advanced cognitive development and abilities, grasping hands and feet, and forward-facing eyes, along with other characteristics. Some primates (including some great apes and baboons) are typically terrestrial (move on the ground) versus arboreal (living in the trees), but all species of primates have adaptations to climb trees (EOL). Millions of years ago, primate ancestors evolved different defining characteristics from one another, branching into many species within different groups.
This can get confusing because of the numerous categories of primates: great apes, lesser apes, and Old/New World monkeys, are seemingly similar. All of the groups have similar characteristics, but there are characteristics that separate us. Great apes (humans, chimps, bonobos, gorillas and orangutans) generally have larger brains, larger bodies, and no tail. Dr. Goodall often likes to use Mr. H (a monkey plush toy who travels with her everywhere she goes) in her lectures to demonstrate this difference by asking the crowd, “How can we tell that Mr. H is not a chimpanzee?” She will then dangle Mr. H by his tail and say, “Chimps have no tail!”
Ok, so we understand how to identify great apes, but what about monkeys? There are many different species of monkeys, and what are known as ‘lesser apes’. Lesser apes (gibbons and siamangs) are usually smaller in stature, with thin arms, and a slightly smaller brain. Finally, monkeys are divided into “New World” and “Old World” monkeys. Many Old and New World Monkeys have tails, tend to walk on all fours like a cat or dog, and have the smallest brain out of the groups. Some Old World monkeys include baboons and guenons, while some New World monkeys include Capuchin and spider monkeys!
Now let’s get back to chimpanzees and humans. Humans did not evolve from chimps, as is a frequent misconception. Chimpanzees and humans share a recent common ancestor, and as some of this ancestral population evolved along one line to become modern chimpanzees, others of this ancestor evolved along a line of various species of early human, eventually resulting in Homo sapiens (you and me!). Chimpanzees are genetically closest to humans, and in fact, chimpanzees share about 98.6% of our DNA. We share more of our DNA with chimpanzees than with monkeys or other groups, or even with other great apes! We also both play, have complex emotions and intelligence, and a very similar physical makeup.
What many people may not know is how vital this taxonomy (or the systematic classification of organisms) is to Dr. Goodall and JGI’s story! It was in fact Dr. Goodall’s run-in with the famed paleoanthropologist Dr. Louis Leakey in Kenya which led to her initial research in Gombe, Tanzania. Dr. Leakey was trying to understand early humans, and because his only point of reference was fossilized early human remains and other preserved cultural materials, he could not completely understand what early human behavior may have been like. When he met Jane, with her passion for and knowledge about animals, he knew she would be the perfect candidate to study chimpanzees – our closest living primate relative – from which he could conclude what behaviors were likely inherent to our most recent common ancestor and earlier humans. Dr. Leakey asked Jane to study the chimpanzees, Dian Fossey to study mountain gorillas, and Birute Galdikas to study orangutans, and they became known as ‘The Trimates.’
Our relationship to other primates is a dynamic one – and as Jane as often said, “Chimpanzees, more than any other living creature, have helped us to understand that there is no sharp line between humans and the rest of the animal kingdom. It’s a very blurry line, and it’s getting more blurry all the time.”
Want to know more and to support our ongoing research in Gombe, now the longest running wild chimpanzee study in the world? Become a Gombe Science Hero! Find out more and get involved here.
The Jane Goodall Institute is a global community conservation organization that advances the vision and work of Dr. Jane Goodall. By protecting chimpanzees and inspiring people to conserve the natural world we all share, we improve the lives of people, animals and the environment. Everything is connected—everyone can make a difference.
About Author
Maxine is currently an intern in the Community Engagement department at the Jane Goodall Institute. She is a senior at American University working on a Business Administration and Public Relations double major. She has been passionate about the environment and conservation since her parents raised her spending summers camping in the U.S National Parks. She hopes to someday work around the world on women's issues and environmental conservation. Upon her graduation in May 2018 she would like to become the proud owner of a dog.
|
Many Old and New World Monkeys have tails, tend to walk on all fours like a cat or dog, and have the smallest brain out of the groups. Some Old World monkeys include baboons and guenons, while some New World monkeys include Capuchin and spider monkeys!
Now let’s get back to chimpanzees and humans. Humans did not evolve from chimps, as is a frequent misconception. Chimpanzees and humans share a recent common ancestor, and as some of this ancestral population evolved along one line to become modern chimpanzees, others of this ancestor evolved along a line of various species of early human, eventually resulting in Homo sapiens (you and me!). Chimpanzees are genetically closest to humans, and in fact, chimpanzees share about 98.6% of our DNA. We share more of our DNA with chimpanzees than with monkeys or other groups, or even with other great apes! We also both play, have complex emotions and intelligence, and a very similar physical makeup.
What many people may not know is how vital this taxonomy (or the systematic classification of organisms) is to Dr. Goodall and JGI’s story! It was in fact Dr. Goodall’s run-in with the famed paleoanthropologist Dr. Louis Leakey in Kenya which led to her initial research in Gombe, Tanzania. Dr. Leakey was trying to understand early humans, and because his only point of reference was fossilized early human remains and other preserved cultural materials, he could not completely understand what early human behavior may have been like. When he met Jane, with her passion for and knowledge about animals, he knew she would be the perfect candidate to study chimpanzees – our closest living primate relative – from which he could conclude what behaviors were likely inherent to our most recent common ancestor and earlier humans.
|
no
|
Ornithology
|
Did penguins originate in the Antarctic?
|
yes_statement
|
"penguins" "originated" in the antarctic.. the "origin" of "penguins" is in the antarctic.
|
https://www.cnn.com/2020/08/18/australia/penguins-origin-australia-nz-intl-hnk-scli-scn/index.html
|
Penguins originated in Australia and New Zealand -- not the ...
|
Penguins originated in Australia and New Zealand – not the Antarctic, new study finds
Penguins play before mating on King George island in Antarctica, in March 2014.
VANDERLEI ALMEIDA/AFP/Getty Images/file
CNN
—
When you think of the penguin, the image that pops to mind is usually the fuzzy bird waddling through snow or swimming in frigid Antarctic waters.
But penguins didn’t originate in Antarctica, as scientists have believed for years – they first evolved in Australia and New Zealand, according to a new study by researchers at the University of California, Berkeley.
The study, which was conducted in collaboration with museums and universities around the world, analyzed blood and tissue samples from 18 different species of penguins. They used this genomic information to look back in time, and trace the penguins’ movement and diversification over millennia.
“Our results indicate that the penguin crown-group originated during the Miocene (geological period) in New Zealand and Australia, not in Antarctica as previously thought,” said the study, published on Monday in the Proceedings of the National Academy of Sciences. “Penguins first occupied temperate environments and then radiated to cold Antarctic waters.”
Penguins originated in Australia and New Zealand 22 million years ago, researchers suggest; then, ancestors of the king and emperor penguins split off and moved to Antarctic waters, likely attracted by the abundant food supply there.
A penguin dives from an ice block in Antarctica in March 2014.
VANDERLEI ALMEIDA/AFP/Getty Images/file
These findings also support the theory that king and emperor penguins are the “sister group” to all other penguin lineages – adding another piece to the long-debated puzzle on where exactly these two species sit on the family tree.
Then about 12 million years ago, the Drake Passage – the body of water between Antarctica and the southern tip of South America – fully opened up. This allowed the penguins to swim throughout the Southern Ocean, and spread more widely to sub-Antarctic Islands as well as the warmer coastal regions of South America and Africa.
Today, the flightless birds are still found in Australia and New Zealand – as well as Antarctica, South America, the South Atlantic, southern Africa, the sub-Antarctic, Indian Ocean islands, and subtropical regions.
During the study, researchers also discovered a new lineage of penguin that has yet to be given a scientific description.
Penguins are adaptable – but not enough for climate change
The study shed light on the penguins’ adaptability to changing climates – and on the danger they now face in the modern climate crisis.
“We are able to show how penguins have been able to diversify to occupy the incredibly different thermal environments they live in today, going from 9 degrees Celsius (48 Fahrenheit) in the waters around Australia and New Zealand, down to negative temperatures in Antarctica and up to 26 degrees (79 Fahrenheit) in the Galapagos Islands,” said Rauri Bowie, one of the lead researchers and a professor of integrative biology at UC Berkeley, in a statement from the university.
“But we want to make the point that it has taken millions of years for penguins to be able to occupy such diverse habitats, and at the rate that oceans are warming, penguins are not going to be able to adapt fast enough to keep up with changing climate.”
The team was able to pinpoint genetic adaptations that allowed penguins to thrive in challenging environments; for example, their genes evolved to better regulate body temperature, which allowed them to live in both subzero Antarctic temperatures and warmer tropical climes.
But these steps of evolution took millions of years – time that the penguins don’t have now, as their populations dwindle.
“Right now, changes in the climate and environment are going too fast for some species to respond to the climate change,” said Juliana Vianna, associate professor at the Pontifical Catholic University of Chile, in the UC Berkeley statement.
The different elements of climate change culminate in a perfect storm. Disappearing sea ice mean fewer breeding and resting grounds for emperor penguins. The reduced ice and warming oceans also mean less krill, the main component of the penguins’ diet.
The world’s second-largest emperor penguin colony has almost disappeared; thousands of emperor penguin chicks in Antarctica drowned when sea ice was destroyed by storms in 2016. Recurring storms in 2017 and 2018 led to the death of almost all the chicks at the site each season.
Some penguin colonies in the Antarctic have declined by more than 75% over the past 50 years, largely as a result of climate change.
In the Galapagos, penguin populations are declining as warm El Nino events – a weather phenomenon that sees warming of the eastern Pacific Ocean – happen more frequently and with greater severity. In Africa, warming waters off the southern coast have also caused penguin populations to drop drastically.
|
Penguins originated in Australia and New Zealand – not the Antarctic, new study finds
Penguins play before mating on King George island in Antarctica, in March 2014.
VANDERLEI ALMEIDA/AFP/Getty Images/file
CNN
—
When you think of the penguin, the image that pops to mind is usually the fuzzy bird waddling through snow or swimming in frigid Antarctic waters.
But penguins didn’t originate in Antarctica, as scientists have believed for years – they first evolved in Australia and New Zealand, according to a new study by researchers at the University of California, Berkeley.
The study, which was conducted in collaboration with museums and universities around the world, analyzed blood and tissue samples from 18 different species of penguins. They used this genomic information to look back in time, and trace the penguins’ movement and diversification over millennia.
“Our results indicate that the penguin crown-group originated during the Miocene (geological period) in New Zealand and Australia, not in Antarctica as previously thought,” said the study, published on Monday in the Proceedings of the National Academy of Sciences. “Penguins first occupied temperate environments and then radiated to cold Antarctic waters.”
Penguins originated in Australia and New Zealand 22 million years ago, researchers suggest; then, ancestors of the king and emperor penguins split off and moved to Antarctic waters, likely attracted by the abundant food supply there.
A penguin dives from an ice block in Antarctica in March 2014.
VANDERLEI ALMEIDA/AFP/Getty Images/file
These findings also support the theory that king and emperor penguins are the “sister group” to all other penguin lineages – adding another piece to the long-debated puzzle on where exactly these two species sit on the family tree.
Then about 12 million years ago, the Drake Passage – the body of water between Antarctica and the southern tip of South America – fully opened up.
|
no
|
Ornithology
|
Did penguins originate in the Antarctic?
|
yes_statement
|
"penguins" "originated" in the antarctic.. the "origin" of "penguins" is in the antarctic.
|
https://indianapublicmedia.org/amomentofscience/where-did-penguins-come-from.php
|
Where Did Penguins Come From? | A Moment of Science - Indiana ...
|
Posted February 3, 2021
Transcript
D: You mean my stuffed animal, Yaël? I won it from a claw machine. But, you know, scientists have been mulling over the same question—about real penguins, though. And after sequencing the genome of the 18 species of penguins that exist today, they have a pretty good idea how these evolved. The genetic evidence reveals that today’s penguin species originated in the coastal regions of Australia and New Zealand and nearby islands in the South Pacific about 22 million years ago—not in Antarctica, as many people once thought. They’ve been on an interesting evolutionary journey ever since, diversifying and spreading to all kinds of climates.
Y: It’s not hard to imagine how they made their way to Antarctica, but there are penguin species all the way in South America and Africa, right? How did they get all the way up there?
D: It all started about 12 million years ago, when Drake’s Passage between Antarctica and the southern tip of South America opened up fully and the Antarctic Circumpolar Current intensified, which caused the glaciation of Antarctica and drove penguins that hadn’t adapted to live in icy regions northward. Some ended up all the way in South America and Africa, and genetic adaptations let them thrive in these new places. These adaptations allowed them to refine how they regulate their body temperature, which made it possible for different species of penguins to live in climates as varied as Antarctica and regions close to the equator.
Y: Or even claw machines in the United States.
D: I’m not sure if the stuffed animal species was one of the 18 in this analysis.
(Wikimedia Commons)
Scientists have been mulling over the origin of the penguin. And after sequencing the genome of the 18 species of penguins that exist today, they have a pretty good idea how these evolved. The genetic evidence reveals that today’s penguin species originated in the coastal regions of Australia and New Zealand and nearby islands in the South Pacific about 22 million years ago—not in Antarctica, as many people once thought. They’ve been on an interesting evolutionary journey ever since, diversifying and spreading to all kinds of climates.
It's not hard to imagine how they made their way to Antarctica, but there are penguin species all the way in South America and Africa, too. So how did they get all the way up there?
It all started about 12 million years ago, when Drake’s Passage between Antarctica and the southern tip of South America opened up fully and the Antarctic Circumpolar Current intensified, which caused the glaciation of Antarctica and drove penguins that hadn’t adapted to live in icy regions northward. Some ended up all the way in South America and Africa, and genetic adaptations let them thrive in these new places. These adaptations allowed them to refine how they regulate their body temperature, which made it possible for different species of penguins to live in climates as varied as Antarctica and regions close to the equator.
|
Posted February 3, 2021
Transcript
D: You mean my stuffed animal, Yaël? I won it from a claw machine. But, you know, scientists have been mulling over the same question—about real penguins, though. And after sequencing the genome of the 18 species of penguins that exist today, they have a pretty good idea how these evolved. The genetic evidence reveals that today’s penguin species originated in the coastal regions of Australia and New Zealand and nearby islands in the South Pacific about 22 million years ago—not in Antarctica, as many people once thought. They’ve been on an interesting evolutionary journey ever since, diversifying and spreading to all kinds of climates.
Y: It’s not hard to imagine how they made their way to Antarctica, but there are penguin species all the way in South America and Africa, right? How did they get all the way up there?
D: It all started about 12 million years ago, when Drake’s Passage between Antarctica and the southern tip of South America opened up fully and the Antarctic Circumpolar Current intensified, which caused the glaciation of Antarctica and drove penguins that hadn’t adapted to live in icy regions northward. Some ended up all the way in South America and Africa, and genetic adaptations let them thrive in these new places. These adaptations allowed them to refine how they regulate their body temperature, which made it possible for different species of penguins to live in climates as varied as Antarctica and regions close to the equator.
Y: Or even claw machines in the United States.
D: I’m not sure if the stuffed animal species was one of the 18 in this analysis.
(Wikimedia Commons)
Scientists have been mulling over the origin of the penguin. And after sequencing the genome of the 18 species of penguins that exist today, they have a pretty good idea how these evolved.
|
no
|
Ornithology
|
Did penguins originate in the Antarctic?
|
yes_statement
|
"penguins" "originated" in the antarctic.. the "origin" of "penguins" is in the antarctic.
|
https://www.jpost.com/health-science/penguins-originate-from-australia-new-zealand-new-study-finds-639150
|
Penguins originate from Australia, New Zealand, new study finds ...
|
Penguins most likely originated in Australia and New Zealand 22 million years ago, before the ancestors of the emperor and king penguins relocated to Antarctica.
A gentoo penguin dives into the water in its enclosure at the Sea Life aquarium in central London
(photo credit: REUTERS)
A new study from the University of California, Berkeley has found that contrary to widespread beliefs, penguins evolved in Australia and New Zealand rather than Antarctica.
As part of a collaborative effort between museums and universities from across the globe, the researchers studied and analyzed 18 different penguin species. Using the genomic information obtained from blood and tissue samples, the scientists were able to trace the movement and diversification of the penguins throughout a period of over a thousand years.
And the findings of this study may rewrite everything scientists know about the origin of these flightless birds.
"Our results indicate that the penguin crown-group originated during the Miocene (geological period) in New Zealand and Australia, not in Antarctica as previously thought," according to the study, published on Monday in the Proceedings of the National Academy of Sciences, CNN reported. "Penguins first occupied temperate environments and then [relocated] to cold Antarctic waters."
The researchers posit that penguins likely first originated 22 million years ago, living in what is now Australia and New Zealand. However, some of them, ancestors of the emperor penguins and king penguins, eventually relocated to Antarctica, most likely due to an abundant food supply.
Around 10 million years later, the penguins were able to spread throughout the region. This is due to the body of water between Antarctica and South America – now known as the Drake Passage – fully opening up. As a result, penguins soon began to inhabit Antarctica, parts of Africa, parts of South America, some islands in the Indian Ocean and other subtropical regions. Indeed, some can even still be found in Australia and New Zealand, specifically the yellow-eyed, little and other crested penguins.
Not only does this research rewrite the understood origin of penguins, it also sheds new light on the ability of these flightless birds to adapt to new climates.
The study was able to pinpoint specific genetic adaptations the penguins used to thrive in new environments. This includes changes in genes used to regulate body heat, allowing them to survive in both subzero and tropical temperatures, an ability to dive deeper in the sea and osmoregulation – the process enabling them to survive on salt water without fresh water.
In addition, the study also seems to resolve a longstanding debate regarding the emperor and king penguins, which has long been hypothesized to be a sister group to all other penguins, due to being the only two species in the Aptenodytes genus.
“It was very satisfying to be able to resolve the phylogeny, which has been debated for a long time,” Rauri Bowie, professor of integrative biology at the University of California, Berkeley, and curator in the Museum of Vertebrate Zoology (MVZ) at Berkeley, said in a statement.
“The debate hinged on where, exactly, the emperor and king penguins were placed in the family tree, whether they are nested inside the tree closer to other lineages of penguins or whether they are sisters to all the other penguins, which is what our phylogeny showed and some other previous studies had suggested. And it fits with the rich fossil history of penguins.”
However, the study has also shed light on the challenges penguins face today due to climate change.
“We are able to show how penguins have been able to diversify to occupy the incredibly different thermal environments they live in today, going from 9 degrees Celsius (48 F) in the waters around Australia and New Zealand, down to negative temperatures in Antarctica and up to 26 degrees (79 F) in the Galápagos Islands,” Bowie explained. “But we want to make the point that it has taken millions of years for penguins to be able to occupy such diverse habitats, and at the rate that oceans are warming, penguins are not going to be able to adapt fast enough to keep up with changing climate.”
“We saw, over millions of years, that the diversification of penguins decreased with increasing temperature, but that was over a longtime scale,” explained Juliana Vianna, associate professor of ecosystems and environment at the Pontifical Catholic University of Chile in Santiago.
“Right now, changes in the climate and environment are going too fast for some species to respond to the climate change.”
Bowie and Vianna hope to build on this research, and plan to dive into the genetic variations found throughout the disparate populations of penguins.
“Penguins are very charismatic, certainly,” Vianna concluded. “But I hope these studies also lead to better conservation.”
|
Penguins most likely originated in Australia and New Zealand 22 million years ago, before the ancestors of the emperor and king penguins relocated to Antarctica.
A gentoo penguin dives into the water in its enclosure at the Sea Life aquarium in central London
(photo credit: REUTERS)
A new study from the University of California, Berkeley has found that contrary to widespread beliefs, penguins evolved in Australia and New Zealand rather than Antarctica.
As part of a collaborative effort between museums and universities from across the globe, the researchers studied and analyzed 18 different penguin species. Using the genomic information obtained from blood and tissue samples, the scientists were able to trace the movement and diversification of the penguins throughout a period of over a thousand years.
And the findings of this study may rewrite everything scientists know about the origin of these flightless birds.
"Our results indicate that the penguin crown-group originated during the Miocene (geological period) in New Zealand and Australia, not in Antarctica as previously thought," according to the study, published on Monday in the Proceedings of the National Academy of Sciences, CNN reported. "Penguins first occupied temperate environments and then [relocated] to cold Antarctic waters. "
The researchers posit that penguins likely first originated 22 million years ago, living in what is now Australia and New Zealand. However, some of them, ancestors of the emperor penguins and king penguins, eventually relocated to Antarctica, most likely due to an abundant food supply.
Around 10 million years later, the penguins were able to spread throughout the region. This is due to the body of water between Antarctica and South America – now known as the Drake Passage – fully opening up. As a result, penguins soon began to inhabit Antarctica, parts of Africa, parts of South America, some islands in the Indian Ocean and other subtropical regions. Indeed, some can even still be found in Australia and New Zealand, specifically the yellow-eyed, little and other crested penguins.
|
no
|
Ornithology
|
Did penguins originate in the Antarctic?
|
yes_statement
|
"penguins" "originated" in the antarctic.. the "origin" of "penguins" is in the antarctic.
|
https://www.conicet.gov.ar/sixty-million-years-of-information-on-penguins/
|
Sixty million years of information about Penguins | CONICET
|
Contrary to popular belief, penguins did not originate in the Antarctic. They originated on a microcontinent called Zealandia (around present-day New Zealand) and from that starting point, some 60 million years ago, they began to disperse, to evolve, to transform. Pablo Borboroglu is a CONICET researcher at the Center for the Study of Marine Systems (CESIMAR, CONICET) and co-author of a recently published international study that analyzes, over time, the adaptations that allowed these animals to live in environments with the most extreme conditions on the planet. For this study, genetic samples of current penguins and fossil species were analyzed to learn in detail their origin and evolution. The results were published in Nature Communications.
“The geneticists identified the DNA segments that determine the evolutionary characteristics related to vision, taste of prey, in the ability to oxygenate, to remain in apnea under water, the ability to generate fat, to resist cold. This type of fossil analysis included all penguin species, not just the eighteen that currently exist. Throughout history, penguins of many shapes and sizes have inhabited the planet, including one that is one of the oldest ones of New Zealand, which they called the Monster Penguin, an animal that weighed more than 80 kilos and is estimated to have reached 1.8 meters. Seventy-five percent of the species that existed are already extinct. Three quarters of the history of penguins no longer exists. Many species collapsed because of climate change but they can still give us a lot of information,” Borboroglu explains.
Although there are previous studies that focused on the adaptive changes of penguins, this study includes a large number of genes from extinct species that will provide information on their adaptations over time. “Sampling of fossil penguins is crucial for understanding the environmental context, improving phylogenetic resolution and dating accuracy, and reconstructing biogeographical events,” says Borboroglu.
Currently, penguins spend more than eighty percent of their lives in the water. The bodily adaptations that explain this ability come from the past. Penguins had already lost their ability to fly 60 million years ago, before the formation of the polar ice caps. Since then, their life characteristics have been shaped by rising and falling temperatures, and their bodies are highly specialized for some of the most extreme conditions on Earth.
“Some of the genes that were analyzed are related to vision and how it was adapted to look accurately underwater and facilitate the capture of prey. They can observe a wide range of ultraviolet colors that we do not see, and they have, on the other hand, a more limited possibility of seeing other colors such as red, the first color that stops being seen in the ocean. Another of the modifications is linked to taste. They can detect salty and bitter tastes, but not sweet or sour ones. This is related to the diet they eat. All these characteristics tend to improve the efficiency of these animals under the sea. An emperor penguin, for example, can stay underwater for up to twenty-three minutes and dive up to five hundred meters deep”, describes the researcher.
Despite all the changes that these animals have adopted, and that have made them possibly the most uniquely specialized of all existing birds, studies indicate that their capacity for adaptation has diminished. “60 million years ago the rate of penguin evolution was very high but it slowed down. This is related to the sea surface temperature. At times when it was warmer, the rate dropped and vice versa. Larger penguins also had a higher rate because they generally live or have lived in more extreme environments. Penguins currently have the lowest rate of evolution of all birds. At the rate of environmental change taking place, this could present a conservation problem for penguins. That is why this type of study is of vital importance to know more and more precisely the adaptive capacities that these animals were acquiring and to think about them in the context of the challenges of the present.”
|
Contrary to popular belief, penguins did not originate in the Antarctic. They originated in a microcontinent called Zealandia (around present-day New Zealand) and from that starting point, some 60 million years ago, they began to disperse, to evolve, to transform. Pablo Borboroglu is a CONICET researcher at the Center for the Study of Marine Systems (CESIMAR, CONICET) and co-author of a recently published international study that analyzes, over time, the adaptations that allowed these animals to live in environments with the most extreme conditions on the planet. For this study, genetic samples of current penguins and fossil species were analyzed to learn in detail their origin and evolution. The results were published in Nature Communications.
“The geneticists identified the DNA segments that determine the evolutionary characteristics related to vision, taste of prey, in the ability to oxygenate, to remain in apnea under water, the ability to generate fat, to resist cold. This type of fossil analysis included all penguin species, not just the eighteen that currently exist. Throughout history, penguins of many shapes and sizes have inhabited the planet, including one that is one of the oldest ones of New Zealand, which they called the Monster Penguin, an animal that weighed more than 80 kilos and is estimated to have reached 1.8 meters. Seventy-five percent of the species that existed are already extinct. Three quarters of the history of penguins no longer exists. Many species collapsed because of climate change but they can still give us a lot of information,” Borboroglu explains.
Although there are previous studies that focused on the adaptive changes of penguins, this study includes a large number of genes from extinct species that will provide information on their adaptations over time. “Sampling of fossil penguins is crucial for understanding the environmental context, improving phylogenetic resolution and dating accuracy, and reconstructing biogeographical events,” says Borboroglu.
Currently, penguins spend more than eighty percent of their lives in the water. The bodily adaptations that explain this ability come from the past.
|
no
|
Ornithology
|
Did penguins originate in the Antarctic?
|
yes_statement
|
"penguins" "originated" in the antarctic.. the "origin" of "penguins" is in the antarctic.
|
https://www.nature.com/articles/s41467-022-31508-9
|
Genomic insights into the secondary aquatic transition of penguins ...
|
Abstract
Penguins lost the ability to fly more than 60 million years ago, subsequently evolving a hyper-specialized marine body plan. Within the framework of a genome-scale, fossil-inclusive phylogeny, we identify key geological events that shaped penguin diversification and genomic signatures consistent with widespread refugia/recolonization during major climate oscillations. We further identify a suite of genes potentially underpinning adaptations related to thermoregulation, oxygenation, diving, vision, diet, immunity and body size, which might have facilitated their remarkable secondary transition to an aquatic ecology. Our analyses indicate that penguins and their sister group (Procellariiformes) have the lowest evolutionary rates yet detected in birds. Together, these findings help improve our understanding of how penguins have transitioned to the marine environment, successfully colonizing some of the most extreme environments on Earth.
Introduction
Penguins are one of the most iconic groups of birds, serving as both a textbook example of the evolution of secondarily aquatic ecology and as sentinels for the impacts of global change on ecosystem health1. Although often associated with Antarctica in the popular imagination, penguins originated more than 60 million years ago (Mya), evolving wing-propelled diving and losing the capacity for aerial flight long before the formation of polar ice sheets2. Over time, penguins evolved the suite of morphological, physiological, and behavioral features that make them arguably the most uniquely specialized of all extant birds. These adaptations have allowed penguins to colonize some of the most extreme environments on Earth.
Previous phylogenetic studies have yielded insights into penguin evolution, yet have been limited by sampling issues (e.g., number of lineages incorporated and quality of molecular markers3,4,5,6,7). Genomic studies have shed light on the diversification of extant penguins7,8,9 but have not integrated extinct species. Because nearly three-quarters of known penguin species are represented only by fossils (e.g., 2,3), sampling extinct species is crucial for improving phylogenetic resolution and dating accuracy, reconstructing biogeographic events, and understanding the environmental context in which key adaptations arose. While several studies have included fossil penguins, these utilized only mitochondrial genomes and/or small numbers of nuclear genes (e.g., 3,4,5,6), limiting their ability to disentangle confounding processes, such as historical and ongoing introgression and incomplete lineage sorting.
Here, we take a comprehensive approach to inferring the tempo and drivers of penguin diversification by combining genomes from all extant and recently-extinct penguin lineages (27 taxa) (Table 1), stratigraphic data from fossil penguins (47 taxa), and morphological and biogeographic data from all species (extant and extinct) (Fig. 1 and Supplementary Fig. 1; Supplementary Data 1) into a single framework for Bayesian phylogenetic analysis. This combined approach, using the fossilized birth-death process with sampled ancestors4 (see supplementary methods) offers a more complete understanding of speciation and biogeographic events over the entire history of penguin evolution. It extends our insights beyond the ~15–20 million year (Ma) history of crown penguins to include the ~50 Ma interval during which only stem penguins existed. Within this phylogenetic framework, we highlight key genes involved in marine adaptations, compare evolutionary rates in penguins to those of other birds, and reconstruct the demographic histories of individual species. Together, these extensive datasets provide new insights into the evolution of extreme ecological preferences and the genetic basis for the adaptations that enabled penguins to occupy these niches.
Results
Climate change drove evolution, biogeography, and demography
Phylogenetic results (Fig. 1 and Supplementary Fig. 2) confirm previous findings, recovering Aptenodytes (king and emperor penguins) as the sister clade to all other crown penguins, with brush-tailed (Pygoscelis) penguins in turn sister to two clades uniting the banded (Spheniscus) and little (Eudyptula) penguins and the yellow-eyed (Megadyptes) and crested (Eudyptes) penguins6,7,9. Biogeographical reconstructions (Fig. 1, Supplementary Figs. 3–4 and Supplementary Data 1) support a Zealandian origin for penguins6,7. Stem penguins radiated extensively in Zealandia before dispersing to South America and Antarctica multiple times, following the eastward-flowing direction of the Antarctic Circumpolar Current (ACC) (Fig. 1). Crown penguins most likely arose from descendant lineages in South America, before dispersing back to Zealandia at least three times. Interestingly, at least two such dispersals occurred before the inferred onset of the ACC system, suggesting that early stem penguins were not dependent on currents to disperse over long distances. A second pulse of speciation coincides with the onset of the ACC, though understanding whether this pattern is real or an artifact of fossil sampling requires more collecting from early Eocene localities. We infer an age of ~14 Ma for the origin of crown penguins, which is more recent than the ~24 Ma age recovered in genomic analyses, not including fossil taxa7 (Supplementary Fig. 2b) and coincides with the onset of global cooling during the middle Miocene climate transition4,10 (Supplementary Fig. 3a). This young age suggests that expansion of Antarctic ice sheets and the onset of dispersal vectors such as the Benguela Current11 during the middle to late Miocene facilitated crown penguin dispersal and speciation, as hinted at by fossil evidence12.
Incongruences between species trees and gene trees were identified, e.g., alternate topologies occurred at high frequencies (>10%) for several internal branches (Fig. 1c; Supplementary Fig. 5). These patterns indicate that gene tree discordance may be caused by incomplete lineage sorting (ILS) or introgression events. By quantifying ILS and introgression via branch lengths from over 10,000 gene trees, we found that the rapid speciation within crown penguins was accompanied by >5% ILS content within the ancestors of Spheniscus, Eudyptula, Eudyptes, and several subgroups within Eudyptes (Fig. 2a). Our dated tree provides a temporal framework for this rapid radiation: the four extant Spheniscus taxa are all inferred to have split from one another within the last ~3 Ma, and likewise the nine extant Eudyptes taxa likely split from one another in that same time (Fig. 1b). Many closely related penguin species/lineages are known to hybridize in the wild (see supplementary methods). Consistent with this, multiple analyses suggest that introgression also contributes to species tree—gene tree incongruence (Supplementary Figs. 6–9 and Supplementary Data 2; also see Supplementary Methods for further details). This could explain the most notable conflict in previous phylogenetic results, which showed inconsistency over whether Aptenodytes alone7 or Aptenodytes and Pygoscelis together4,5 represent the sister clade to all other extant penguins. Introgression was detected between the ancestor of Aptenodytes and the ancestor of other extant penguins, and is inferred to have occurred when the range of these ancestors overlapped in South America (Fig. 2a and Supplementary Data 2). Introgression (>9%) was also detected between Eudyptula novaehollandiae and Eudyptula minor, and several introgression events were especially pervasive in Eudyptes (Fig. 2a and Supplementary Fig. 6).
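The introgression signals summarized above are typically quantified with site-pattern statistics computed from aligned genomes. As an illustration only (the study combined several analyses, not necessarily this exact one), the following is a minimal sketch of Patterson's D (the "ABBA-BABA" test), a standard introgression test, assuming one aligned sequence per lineage and an outgroup that carries the ancestral allele:

```python
def d_statistic(p1, p2, p3, outgroup):
    """Patterson's D (ABBA-BABA) statistic for introgression.

    At each biallelic site, the outgroup allele is taken as ancestral
    ('A') and the other allele as derived ('B'). Under incomplete
    lineage sorting alone, sites where P2 and P3 share the derived
    allele (ABBA) and sites where P1 and P3 share it (BABA) are
    expected in equal numbers; D = (ABBA - BABA) / (ABBA + BABA),
    so a significant D > 0 suggests gene flow between P2 and P3.
    """
    abba = baba = 0
    for a, b, c, o in zip(p1, p2, p3, outgroup):
        alleles = {a, b, c, o}
        if len(alleles) != 2:
            continue  # only biallelic sites are informative
        derived = (alleles - {o}).pop()  # the allele not in the outgroup
        if a == o and b == derived and c == derived:
            abba += 1
        elif b == o and a == derived and c == derived:
            baba += 1
    if abba + baba == 0:
        return 0.0
    return (abba - baba) / (abba + baba)
```

In practice D is computed genome-wide with block-jackknife standard errors; this sketch shows only the core counting logic.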
Many extant penguin lineages began to diverge within the last 3 Ma (Fig. 1). To obtain insight into this recent phase of penguin diversification, we inferred post-speciation introgression events and estimated the time when gene flow from introgression ceased between 20 pairs of closely related lineages (see Supplementary Methods). Our results provide further evidence for recent introgression between all sampled pairings (Fig. 2b) except for Eudyptes chrysocome and E. filholi, whose ranges are geographically disparate (Fig. 1a). Almost all species exhibit a genomic signature of a period of physical isolation during the Last Glacial Period (LGP) with increased climate fluctuation and environmental uncertainty, followed by postglacial contact and introgression as Earth warmed once again (Supplementary Figs. 8–9). This strongly supports the hypothesis that penguins were impacted by ecosystem-wide, climate-driven refugia/recolonization cycles in the Southern Ocean13,14, a pattern also observed in other marine taxa during the Last Glacial Maximum (e.g.,15). As ice volumes increased during the LGP high-latitude penguin species were likely forced into isolated mid-latitude refugia. As climate warmed from the late Pleistocene to Holocene, these species moved back towards the poles, recolonizing landmasses and islands as they became habitable once again, and, notably, experiencing secondary contact with one another (e.g., on small sub-Antarctic islands).
Today, penguins are under threat from climate change and environmental disruption (see Supplementary Methods for further citations) and half of all extant species are considered either Endangered or Vulnerable (IUCN red list categories). Understanding how past climate events have impacted penguin population size during the LGP is crucial in inferring how penguin populations may respond to future climate change. We estimated the effective population size for all recent penguin taxa except for E. warhami and M. a. waitaha (where data were too limited, Supplementary Data 2) (Fig. 2c, Supplementary Figs. 10–11 and Supplementary Data 2). These analyses provide a window into long-term population histories (very recent trends cannot be accurately recovered with these methods16). Four demographic patterns emerge for this critical time interval, illuminating disparate responses of penguins to glacial-interglacial cycles (Fig. 2c). The most prevalent pattern is shared by nine lineages (Aptenodytes patagonicus, Pygoscelis antarctica, P. papua “KER”, S. demersus, S. humboldti, M. a. antipodes, M. a. richdalei, Eudyptes robustus and E. pachyrhynchus), all of which show evidence of population expansion coincident with the beginning of the LGP, followed by population decline towards the end of the LGP. In contrast to this pattern, nine lineages (A. forsteri, P. adeliae, P. papua “WAP”, P. papua “SG”, S. magellanicus, E. moseleyi, E. filholi, E. chrysolophus schlegeli, and E. sclateri) show evidence of population decline coincident with the beginning of the LGP, followed by population expansion towards the end of the LGP. Almost all of the remaining lineages show strong evidence of persistent long-term declines in populations from the early LGP to the end of the LGP. All three Eudyptula taxa and Eudyptes chrysolophus chrysolophus underwent a steep population decline spanning the LGP, while three taxa (P. papua “FAL”, S. mendiculus, and E. chrysocome) show evidence of continual population decline across the last 250 thousand years (ka).
Interestingly, taxa that increased in population size towards the end of the LGP (e.g., A. forsteri, P. adeliae, S. magellanicus, E. filholi, E. moseleyi, E. sclateri, and E. schlegeli are typically migratory, and tend to forage offshore (>50 km; see Supplementary Data 117), while taxa that decreased towards the end of the LGP (e.g., S. humboldti, S. demersus, M. a. antipodes and likely M. a. richdalei) tend to be residential, and forage inshore; see Supplementary Data 1. Taxa that disperse farther may have overcome local impacts of global climate cooling during the LGP (e.g., changes in sea-ice extent, prey abundance and terrestrial glaciation, however see18) largely by relocating to lower latitudes (e.g.,14), whereas locally-restricted taxa may have been more prone to sudden population collapses.
Penguins have the slowest evolutionary rates among birds
The integrated evolutionary speed hypothesis (IESH) proposes that temperature, water availability, population size, and spatial heterogeneity influence evolutionary rate19. Life history traits also impact the evolutionary rate, but such relationships remain incompletely understood in birds20. Penguins are long-lived, large-bodied, and produce few offspring, thus providing an ideal case study in how life history may impact evolutionary rate. We tested the IESH using three proxies for evolutionary rate: substitution rate, P and K2P distances between lineages and their ancestors (Supplementary Fig. 12 and Supplementary Data 3). We found that penguins and their sister group (Procellariiformes) had the lowest evolutionary rates of the 17 avian orders sampled by21 (Fig. 3a, Supplementary Fig. 13, and Supplementary Data 3). Because other aquatic orders also show slow rates (e.g., the aquatic Anseriformes show a significantly slower rate than their terrestrial sister group Galliformes), we hypothesize that the rate in penguins represents the culmination of a gradual slowdown associated with increasingly aquatic ecology. Intriguingly, we detected a trend toward decreasing rate over the first ~10 Ma of crown penguin evolution, followed by a marked uptick ~2 Ma, which suggests the onset of glacial-interglacial cycles contributed to a recent increase in evolutionary rates in penguins (Fig. 3b).
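The K2P (Kimura two-parameter) distance used above as a rate proxy corrects the raw proportion of differing sites (the P distance) for multiple substitutions at the same site, treating transitions and transversions separately. A minimal sketch of the standard formula, assuming two pre-aligned sequences:

```python
import math

def k2p_distance(seq1, seq2):
    """Kimura two-parameter (K2P) distance between aligned sequences.

    d = -0.5 * ln(1 - 2P - Q) - 0.25 * ln(1 - 2Q), where P and Q are
    the observed proportions of transition (A<->G, C<->T) and
    transversion differences, respectively.
    """
    purines = {"A", "G"}
    pyrimidines = {"C", "T"}
    transitions = transversions = total = 0
    for a, b in zip(seq1, seq2):
        if a not in "ACGT" or b not in "ACGT":
            continue  # skip gaps and ambiguous bases
        total += 1
        if a == b:
            continue
        if {a, b} <= purines or {a, b} <= pyrimidines:
            transitions += 1
        else:
            transversions += 1
    p = transitions / total
    q = transversions / total
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)
```

For small divergences the K2P distance is close to the raw P distance; the logarithmic correction matters as divergence grows.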
Fig. 3: Evolutionary rates in birds.
a Evolutionary rate in avian orders based on a ~19 Mbp alignment of highly conserved genome regions. Sphenisciformes and Procellariiformes have the lowest evolutionary rate among modern bird orders (One-sided Wilcoxon Rank sum test, P values < 0.05 for all pairs except for Sphenisciformes and Procellariiformes (P-values > 0.1)). Numbers at the tips represent the sample size in each group. Numbers at nodes represent the divergence times (Ma) between each order and its sister taxon and red dots within the boxplots indicate average values. We did not attempt to estimate the evolutionary rates for orders containing less than three sampled species (gray font; Musophagiformes, Mesitornithiformes, and Struthioniformes). Boxplots show the median with hinges at the 25th and 75th percentile and whiskers extending 1.5 times the interquartile range. Some bird images were downloaded from phylopic.org and were licensed under the Creative Commons (CC0) 1.0 Universal Public Domain Dedication. b Evolutionary rates inferred for extant penguin lineages at internal nodes from the maximum clade credibility tree, calculated using a 500 Mbp genome alignment. Gray shadows represent the 95% credible intervals. c–e Correlations between c, body mass and generation time (P value < 0.05), d generation time (gray dots, solid lines, P value < 0.001) or body mass (blue dots, dashed lines P value < 0.05) and average sea surface temperature, e substitution per site per generation time (gray dots, solid lines, P value < 0.001) or substitution per site per million years (purple dots, dashed lines P value < 0.01) and body mass among 18 penguins, estimated using phylogenetic correlation - Phylogenetic Generalized Least Squares Regression with the best-fitting model identified by Akaike Information Criterion. Correlations with linear models were shown with black lines. Source data is provided as a Source Data file.
Extant penguin lineages show a wide range of individual rates, and phylogenetic correlation analyses (phylogenetic generalized least squares regression) shed light on potential factors influencing this disparity (Fig. 3c–e and Supplementary Data 3). Extant penguins showed a significant negative correlation between body mass and average sea surface temperature (Fig. 3d). Despite species from warmer regions having shorter generation times (Fig. 3d), a significant negative correlation was found between evolutionary rate and average sea surface temperature (Fig. 3e), suggesting that temperature may influence penguin evolutionary rates by regulating selective pressures, but not only through its effect on metabolism22. This result is in parallel with studies that show speciation rates to be higher in polar environments than in the tropics, pointing towards faster rates of evolution and more opportunities for divergence at high latitudes23,24. We propose that these patterns together reflect the signature of climate oscillations on high latitude species: polar penguins (e.g. A. forsteri/P. adeliae) were likely forced into more northerly refugia during ice ages, subsequently recolonizing Antarctica during interglacials14. These events may have led to faster evolutionary rates as these lineages underwent population contraction-expansion cycles and were periodically forced to adapt to new environments.
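Phylogenetic generalized least squares, as used in the correlation analyses above, differs from ordinary regression in that residuals are allowed to covary according to shared evolutionary history. A minimal sketch, assuming the phylogenetic covariance matrix C (shared branch lengths under a Brownian-motion model) is supplied; this illustrates only the coefficient estimate, not model selection by AIC:

```python
import numpy as np

def pgls_fit(x, y, C):
    """Phylogenetic generalized least squares (PGLS) coefficients.

    Solves beta = (X' C^-1 X)^-1 X' C^-1 y, where C is the
    phylogenetic covariance matrix of the residuals. Returns
    [intercept, slope].
    """
    X = np.column_stack([np.ones(len(y)), x])  # design matrix with intercept
    Cinv = np.linalg.inv(C)
    return np.linalg.solve(X.T @ Cinv @ X, X.T @ Cinv @ y)
```

With C equal to the identity matrix this reduces to ordinary least squares, which makes a convenient sanity check; rescaling C by a constant leaves the coefficients unchanged.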
Putative molecular adaptations unique to penguins
As penguins became increasingly adapted to a flightless diving ecology, they encountered novel selection pressures that required modifications to their locomotory strategy, thermoregulation, sensory perception, and diet. We tested whether these phenotypic changes have been facilitated through the evolution of the underlying protein-coding genes (Supplementary Data 4) by identifying positively selected genes (PSGs), rapidly evolving genes (REGs), and pseudogenes that relate to specific adaptations including thermoregulation, oceanic diving, oxygenation, underwater vision, shifts in diet and taste, body size and immunity (see Figs. 4, 5 and Supplementary Methods for additional details and citations). These genes either differ in all penguins compared with other birds, differ in the genus Aptenodytes compared with other penguins, or are under distinct selective pressures within penguins (Supplementary Data 4). In the branch leading to the last common ancestor (bLCA) of penguins, 27 PSGs (false discovery rate [FDR] q < 0.05) and 13 REGs (FDR q < 0.05) were detected. In the bLCA of Aptenodytes, 25 PSGs (FDR q < 0.05) and 3 REGs (FDR q < 0.05) were detected. In the bLCA of penguins and four flightless/nearly flightless birds (Nannopterum harrisi, Rhynochetos jubatus, Zapornia atra, and Laterallus rogersi, see Supplementary Fig. 16a), five PSGs (FDR q < 0.05) and 38 REGs (FDR q < 0.05) were detected. Within penguins, 275 PSGs (FDR q < 0.01) were detected (Supplementary Data 4). We related the gene pathways and known functions of 15 PSGs and six REGs to penguin-specific adaptations (Fig. 4a). We also highlight five genes containing penguin-specific substitutions, seven pseudogenes, and two gene expansions (Fig. 4a, Supplementary Figs. 14, 15).
Fig. 4: Adaptive genes in extant penguin lineages.
a Genes with unique evolutionary signals in penguins and their putative adaptive function. b Gene regulatory pathways related to light transmission. c Phylogenetic tree of 45 avian species showing two mutation sites (HBA-αA, A140S, and HBB-βA, L87M) of hemoglobin genes in penguins (marked in red) and outgroups. d Positive selection at multiple sites (41, 62, 111, 113, 127, 141) on the bLCA of extant penguins for MB gene and the structural effects of amino acid substitutions in the chicken MB gene. Molecular models of the chicken MB gene and the MB gene with penguin-specific substitutions may affect the stabilization of MB. Source data is provided as a Source Data file.
We identified three REGs that are shared by penguins and other flightless/nearly flightless birds. These genes are likely associated with the shortening, rigidity, and increased density of the forelimb bones which contribute to the flipper-like wing of penguins (Fig. 4a). TBXT and FOXP1 are related to the development of articular cartilage, tendons, and limb bones25,26. SMAD3 is involved in the transforming growth factor-beta signaling pathway, which is important for maintaining articular cartilage and stimulating osteogenesis and bone formation27. Perhaps most interestingly, TNMD, a PSG, is expressed during the differentiation and developmental phase of limb tendon, ligament, and collagen fibrils, and loss of TNMD can result in reduced tenocyte density28. We hypothesize that TNMD may be key to the nearly wholesale replacement of penguin distal wing musculature by tendons, which stiffens and reduces heat loss to the high surface area flipper (Supplementary Fig. 16a-d). We also identified two genes KCNU1 and KCNMA1 that are related to calcium sequestration to be expanded in the genomes of both penguins and grebes (Podiceps cristatus and Podilymbus podiceps) (Fig. 4a, Supplementary Fig. 15). These genes likely contribute to the high bone density characteristic of these taxa, which helps reduce buoyancy for deep diving.
Penguins have densely-packed waterproof feathers, thick skin, and a layer of subcutaneous fat enabling them to thermoregulate in cold environments. We identified four genes under selective pressure in common ancestors of penguins that are related to thermoregulation (Supplementary Data 4). These genes (APPL1, TRPC1, EVPL) showed evidence of positive selection or rapid rates of evolution on the bLCA of extant penguins but not in other birds (Fig. 4a). The white adipose tissue of penguins is important for survival in the cold, acting as an insulative layer and an energy reserve, particularly prior to catastrophic moult29. We hypothesize several of these genes contribute to white adipose fat storage and hence survival in cold environments. APPL1 (Supplementary Fig. 16e) and TRPC1 are related to glucose levels and fatty acid breakdown through adiponectin30,31.
Penguins function under hypoxic conditions during deep dives in part via myoglobin concentration and utilizing anaerobic metabolism32,33. We identified seven genes related to oxygenation that are under positive selection or have penguin-specific substitutions in penguins. Transferrin Receptor 1 (TFRC) shows a positive selection in penguins (Supplementary Fig. 16f). Previous experimental work in cells has reported that TFRC messenger RNA is expressed in an oxygen-dependent manner34. Importantly, TFRC is a top candidate gene for the hypoxia response of domesticated cattle35. We hypothesize that TFRC has contributed to a convergent adaptation to withstanding hypoxia in penguins. Interestingly, FIBB and ANO6, which are involved in blood coagulation, showed a signal of positive selection in Aptenodytes, but not in other genera (Supplementary Fig. 17). Among all penguins, Aptenodytes have the capacity for the deepest diving (>500 m depth)36, and thus, these gene variants may enable these species to dive to extreme depths. While none of the hemoglobin genes were PSGs (P-value: >0.05), we observed that HBA-αA (A140S) and HBB-βA (L87M) genes (Fig. 4c and Supplementary Fig. 18) show penguin-specific amino acid substitutions that are highly conserved across all penguin species, making them candidate molecular adaptations for surviving deep oceanic dives under hypoxic conditions (see also ref. 37). MB is an oxygen-binding myoglobin gene that shows positive selection at multiple sites both between penguins and other birds and among penguins (Fig. 4d and Supplementary Fig. 16g), suggesting that these penguin-specific substitutions may impact the stability of the resulting myoglobins, as seen in extreme deep-diving cetaceans38. While cormorants and petrels also undertake deep (>70 m) dives, we did not observe selection for TFRC and hemoglobin genes in these groups (Fig. 4c). Another PSG, TRPC4, is involved in the cardiovascular system39. 
Specifically, TRPC4 may help widen blood vessels to decrease blood pressure during deep dives40.
Penguins frequently forage in low light, and exhibit specializations for vision in dim, blue-green marine environments41,42. Morphological research has shown that at least some penguins are cone trichromats with only three functional cone photoreceptor types, blue-shifted long-wavelength visual pigments, and no red oil droplets41. Genomic data support trichromatism in all penguins, in contrast to most other birds which are tetrachromats. The inactivation of the green cone opsin gene (RH2) in the stem penguin lineage is inferred by a 12-base pair (bp) deletion, which encompasses the codon for the critical chromophore-binding lysine (K29643) (Fig. 4a and Supplementary Fig. 19a). As all penguins share this deletion, reduced color vision must have occurred in the penguin stem lineage, similar to secondarily aquatic mammals44. Although penguins lack green cones, the functional orthologs of the remaining visual opsins in penguins strongly indicate the retention of violet (SWS1), blue (SWS2), and red (LWS) cones, plus rods (RH1) (Fig. 5a). This genetic signature is concordant with our experiments on Pygoscelis papua (see Supplementary Methods), which demonstrate a capacity for ultraviolet light perception at 365 nm, likely conferred by the SWS1 opsin. Furthermore, the peak wavelength sensitivity (λmax) of penguin LWS opsins show evidence of shifts in spectral sensitivity to better match ambient underwater light. Relative to key avian model species (e.g., Taeniopygia guttata, Columba livia, Gallus gallus) and Procellariiformes, penguins possess substitutions at five key tuning sites in LWS, four of which (A180, F277, A285, and S308) are associated with blue-shifting this pigment45 (Supplementary Fig. 19b). This suggests that this opsin has been fine-tuned for marine foraging, as observed in cetaceans44. CYP2J19, which encodes a carotenoid ketolase responsible for producing red oil droplets in avian cones46, has been inactivated in most penguins (Supplementary Data 4). 
Colored oil droplets are thought to fine-tune color vision46, though this comes at the cost of decreased visual sensitivity. Deactivation of CYP2J19 likely allows for higher retinal sensitivity when foraging in dim light conditions, as seen in nocturnal owls and kiwis46. Beyond these key genes, we note that two scotopic photoresponse genes, TMEM30A (PSG) and KCNV2 (REG), show evidence of selection in penguins, and two others, CNGB1 and GNB3, each have a site mutation unique to penguins (Supplementary Fig. 19c, d). These genes play an important role in the transmission of light (Fig. 4b), and may further enhance visual sensitivity at low light levels, as mutations or loss of these genes result in a reduced scotopic photoresponse47,48.
A wholesale reduction in gustation capacity appears to have accompanied the shift to underwater prey capture and consumption in penguins. We verified that penguins only retain genes associated with detecting sour and salty tastants, and lack functional copies of genes linked to umami, sweet and bitter tastants49 (Figs. 4a and 5a). The mutational loss of capacity for umami taste in penguins is puzzling, given the continued consumption of amino acid-rich prey. Intriguingly, the loss of umami has also been reported in secondarily aquatic mammals50. Potential explanations include a lower reliance on taste when swallowing food whole or weakened ability to taste prey due to cold temperatures and the sodium content of seawater (reviewed in50).
A strong genomic indicator of diet is presented by chitinases that are expressed in the gastrointestinal tract51. The chitinase genes (CHIAs) exist as several paralogs, and the retention or loss of these paralogs in mammals has been correlated with diet51. Retention of intact CHIAs correlates with a higher degree of insectivory, and CHIA losses tend to occur in lineages that undergo dietary shifts to carnivory or herbivory. We examined CHIAs in penguins, and in contrast to most examined birds, which have one to four intact CHIAs52, penguins have a single pseudogenized CHIA. At first glance, it is perplexing that penguins would lose CHIAs, as many species consume large amounts of crustaceans. Fossil evidence, however, reveals that stem penguins focused primarily on larger prey items like fish and squid, and that adaptations for capturing smaller planktonic prey arose as recently as the Pliocene6. We propose that the two inactivating mutations shared by extant penguins (Fig. 5) evolved during a ~50 Ma interval during which stem penguins consumed little or no arthropod prey.
Co-evolution between hosts and pathogens is pervasive in vertebrates. Given the range of different climatic niches occupied by penguins, and the differences in pathogen assemblages to which they are undoubtedly exposed, penguins may have undergone significant adaptation to local pathogen pressures53. Accordingly, we detected 51 PSGs in penguins that have a role in immunity (Supplementary Data 4). Several of these genes might be under positive selection corresponding to host-pathogen co-evolution. For instance, we confirm previous reports53,54 that the bacterial-recognizing Toll-like receptors TLR4 and TLR5 (Figs. 4a and 5b) are positively selected in penguins. Moreover, the positively selected sites located proximal (<5 Å) to the lipopolysaccharide-binding site in TLR4 (codon 276, homologous to chicken codon 30255) and at a flagellin-binding site in TLR5 (codon 3356) (Fig. 5b) are both in domains crucial for bacterial recognition. In addition, we detected several other pattern-recognition receptors, such as IFIT5, that are also under positive selection in penguins (Fig. 4a). IFIT5 is a cellular detector of viral RNA57, and we found a cluster of positively selected sites located in a connecting helix forming part of the RNA-binding cleft (codons 407, 409, 413, and 421, corresponding to human codons 412, 414, 418 and 42658,59) (Fig. 5b). This may imply that penguin IFIT5 has undergone adaptation to different viral RNA motifs in response to viral pathogen pressure. We also found evidence of positive selection at viral targets of cell entry. For example, CD81 is a co-receptor required for glycoprotein-mediated hepatitis C viral entry into cells in mammals60, and positive selection has been reported at the glycoprotein interface in bat CD8161. 
We also found a cluster of positively selected sites in the hepatitis C glycoprotein interface in penguin CD81 (sites 181, 182, and 186, corresponding to human sites 180, 181, and 185, and penguin site 86, corresponding to human site 185) (Fig. 5b). This may suggest that penguins have experienced co-evolution with a viral pathogen that relies on CD81 for cell entry. Finally, we detected positive selection in penguin transferrin, which is part of the “nutritional” immune system that sequesters iron from iron-scavenging pathogens62. Outbreaks of diphtheritic stomatitis in Megadyptes antipodes have caused increasing chick mortality and are hypothesized to be related to increasing susceptibility to Corynebacterium as a secondary infection63 potentially triggered by chick malnutrition due to changes in diet, and potentially iron intake. The co-evolutionary arms race to sequester and scavenge iron has also been detected in mammals and fishes (e.g.,64). Taken together, these observations illustrate that immune genes have undergone diversification in penguins. Furthermore, many positively selected sites were clustered in regions known to be involved in pathogen binding, which provides evidence for extensive host-pathogen co-evolution during the diversification of penguins into novel pathogen environments.
Extant penguins range from ~1 kg in Eudyptula spp. to 40 kg in Aptenodytes forsteri, but giant fossil penguins exceeded 100 kg65. We found two genes associated with large body size that are under positive selection in Aptenodytes compared to all other penguin lineages (Fig. 4a). CREB3L1 is important during bone development, and vertebrates lacking CREB3L1 exhibit reduced growth66. SMARCAD1 is related to the skeleton and plays a role in transcriptional regulation, maintenance of chromosome stability, and various aspects of DNA repair. Vertebrates with mutant SMARCAD1 also exhibit reduced growth67. We hypothesize that these genes have contributed to the large body size of Aptenodytes. Although genetic data are unavailable for stem penguins, the recovery of Aptenodytes as sister to all other extant penguins and the large size of many stem penguins (e.g., Kumimanu and Kairuku) suggest that positive selection in these genes could be ancestral for crown penguins, with selection relaxed in non-Aptenodytes taxa.
Discussion
Our comprehensive study encompassing all extant and many fossil penguins provides a new window into the processes that have shaped >60 Ma of evolution. Our phylogenomic analyses confirm the Zealandian origin of penguins, extensive radiation before dispersal to South America and Antarctica, and a second pulse of speciation at the onset of the ACC. Our study reveals new evidence that penguin speciation events were driven by changes in global climate and oceanic dispersal, leading to allopatric speciation across the Southern Hemisphere. Recent speciation in Eudyptes, Megadyptes, Spheniscus, and Eudyptula has been rapid, with a complicated history of gene flow and ILS that makes species boundaries within these taxa difficult to untangle (e.g.,5,14). Importantly, the mechanisms that have shaped penguin diversification in the past (e.g., development of major current systems, geological uplift of oceanic islands) remain important for taxa that appear to still be in the process of speciation today (e.g., within Pygoscelis papua and between Eudyptes chrysolophus chrysolophus/E. c. schlegeli, E. pachyrhynchus/E. robustus, E. chrysocome/E. filholi/E. moseleyi, and Spheniscus spp.5,14,68).
By comparing our penguin genomes to >300 other avian genomes, we demonstrate that penguins and Procellariiformes have the lowest evolutionary rates observed among birds to date. These low evolutionary rates seem to belie the profound adaptations penguins show for a secondarily aquatic existence, but a synthetic reading of the fossil record and the genomic data suggests that penguins rapidly acquired many of the key features associated with their aquatic life very early in their diversification, after which rates of change slowed towards the present. Genomic signals of molecular adaptation, with evidence of positive selection or penguin-specific substitutions, were identified in a variety of functional categories, including genes associated with oceanic diving, thermoregulation, oxygenation, underwater vision, taste, and immunity. Though the overall evolutionary rate in penguins is slow, we identified higher evolutionary rates in crown penguin ancestors than in extant penguins, as well as shifts in rates in individual lineages over the past 14 Ma.
While evolutionary rates and sea surface temperatures appear to be negatively correlated, evolutionary rates and body mass are positively correlated, suggesting that large-bodied species inhabiting colder climates are better equipped to adapt to new environments during climate events. Indeed, our demographic results reveal that penguins have had a complicated history, shaped by climatic oscillations, which has led to population crashes in those species reliant on restricted niches and ecologies. Genomic evidence highlights how some penguin populations collapsed during previous climatic shifts13,14, and the risks of future collapses are ever-present as penguin populations across the Southern Hemisphere are faced with rapid anthropogenic climate change69. While our analyses suggest that ocean temperature may regulate certain selection pressures, the current pace of warming combined with limited refugia in the Southern Ocean will likely far exceed the adaptive capability of penguins70. Over 60 Ma these iconic birds have evolved to become highly specialized marine predators, and are now well adapted to some of the most extreme environments on Earth. Yet, as their evolutionary history reveals, they now stand as sentinels highlighting the vulnerability of cold-adapted fauna in a rapidly warming world.
Methods
Genome sequencing, assembly, and annotation
We analyzed 27 genomes comprising all extant and recently-extinct penguin species, subspecies, and major lineages. Twenty-one of the high-coverage genomes have been published by members of our consortium for this project8,9. To supplement the dataset, we sequenced three high-coverage genomes from the remaining Pygoscelis papua lineages from Falkland Islands/Malvinas “FAL”, Kerguelen Island “KER” and South Georgia “SG” (see68), and partial genomes from the recently-extinct Eudyptes warhami, M. a. richdalei and M. a. waitaha (see ref. 5 and citations within). See Supplementary Methods for more detail on sample collection, extraction, sequencing, assembly and sex chromosomes. As such, we present the most comprehensive genomic dataset spanning all modern penguins and, to the best of our knowledge, the first genomic dataset encompassing an entire multi-species vertebrate order. To compare our penguin genomes to other bird genomes, we obtained 361 bird genomes recently released by20 as part of the B10K project (https://b10k.genomics.cn), representing 36 orders and 218 families.
Additional data on modern and fossil penguins
We expanded the morphological dataset of6 by incorporating additional fossil penguin species and seven additional characters. The final matrix comprised 72 fossil and extant penguin taxa, two outgroup taxa, and 281 morphological characters (Supplementary Data 5). The average sea surface temperatures were obtained from spot locations from each lineage (Supplementary Data 3). Generation times of each extant lineage were obtained from the IUCN. For M. a. richdalei we used the M. a. antipodes generation time (Supplementary Data 2) (see Supplementary Methods).
Phylogenomic inference and divergence time estimation
We combined all penguin genomes with the morphological matrix to resolve the timing and drivers of >60 million years of penguin evolution. In doing so, we update previous phylogenies (e.g., 4,5,7,9) to include genomes and morphology from all penguin taxa, including all major P. papua lineages and recently-extinct taxa. To explore the diversification of penguins, we undertook multiple phylogenomic analyses encompassing different subsets of taxa (Fig. 1, Supplementary Figs. 2–4 and Supplementary Software).
We aligned and merged our genomes to the 363-bird alignments from the B10K project20. The final alignments were extracted and multiple hits were filtered out for downstream analyses. We then created four alignments accounting for different subsets of taxa: (1) all putative species, subspecies and lineages (27 penguin taxa + 5 outgroups in total); (2) all extant lineages (24 penguin taxa + 5 outgroups in total), removing Eudyptes warhami, M. a. richdalei and M. a. waitaha from the former alignment; (3) all putative species and subspecies, removing P. papua “FAL”, “KER” and “SG” lineages (21 penguin taxa + 5 outgroups in total); and (4) only putative species (19 penguin taxa in total), further removing Eudyptula minor “BAN” and Eudyptes chrysolophus schlegeli. We also created one large genome alignment with all 385-bird taxa (not including Eudyptes warhami, M. a. richdalei and M. a. waitaha) (see Supplementary Methods).
To verify the phylogenomic relationships of modern penguins, we ran coalescent-based and concatenation-based phylogenies accounting for the different subsets of taxa described above (see Supplementary Methods). The topology for all clades was strongly supported and identical using all methods (Supplementary Fig. 2 and Supplementary Data 5), except for the placement of Eudyptes warhami among Eudyptes lineages in a single phylogeny.
We estimated the divergence time between modern taxa using the calibration points in ref. 5 (Supplementary Data 5), except we removed Pygoscelis calderensis based on recent revisions of topology7,9. We also added a “Crown Procellariiformes” (the sister group to penguins) calibration point to calibrate the divergence between albatrosses and storm petrels. We also added three tip dates for extinct taxa, using the fossils Madrynornis mirandus, Spheniscus muizoni, and the fossil specimen NMNZ S.046318 (Eudyptes sp.) (see Supplementary Methods). All trees shared the same topology with our initial analyses, with the exception of the placement of the extinct Megadyptes antipodes waitaha, and had similar divergence times to each other (Supplementary Fig. 2b). We then generated a Bayesian total-evidence dating tree using the fossilized birth-death process (Fig. 1), expanding on4 by including more species and genomic data, and by updating the morphology. We also calculated the genetic distances between our modern penguin genomes (Supplementary Fig. 12).
Ancestral range estimation
We estimated the ancestral distribution of penguins with the total-evidence dated phylogenomic tree and twelve models, expanding on6,7 and following6. We used ten geographical areas and six time slices, and normalized distances against the shortest pairwise distance within each time slice. We then undertook standard model testing (likelihood ratio test and Akaike information criterion) to identify the best-fitting model for our data. We also used a Biogeographical Stochastic Mapping method to account for inferred dispersal, vicariance, and other biogeographic events. See Supplementary Methods for more details.
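The model-testing step mentioned above can be illustrated with a stdlib-only Python sketch. This is a generic implementation of AIC and a likelihood-ratio test for nested models (e.g., two biogeographic models differing by one free parameter), not the exact procedure used in the paper; the chi-squared p-value shortcut is valid only for one degree of freedom.

```python
import math

def aic(log_lik, n_params):
    """Akaike information criterion: AIC = 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * log_lik

def lrt_pvalue(lnL_null, lnL_alt, df=1):
    """Likelihood-ratio test for nested models.
    The statistic 2 * (lnL_alt - lnL_null) is compared to chi-squared(df);
    for df = 1 the survival function equals erfc(sqrt(stat / 2)), which
    keeps this stdlib-only (no scipy needed)."""
    if df != 1:
        raise NotImplementedError("stdlib shortcut is valid only for df = 1")
    stat = max(2.0 * (lnL_alt - lnL_null), 0.0)
    return math.erfc(math.sqrt(stat / 2.0))
```

For example, a one-parameter extension that raises the log-likelihood from -100 to -96 gives a test statistic of 8 and a p-value below 0.005, favoring the richer model by both LRT and AIC.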
Quantifying introgression and ILS between taxa
Controversy remains regarding taxonomic boundaries between some closely related penguin taxa (see Supplementary Methods for more details). We undertook multiple analyses to assess the discordance of gene trees and levels of ILS and introgression (Supplementary Data 6). We first calculated the frequency of gene tree discordance for each internal branch and summarized the topologies for three different gene tree data sets. We assessed levels of ILS and introgression by quantifying them via internal branch lengths between all species (Supplementary Software). We tested the direction of introgression among lineages and assessed which genomic regions have introgressed by analyzing 16 five-species combinations with symmetric phylogenies (Supplementary Data 2). We also examined introgression by selecting different taxa from different genera and some closely related lineages/species. Finally, we assessed the cessation of gene flow between six closely related penguin groups. See Supplementary Methods for more details.
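Introgression tests on multi-species combinations typically build on ABBA-BABA site-pattern counts. As a hedged illustration only (the study's actual analyses use dedicated tools described in the Supplementary Methods), Patterson's D can be computed from biallelic site patterns like this:

```python
def count_site_patterns(sites):
    """Count ABBA and BABA patterns over biallelic sites.
    Each site is a (p1, p2, p3, outgroup) tuple of alleles for the
    topology (((P1, P2), P3), Outgroup); the outgroup allele is taken
    as the ancestral 'A' state."""
    abba = baba = 0
    for p1, p2, p3, out in sites:
        derived = {allele for allele in (p1, p2, p3) if allele != out}
        if len(derived) != 1:
            continue  # invariant or multi-allelic: not informative
        d = derived.pop()
        if p1 == out and p2 == d and p3 == d:
            abba += 1
        elif p1 == d and p2 == out and p3 == d:
            baba += 1
    return abba, baba

def patterson_d(abba, baba):
    """Patterson's D = (ABBA - BABA) / (ABBA + BABA): ~0 under ILS alone;
    >0 suggests P2-P3 gene flow, <0 suggests P1-P3 gene flow."""
    if abba + baba == 0:
        raise ValueError("no informative sites")
    return (abba - baba) / (abba + baba)
```

In practice significance is assessed with a block jackknife across the genome, since neighboring sites are not independent.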
Demographic history of penguins
We undertook analyses of demographic history by profiling heterozygosity across each genome (Supplementary Fig. 10) and estimating effective population size (Ne) over the last 1 Ma. As the number of heterozygous sites for M. a. waitaha and Eudyptes warhami was too low, we only present analyses for M. a. richdalei. We used the species divergence time tree to estimate the mutation rate and detailed the divergence times in Supplementary Data 2. We focused on the last 500 Kya, a period encompassing dramatic glacial/interglacial cycles (see Supplementary Methods).
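Heterozygosity profiling of the kind described here reduces, at its core, to counting heterozygous genotype calls in windows along each chromosome. The toy sketch below shows only that final step; real pipelines (as in this study) call genotypes from mapped reads and feed consensus sequences to coalescent methods such as PSMC.

```python
def windowed_heterozygosity(genotypes, window_size):
    """Proportion of heterozygous calls in non-overlapping windows.
    `genotypes` is a per-site sequence of 0 (homozygous) / 1 (heterozygous)
    calls along one chromosome; missing sites are assumed pre-filtered."""
    props = []
    for start in range(0, len(genotypes), window_size):
        window = genotypes[start:start + window_size]
        props.append(sum(window) / len(window))
    return props
```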
Comparison of evolutionary rate
The evolutionary rate between penguins and other birds was compared using both genomic distance and rate comparisons (Supplementary Fig. 12). We calculated P and K2P distances between taxa following the formulas: P distance = p + q and K2P distance = −1/2 ln((1 − 2p − q) * sqrt(1 − 2q)). Here, p is the proportion of transitions while q is the proportion of transversions between two genomes. We also estimated the evolutionary rate of penguins as the substitution rate (substitutions per site per year) = substitutions per site/divergence time. The correlation between the substitution rate and sea surface temperature for extant penguins was tested using a phylogenetic generalized least squares (PGLS) regression (Fig. 3 and Supplementary Data 3). We also conducted PGLS regression analyses to determine the correlation between sea surface temperature and body mass or generation time (Supplementary Software). We also compared genome size among birds to test whether genome size correlates with the proportion of repeat elements (Supplementary Data 3). See Supplementary Methods for more details.
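The distance and rate formulas above translate directly into code. This sketch implements the P distance, the K2P distance exactly as written in the text, and the simple rate estimate (the rate formula divides the pairwise distance by divergence time, as stated; the numbers passed in any example are illustrative, not values from the study):

```python
import math

def p_distance(p, q):
    """Uncorrected distance: transition proportion p plus transversion
    proportion q between two aligned genomes."""
    return p + q

def k2p_distance(p, q):
    """Kimura two-parameter distance:
    K2P = -1/2 * ln((1 - 2p - q) * sqrt(1 - 2q))."""
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

def substitution_rate(distance, divergence_time_years):
    """Substitutions per site per year, as distance / divergence time."""
    return distance / divergence_time_years
```

Note that the K2P correction always exceeds the P distance for nonzero p and q, because it accounts for multiple substitutions at the same site.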
Putative molecular adaptations
We undertook comparative genomic analyses across all extant penguin taxa to identify genes and regulatory changes contributing to the remarkable morphological and physiological variation within penguins. We do not include Eudyptes warhami, M. a. richdalei, and M. a. waitaha or additional P. papua lineages (“FAL”, “SG”, “KER”) in these analyses. Our analyses expand on previous studies that have only examined A. forsteri and P. adeliae (e.g., 8,49), or that have relied only on site-based analyses for penguins (e.g., 7).
To understand the adaptive evolution of specific phenotypes in the branch leading to the last common ancestor of penguins, we identified positively selected genes, rapidly evolving genes, and evolutionarily conserved genes for extant penguins under a branch model and a branch-site model (see Supplementary Methods). We obtained orthologous genes against the chicken genome for 44 bird species including penguins, retaining a total of 8716 high-confidence orthologous genes. These genes were used to conduct a multiple sequence alignment. We then detected positively selected and rapidly evolving genes in the branch leading to the last common ancestor of penguins, and in the branches of the last common ancestor of penguins plus four flightless/nearly flightless birds (see Supplementary Methods for more details). Genes with a false discovery rate adjusted P-value less than 0.05 were treated as candidates for positive selection or rapid evolution (Supplementary Data 4). To further characterize penguin-specific changes, we predicted whether an amino acid substitution may impact the biological function of a protein by comparing penguins to the 23 other birds, and we scanned for premature stop codons in each gene alignment. We also examined specific genes individually. In addition, we annotated and undertook further qualitative comparisons of the genes identified in penguins with over 300 other avian species to explore the distribution of these changes in other birds (Supplementary Data 7). See Supplementary Methods for more details. While transcriptional evidence to support adaptive inferences is highly important, such data remain unobtainable in our study due to cultural and ethical hurdles.
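The premature-stop-codon scan mentioned above can be sketched as follows. This is a simplified stand-in for the study's pipeline: it walks a coding sequence in frame, skips codons containing alignment gaps, and reports in-frame stops occurring before the terminal codon (the signature of a pseudogene such as the penguin CHIA).

```python
# Standard-code stop codons (DNA alphabet).
STOP_CODONS = {"TAA", "TAG", "TGA"}

def premature_stops(cds):
    """Return 0-based codon indices of in-frame stop codons occurring
    before the terminal codon. Codons containing alignment gaps ('-')
    are skipped; any trailing partial codon is ignored."""
    usable = len(cds) - len(cds) % 3
    codons = [cds[i:i + 3] for i in range(0, usable, 3)]
    hits = []
    for idx, codon in enumerate(codons[:-1]):  # exclude the terminal codon
        if "-" in codon:
            continue  # alignment gap column
        if codon.upper() in STOP_CODONS:
            hits.append(idx)
    return hits
```

A full inactivation scan would also track frameshifting indels (such as the 12-bp RH2 deletion discussed in the Results), which this toy function does not attempt.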
Behavioral study of gentoo penguin vision
We undertook a behavioral study on captive P. papua, as a representative penguin species, at SEALIFE Kelly Tarlton’s Aquarium, Auckland, New Zealand, to examine their ability to see in the ultraviolet (UV) spectrum. A Tank007 TK566 black OEM 365 nm torch (Shenzhen Grandoor Electronic Co., Ltd., China) was projected onto the snow in the enclosure, and penguins were observed to determine whether they would follow the movements of the torch’s UV projection. At least five penguins appeared to be able to follow the torch’s projection. No such interest was displayed when the torch was turned off, demonstrating that P. papua are able to see in the near-UV spectrum (Supplementary Movie 1).
Reporting summary
Data availability
The sequencing data and genome assemblies generated in this study have been deposited in the NCBI database under BioProject PRJNA722815 and PRJNA556735, as well as the CNSA of the CNGBdb database under the accession number CNP0000605. Appendix datasets (BioGeoBEARS results and PSMC results) have been deposited on Figshare [https://doi.org/10.6084/m9.figshare.c.5535243.v1]. Supplementary data files and source data generated in this study are provided in the Supplementary Information and Source Data file. The following datasets were also used in this study: CNSA accession number CNP0000505, and NCBI Genbank accession number NP_990272, NP_001071646, NP_001071647. Source data are provided in this paper.
Code availability
Analyses were performed using open-source software tools and the detailed parameters for each tool are shown in the relevant methods in Supplementary Information. The custom scripts and codes used in this study are also available in Supplementary Software files.
Acknowledgements
We thank the British Antarctic Survey, Institut Polaire Français (IPEV), Laura Seaman, and staff at SEALIFE Kelly Tarlton’s Aquarium, Simone Giovanardi, Misha Vorobyev, David Ainley, Jason Turuwhenua, Nic Dussex, Kieren Mitchell, Damien Fordham, Stuart Brown, James Cahill, Shanlin Liu, Yun Zhao, Fang Li, Min Wu, Yun Wang, Guangji Chen, and B10K members for sample/data collection and discussions. This project was supported by the National Key Research and Development Program of China (MOST) grant (no. 2018YFC1406901) to D.-X.Z. and the International Partnership Program of Chinese Academy of Sciences (no. 152453KYSB20170002) to G.Z. This project was also supported by the National Natural Science Foundation of China grant (no. 31901214 and No. 32170626) to S.F. and a Villum Investigator grant (no. 25900) from The Villum Foundation to G.Z. This project was also funded by the China National GeneBank.
Source data
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Introduction
Penguins are one of the most iconic groups of birds, serving as both a textbook example of the evolution of secondarily aquatic ecology and as sentinels for the impacts of global change on ecosystem health1. Although often associated with Antarctica in the popular imagination, penguins originated more than 60 million years ago (Mya), evolving wing-propelled diving and losing the capacity for aerial flight long before the formation of polar ice sheets2. Over time, penguins evolved the suite of morphological, physiological, and behavioral features that make them arguably the most uniquely specialized of all extant birds. These adaptations have allowed penguins to colonize some of the most extreme environments on Earth.
Previous phylogenetic studies have yielded insights into penguin evolution, yet have been limited by sampling issues (e.g., number of lineages incorporated and quality of molecular markers3,4,5,6,7). Genomic studies have shed light on the diversification of extant penguins7,8,9 but have not integrated extinct species. Because nearly three-quarters of known penguin species are represented only by fossils (e.g., 2,3), sampling extinct species is crucial for improving phylogenetic resolution and dating accuracy, reconstructing biogeographic events, and understanding the environmental context in which key adaptations arose. While several studies have included fossil penguins, these utilized only mitochondrial genomes and/or small numbers of nuclear genes (e.g., 3,4,5,6), limiting their ability to disentangle confounding processes, such as historical and ongoing introgression and incomplete lineage sorting.
Here, we take a comprehensive approach to inferring the tempo and drivers of penguin diversification by combining genomes from all extant and recently-extinct penguin lineages (27 taxa) (Table 1), stratigraphic data from fossil penguins (47 taxa), and morphological and biogeographic data from all species (extant and extinct) (Fig.
Source: “Origin and diversification of penguins,” Science Sessions, PNAS (https://www.pnas.org/post/podcast/origin-and-diversification-penguins)
PNAS: Welcome to Science Sessions, the podcast of the Proceedings of the National Academy of Sciences, where we connect you with Academy members, researchers, and policymakers. Join us as we explore the stories behind the science. I'm Paul Gabrielsen, and I'm speaking with Juliana Vianna of the Pontifical Catholic University of Chile and Rauri Bowie of the University of California, Berkeley. In a recent PNAS article, they and their colleagues examined genomes of 18 species of penguins to learn more about their origins. Penguins are often associated with Antarctica, but according to the authors, that's not where they started out.
Have either of you had the opportunity to observe penguins in the field? Juliana?
Vianna: Yes. So we have been capturing penguins for several years in different parts; Humboldt and Magellanic on the coast of Chile; in Patagonia, rockhopper penguins, macaroni penguins. And we also have been working in Antarctica for several years with gentoo, chinstrap, Adélie. And I have been to South Africa as well. I could see the African penguin in the wild, and also in New Zealand the very endangered yellow-eyed penguin and the little penguin. So yeah, I am lucky to have seen 11 species of penguins already.
Bowie: I've seen only two species of penguin, both more associated with human habitats. I've spent quite a bit of time with African penguins, rehabilitating them after an oil spill. And then I've been to New Zealand to see the smallest species, the little penguin. But I hope to see many more.
PNAS: Tell me about the diversity of penguin species across the Southern Hemisphere.
Vianna: Only two species of penguins are distributed around Antarctica, the entire continent. These are the emperor and Adélie. You can find several other species in the Antarctic Peninsula, like gentoo, Adélie, chinstrap, and the macaroni penguin. And there [are] several other species that are sub-Antarctic: king, macaroni, rockhopper penguins. So they are found north and south of the Antarctic polar front and also have species that are north of the subtropical front, like the northern rockhopper penguins. You can find species associated with the cold waters of the Benguela Current in South Africa. And you can find species in Australia and New Zealand. And also there [are] several species in South America: Magellanic, rockhopper, Humboldt, macaroni, king. Humboldt, for example, can go up the coast of Chile to Peru, associated with the cold waters of the Humboldt Current. And the one that is at the lowest latitude is the Galápagos penguin that you can find in the Galápagos Islands. So this is about a general distribution.
Bowie: I think it's just fascinating that penguins managed to occupy some of the most remote landmasses on Earth; on these really tiny little islands you can find a penguin colony. So they've been a remarkably successful seabird group from that perspective.
PNAS: Before your study, what was known about the origin and diversification of penguins? Rauri?
Bowie: So there's been an interesting debate in the literature with exactly where the two largest species—the very charismatic king and emperor penguins—where they fitted in the family tree for penguins. And one of the ideas was that they were closer to some of the other living penguins, some of the smallest species nested inside the family tree. And then the other idea was that they were most distantly related to all of the other ingroup—we say they were sister to the rest of the penguins. Penguins also have a rich fossil history. So although modern penguins only date back to about 20 million years, penguins go all the way back to 60 million years. So given this debate, one of the things that we wanted to try and resolve with our study was where exactly do king and emperor penguins go in the family tree. And that would then help us understand how penguins originated and where exactly their origination occurred and when. And so you know, our main conclusions resolve one of these longstanding questions. And we were able to determine that penguins originated along the coast of Australia and New Zealand and the nearby South Pacific islands, and that this occurred about 22 million years ago.
PNAS: How do you determine where and when penguins may have originated?
Vianna: So we use 22 genomes of 18 species, and we could reconstruct the phylogeny; and we use the recent distribution of all of these species, and we modeled the best distribution of the species. And we could find the region of New Zealand and Australia. But we also used ecological data. We obtained satellite information of sea surface temperature, chlorophyll, and also a salinity for each of the 18 species. And we model the best historical niche distribution. And we could also find that the historical maximum temperature was nine degrees Celsius, which match with the geographical region that we found. So both data, ecological and genomic, support the same timing. Rauri, maybe you want to add something?
Bowie: No, I think that's good. So one of the things that we were able to do by reconstructing where penguins originated and using the phylogeny, as well as this large amount of ecological data that's available, is we were able to show how penguins have been able to diversify, to occupy the incredibly different thermal niches that they live in today. And so, by mapping this environmental data back across the family tree of penguins, we could show that penguins originated in temperate waters, probably around nine degrees Celsius, or about 48 degrees Fahrenheit, which is roughly the water temperatures around Australia and New Zealand today. And then from there, we see this really remarkable twin axes of radiation, one which is down into the really frigid water of Antarctica and the sub-Antarctic, which you can get down to negative degrees. And then the other is, as the Drake Passage between the tip of South America and Antarctica opened and changed how the currents flowed around the bottom of the world, allowing this circular current to form, penguins were able to move up the coast of South America, eventually reaching the Galápagos Islands, which of course are right on the equator.
And so as a consequence of that, they can occur in temperatures right up to 26 degrees or even slightly warmer. So you see penguins being able to span from their ancestral conditions of around 48 degrees Fahrenheit to being able to then colonize freezing temperatures, as well as up to about 80 degrees Fahrenheit on the equator. So really occupying a broad diversity of thermal environments.
PNAS: What did genetic adaptations allow them to do that they couldn't before?
Bowie: So one of the advantages of using the whole genomes is that it provides a record of how genes have changed through time and allows us to estimate different levels of selection across different genes. And so we were able to take all the coding genes from the genome and look at how selection had operated. And from that, we could identify certain parts of the genome that have been what we call enriched or overemphasized in different penguin lineages. And some of the genes that came out were related to pathways, for example, that relate to how blood vessels constrict and expand. And if you think about it, that makes a lot of sense because penguins that live in really cold temperatures, if they can reduce the circulation of blood to their extremities, they can maintain a warmer core body temperature, much as many marine mammals do in the same vein. Similarly, penguins have really interesting adaptations for binding oxygen in the same way that many species that live at high altitudes do. And this fits in with some penguins being able to dive to relatively deep depths where really efficient oxygen metabolism is really important.
And then another category that we see is related to osmoregulation. And that again, when penguins are limited with how much fresh water they may have access to, and as a consequence need to be able to drink seawater, being really efficient in your osmoregulatory pathway allows you to, for example, excrete salts and it facilitates them being able to colonize these really diverse habitats.
PNAS: How does this finding help us better understand the penguin species we have today?
Vianna: We have answered lots of questions about the evolution of the group. And we could understand that like big times of decrease of temperature, like the middle Miocene, was associated with the diversification of penguins as well, the intensification of the Antarctic circumpolar current and other more recent decrease of temperatures as well was associated with a great diversification in penguins. But right now, climate change is occurring too fast for some species to adapt. And so we can already see some species decreasing population sizes, like adélie and chinstrap in the Antarctic peninsula. And on the other hand, gentoo penguin, we know that came from sub-Antarctic region, it's increasing in Antarctica and expanding farther south.
So we know that this species could adapt in the past with a large geological time scale to climate changes. But right now it's too fast for them to adapt. And we know that in South America in Chile and Galápagos, Humboldt and Galápagos penguins have been impacted by the El Niño Southern Oscillations; and with increase in temperature is associated with high mortality for both species. And El Niño is becoming more intense and more frequent with climate changes. So we expect now to use our ecological data and our genomic data to see how each one of those species were going to adapt in the future. So this is our next step in our research. And also, our data gave lots of answers about the taxonomy of the group. So, how many species there are; we could see that we didn't find many genetic differentiation, genomic differentiation between the macaroni and royal penguin, for example; but [o]n another hand, there was a debate about how many species of rockhopper penguins, and our data supports three species as well. And right now, two of them are considered by the IUCN as only one and vulnerable and decreasing. But we know that one of those—the eastern rockhopper—it has much more strong decreases in population sizes. And it's more affected by climate changes and also other impacts like fisheries and predation in the nest, also invasive species, cats, dogs, rats, and many other impacts.
Bowie: I think I can really think of three interesting ways that our data leads to a better understanding of penguins that can influence their conservation. I think the first thing that's really important to realize is that the genetic variation that we identify and the mutations that may have facilitated penguins expanding across the Southern Hemisphere occurred over a period of millions and millions of years. And the rate at which climate's changing today is so fast that it's unforeseeable that penguins will be able to change rapidly enough to be able to adapt to these changing environments. And this is as a consequence of why we're seeing certain colonies starting to disappear, and other penguins having to redistribute their distributions or having to redistribute themselves because of changes in food resources. And then the other thing that Juliana mentioned as well is that our data gives interesting insight into actually the diversity of penguins.
And we find one instance where perhaps what we think of as two current penguin species, macaroni and royal, should actually be considered one with a really interesting polymorphism in coloration. But in other cases, with rockhopper and gentoo penguins where the diversity has been underestimated and where we may have one species today, maybe three or four different species that are very different evolutionary histories, and have responded very differently through time to changes in environments. And there seems little doubt that, whether we want to call them species or not, they should be managed as separate entities, and so as a consequence should have much greater conservation attention placed upon them because they each represent isolated little units rather than one unit broadly distributed across the Southern hemisphere. And then the last point I think that's worth making is one of the really fantastic things that we can do with genomes is, because we have so much data, we can look back in time as far as a million years of how population sizes have changed.
And so by doing that, we could very conclusively show that most penguins had the largest bump in population size somewhere between 40 and 70,000 years ago, so, when the world was much cooler. And the world has continuously warmed since the last glacial maximum and penguin colonies have been declining for a long time, and it's only been accelerated by these human-induced changes. So they really are in dire need of conservation attention.
PNAS: One last question. Do you have a favorite penguin species?
Vianna: It's difficult. The little one is very cute, but I have been working for a long time with gentoo penguin[s], and I'm really impressed how this species has adapt[ed] and have diversified in Southern Ocean and Antarctica. So gentoo penguin has taken my attention, just because of the results and the work I have done with gentoo, but I really like most of the species and it's very beautiful. And when I was in New Zealand and I could see the yellow-eyed that is very threatened and the little penguin I was fascinated, but I was very happy to see them. Rauri, do you have a favorite penguin?
Bowie: Now you’re asking me! Um, it's always so hard. I think my favorite is the emperor penguin. They're such majestic looking animals and so charismatic, but also they have, you know, a most fascinating life history being secluded on the ice, looking after a single egg for such long periods of time and how the male and female need to cooperate to raise their young, and so I've always found them fascinating. But penguins as a whole are a really, truly remarkable group of birds as well as an adorable group of birds.
Vianna: So yes I think emperor and Galápagos are the two extremes in terms of adaptation to the different temperature and environmental conditions, and it’s very interesting that both of them take attentions to the public. Like most of the people now talk to me and said, I didn't know there was, like, penguins in Galápagos or in the coast of Chile and Peru.
Bowie: For me, I think one of the great results of this paper, one of the things I most enjoyed the most, is that this is a nice example of work that could never have been completed without international collaboration with people and scientists from all over the world, contributing material, contributing their expertise—Juliana in Chile, bringing her extensive knowledge of penguins and genomics, and then my lab being able to help. And so I think that's a really great example of how NSF and other funding agencies have facilitated bringing together different groups of scientists to really do science that has real implications for conservation of charismatic organisms but could never be done by one individual or one organization on their own.
PNAS: Thanks for tuning into Science Sessions. You can subscribe to Science Sessions on iTunes, Stitcher, Spotify, or wherever you get your podcasts. If you liked this episode, please consider leaving a review and helping us spread the word.
https://www.mdpi.com/1424-2818/14/4/255
Evolutionary and Biogeographical History of Penguins ...
Abstract
Despite its current low diversity, the penguin clade (Sphenisciformes) is one of the groups of birds with the most complete fossil record. Likewise, from the evolutionary point of view, it is an interesting group given the adaptations developed for marine life and the extreme climatic occupation capacity that some species have shown. In the present contribution, we reviewed and integrated all of the geographical and phylogenetic information available, together with an exhaustive and updated review of the fossil record, to establish and propose a biogeographic scenario that allows the spatio-temporal reconstruction of the evolutionary history of the Sphenisciformes, discussing our results and those obtained by other authors. This allowed us to understand how some abiotic processes are responsible for the patterns of diversity evidenced both in modern and past lineages. Thus, using the BioGeoBEARS methodology for biogeographic estimation, we were able to reconstruct the biogeographical patterns for the entire group based on the most complete Bayesian total-evidence phylogeny. As a result, a New Zealand origin for the Sphenisciformes during the late Cretaceous and early Paleocene is indicated, with subsequent dispersal and expansion across Antarctica and southern South America. During the Eocene, there was a remarkable diversification of species and ecological niches in Antarctica, probably associated with the more temperate climatic conditions in the Southern Hemisphere. A wide morphological variability might have developed at the beginning of the Paleogene diversification. During the Oligocene, with the trends towards the freezing of Antarctica and the generalized cooling of the Neogene, there was a turnover that led to the survival (in New Zealand) of the ancestors of the crown Sphenisciform lineages. Later, these expanded and diversified across the Southern Hemisphere, strongly linked to the climatic and oceanographic processes of the Miocene.
Finally, it should be noted that the Antarctic recolonization and its hostile climatic conditions occurred in some modern lineages during the Pleistocene, possibly due to exaptations that made possible the repeated dispersion through cold waters during the Cenozoic, also allowing the necessary adaptations to live in the tundra during the glaciations.
1. Introduction
Penguins (Aves, Sphenisciformes) constitute a group of birds that are exclusively marine and flightless. All the species present extreme anatomical and physiological modifications directly related to the diving habit and adaptations to cold-temperature waters [1,2]. From an evolutionary point of view, there is consensus that the Sphenisciformes belong, along with other aquatic birds, to Aequornithes, and within this clade they are closely related to the Procellariiformes [3,4,5,6]. More precisely, the origin of penguins would be linked to a flying ancestor that secondarily lost the ability to fly as penguins became excellent divers capable of traveling long distances ([7] and numerous later contributions) and reaching extreme depths [1,2,8].
The Sphenisciformes would have originated at the end of the Cretaceous [9,10,11,12,13] in Zealandia [14], also known as Te Riu-a-Māui (Māori) or Tasmantis, the lands that today emerge as New Zealand. Their appearance and diversification would be closely related to the extinction of the large marine reptiles that played the role of top predators in the southern oceans [15]. These niches later became vacant and were occupied by other vertebrates, such as penguins, in the Southern Hemisphere ([16] and references therein). Although no Cretaceous penguins are known, the fossil record is consistent with this idea. The oldest records of penguins correspond to morphologically archaic forms [11,17,18,19,20] that, during the lower Paleocene, probably acquired a great size, a non-pneumatic skeleton, flattened wing bones constituting propelling blades for diving, and an incipient widening and shortening of the tarsometatarsus.
These and other specializations for wing-propelled diving are already present in the Paleocene species (Kupoupou stilwelli, Waimanu manneringi, Sequiwaimanu rosieae, Kumimanu biceae, Muriwaimanu tuatahi, Crossvallia waiparensis, and Crossvallia unienwillia), although in the Eocene, forms with more extreme morphophysiological specializations are evident. In this regard, features such as the development of a blood plexus in the wing are observed early in the evolution of penguins (see details in [21]). This acquisition allowed them better thermal regulation during cold-water forays [22,23], as did the presence of highly modified feathers transformed into scales that cover the wings and substantially improve hydrodynamic skills during diving [24].
An increase in body size and a greater adaptation for diving in cold water would have conferred an important adaptive advantage in this context, since a greater body size implies a greater diving capacity, both in terms of depth reached and the duration of the dive [25]. The maximum expression of body size was achieved in the Eocene, when Palaeeudyptes klekowskii reached more than two meters in height [26]. Although there is no consensus about how the size of Paleogene penguins should be calculated, several cases of giant species have been reported in Antarctica, South America, New Zealand, and Australia, covering almost all the areas where penguins are recorded. Thus, penguins reached their apogee with many shapes and an incredible diversity of sizes [27].
It has been proposed that large and robust penguins would have arrived at the Peruvian coasts through two successive colonizations from different areas. The first spread, from Antarctica, would have occurred by the middle Eocene, whereas the second colonization, from New Zealand, would have occurred by the end of the Eocene. This proposal, based on the Eocene record of Peruvian penguins [24,28], also explains the presence of Antarctic forms in the middle Eocene of Chile [29] and Argentina [30]. This stage does not extend beyond the Oligocene. It is not possible to determine the causes or the exact mechanisms behind these faunal changes, but the diversity of diving birds is inversely proportional to the diversity of marine mammals, especially odontocete cetaceans. Giant penguins went extinct where marine mammals became successful as the top predators in the oceans [31]. A new stage in the evolution of the group begins in the Neogene, which includes the appearance of modern forms closely related to living species [32,33]. Taxonomic and morphological diversification in living species is notably lower than in the past, and post-Pliocene species are almost entirely attributed to modern genera [34,35].
An example of the transition that occurred during the Neogene is the avian assemblage of Horcón, on the central coast of Chile, which reflects the existence of a mixed fauna during the Pliocene, connecting the seabird associations of the late Miocene with the modern regional avifauna [36]. However, the Cenozoic history of penguins seems to have been somewhat more complex than previously believed. The current avifauna would be the result of a series of successive colonizations and extinctions closely linked to the establishment and development of the ocean currents and the ecological dynamics of species [35,37]. A recent analysis identified New Zealand (either exclusively or with South America) as the most likely ancestral area for crown clade penguins [38].
Despite their low current diversity (18 species), the Sphenisciformes are, considering the species known from the fossil record, one of the best-known avian clades, with about 65 recognized species [6,39]. Likewise, the relationships of the group have been the subject of various phylogenetic hypotheses, made possible by the good state of preservation of many fossils and by deep, broadly comparative studies of morphological features among the described lineages. In recent years, extensive morphological knowledge and the consolidation of molecular analysis techniques have allowed phylogenetic approaches to reconstruct the evolution of penguins by integrating extant and extinct forms [17,19,32,33,35,36,40,41,42,43].
Some approaches have generated hypotheses where the influence of events such as those that occurred during the Neogene on the biogeographic patterns and the evolution of the Sphenisciformes niche are reconstructed; however, many scenarios only consider the current species [35,43]. In this sense, the richness of the penguin fossil record [6] allows the possibility of considering and integrating all the available information to propose broader approximations in a deep time approach.
Thanks to the vast amount of information available on the presence of species during the Cenozoic in several locations of the Southern Hemisphere and modern biogeographic analyses methodologies, it is possible to reconstruct geographical scenarios of evolution over time and to understand the influence of environmental and geological changes on the diversification of penguins. In particular, BioGeoBEARS [44,45] analysis allows the reconstruction of ancestral areas in a context of maximum likelihood and employs Bayesian modeling from a calibrated phylogeny. Some previous contributions have dealt with this topic (e.g., [28,35,38]). We focus our review on detecting ancestral areas of origin and describing the paleobiogeographical patterns of the Sphenisciformes lineage based on a broad and complete analysis of the Sphenisciformes fossil record and the most recently published phylogenetic proposal based on the total evidence for the group [32]. This approach allows us to visualize the speciation, dispersal, and extinction events that would have occurred throughout their evolutionary history, shedding more light on how the environmental changes that occurred throughout the Cenozoic could have influenced the evolution and diversification patterns of penguins. This gives us the possibility of comparing our own results with the previous proposals.
2. Materials and Methods
2.1. Fossil Record and Penguin Phylogenies
According to the available scientific literature and the Paleobiology Database, we consolidated a new biogeographical and temporal matrix considering all the records for penguin species (Table 1). In this way, we recorded the time intervals according to their chronostratigraphic distribution range and encoded the presence (1) or absence (0) of each species in each geographical area. It should be mentioned that although some fossil species are known from a single occurrence (e.g., Crossvallia unienwillia), the stratigraphical range provided in Table 1 corresponds to the age of the level where the fossil was collected. The same criterion applies to species with multiple records (e.g., Palaeeudyptes klekowskii), in which the stratigraphical range corresponds to the ages of the levels where they were reported. For species with an uncertain age due to the lack of strict stratigraphic control (e.g., Marplesornis novaezealandiae), the range spans the different ages that have been proposed. Table 1 includes the source of the data.
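The coding just described can be sketched as a small data structure. This is an illustrative reconstruction rather than the authors' actual pipeline, and the age bounds used for the example species are placeholders, not the values in Table 1.

```python
# Presence/absence coding over the six biogeographic areas used in this
# study, plus a chronostratigraphic range in Ma (oldest, youngest).
AREAS = ["north-central South America", "southern South America",
         "southern Africa", "Antarctica", "Australia", "New Zealand"]

def encode_record(present_in, age_range_ma):
    """Return the 0/1 vector over AREAS and the stratigraphic range."""
    return {"areas": [1 if a in present_in else 0 for a in AREAS],
            "range_ma": age_range_ma}

# Crossvallia unienwillia: a single Paleocene occurrence in Antarctica
# (the age bounds below are placeholders, not Table 1 values).
rec = encode_record({"Antarctica"}, (61.0, 56.0))
```

Species with multiple records would simply set several areas to 1 and widen the range to cover all reported levels.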
Data from 83 species (18 living and 65 fossil ones) were obtained. Given the need for a completely resolved and calibrated phylogeny to perform the BioGeoBEARS analysis, a review of the latest phylogenies proposed for Sphenisciformes was carried out. After considering the number of species included, the consistency of the calibrated ages, the degree of resolution, and the integration of multiple information sources, we applied the Bayesian total-evidence phylogeny proposed by Gavryushkina et al. [32]. That proposal uses a Bayesian Markov chain Monte Carlo (MCMC) framework for phylogenetic analysis, drawing on an extensive data source: molecular sequences derived from extant species and morphological traits from extant and fossil species. It also considers the stratigraphic intervals of the fossil occurrences. The phylogenetic proposal of Gavryushkina et al. considers the evolutionary affinities of penguins according to 202 morphological characters [42] derived from reasonably complete fossil specimens (n = 36), together with molecular and morphological information from the 18 living species [32]. With this input, the approach estimates and dates species phylogenies. The Bayesian method integrates the fossil information under a new perspective, unlike other methods that only use fossils to calibrate nodes or to establish origin intervals. For our purposes, we used the maximum sampled-ancestor clade credibility tree (the MSACC tree). This tree is a summary tree derived from a posterior sample that maximizes the product of posterior clade probabilities (see details in [80], cited in [32]). Other biogeographical proposals discussed below are based on different phylogenetic approaches ([38,43] and references cited therein).
2.2. Species Considered in the Present Analysis
The description of the first fossil penguin was followed by a great proliferation of new genera and species, which after some years were re-evaluated and, in many cases, dismissed or considered as synonyms. This work took, as a starting point, a complete review of the fossil record for the Sphenisciformes lineage (Table 1, Figure 1). Even though the list of penguin fossil species is much more extensive, rigorous analyses carried out over recent decades have established long synonymic lists of species and genera that are no longer considered valid. Table 1 follows the taxonomic arrangements proposed for Argentinian [63,64,65,66,67,68,69,70,71], Chilean [67,69,72,73,78,81], Peruvian [24,28,41,57,69,73], Antarctic [27,49,50,51,52,53,54,57,58,59], New Zealand [11,17,19,20,33,38,42,48,55,60], Australian [56,67,75], and African [76,77] taxa. This compilation is essential to obtain complementary information for the discussion and the palaeoecological analysis.
A particular case worth commenting on is that of Eudyptula. In this work, the traditional and most widely analyzed proposal, in which Eudyptula minor is the only modern species of the genus Eudyptula, was adopted as input for the present analysis. According to that proposal, the diversity of Eudyptula forms is reflected in the six subspecies inhabiting Australia and New Zealand [2,82]. Other more recent proposals consider that Eudyptula comprises E. minor and E. novaehollandiae, species of recent divergence [83,84] that have been accepted as such by the ornithological community [85]. The inclusion of E. minor as the only living species of the genus does not modify or bias our results. Further, the incorporation of extinct species was constrained by several additional factors, including taxonomic status (some taxa are currently synonyms or have been considered invalid in subsequent revisions) and whether they had previously been included in a phylogeny.
On the other hand, Spheniscus anglicus, a species described from materials that presumably come from the Miocene Bahía Inglesa Formation of Chile [86], was excluded from this analysis due to serious irregularities. The material was bought and removed from the country illegally, violating the laws for the protection of the paleontological heritage of Chile. In this context, the species’ geographical and stratigraphic origin is not reliable. In addition, the characters used for its diagnosis are not adequate, and the proposal of a new species is unjustified. For these reasons, we decided to exclude S. anglicus from our analysis, a species that has never been listed or considered in any of the subsequent specialized scientific publications.
In short, the species not included in the phylogenetic proposal of Gavryushkina et al. [32], and therefore excluded from the present biogeographical analysis, still provided complementary information on the presence and diversity of penguins in the continental areas considered, enriching aspects of the discussion. The details of the fossil species considered here, and those included in our analysis, are provided in Table 1.
2.3. Paleobiogeographical Analyses
The biogeographic regions established for the analysis were chosen based on the extant and ancient distribution of Sphenisciformes species, as well as on geological and climatic criteria. Thus, we established six biogeographic regions or areas: north-central South America, including the Galapagos (from 23° S); southern South America; southern Africa; Antarctica; Australia; and New Zealand, unlike the nine [28], ten [35], or twelve [38] areas included in other contributions. For the analysis, we proposed a flexible scenario for dispersal events among the various study areas. This criterion was determined by the proximities and distances among the six areas and their geological histories, linked to the fragmentation and drift of Gondwana since the Cretaceous and, later, during the Paleogene and the Neogene [87,88]. These drift processes triggered the oceanographic evolution of marine currents [89], which are key factors in the dispersal possibilities for penguins. Likewise, the possibilities for the colonization of areas were established based on the long-distance swimming capacity observed in current penguins, which was presumably present in Paleogene forms according to the fossil distributions [33]. Given the outstanding dispersal and marine movement capacity reflected in modern species, as well as the Southern Hemisphere distribution of the fossil and modern species, the matrix of colonization probabilities among the studied areas was set to 1.
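Under this flexible scenario the dispersal-multiplier matrix is trivial: all pairwise colonization probabilities are 1. BioGeoBEARS itself is an R package, so the following Python construction is only a sketch of that input; the area abbreviations are ours.

```python
# Six areas, all pairwise colonization probabilities set to 1, reflecting
# the long-distance swimming capacity assumed for both modern and
# Paleogene penguins.
AREAS = ["NC_SAm", "S_SAm", "S_Africa", "Antarctica", "Australia", "NZ"]
n = len(AREAS)
dispersal_multipliers = [[1.0] * n for _ in range(n)]

# A stricter (hypothetical) scenario would down-weight particular pairs, e.g.:
# dispersal_multipliers[AREAS.index("S_Africa")][AREAS.index("NZ")] = 0.5
```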
In accordance with the BioGeoBEARS analysis [44], we evaluated three models: Dispersal-Extinction-Cladogenesis (DEC); a likelihood version of the Dispersal-Vicariance model (DIVALIKE); and a likelihood version of the BayArea model (BAYAREALIKE). The DEC model considers and emphasizes changes in the range of distribution at speciation events (cladogenesis). Under that model, during cladogenetic events one descendant lineage always occupies a single region of the ancestral area, whether by sympatry or vicariance. The DIVALIKE model allows a daughter lineage to retain more than a single geographical region of ancestral occupation during the vicariant event. This model does not allow a daughter lineage to inherit a small range that is sympatric with the range of another descendant lineage. Conversely, the BAYAREALIKE model does not emphasize geographic range variation at speciation events; instead, it estimates range changes along branches through range expansion-contraction dynamics. We assessed these models including the jump-dispersal (+J) parameter [44,90]. This parameter allows founder events, in which an emerging lineage disperses outside of the area(s) occupied by its ancestor during the speciation process.
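To make the DEC description above concrete, its anagenetic part can be written down for just two areas, A and B. This is a toy sketch: BioGeoBEARS generalizes this to many areas and adds the cladogenetic component, and the parameter values here are arbitrary.

```python
def dec_rate_matrix(d, e):
    """Instantaneous rates among the non-empty ranges {A}, {B}, {A,B}:
    range expansion at rate d (dispersal into an unoccupied area) and
    range contraction at rate e (local extinction of one occupied area).
    Rows sum to zero, as in any continuous-time Markov rate matrix."""
    Q = [
        [0.0, 0.0, d],    # {A}   -> {A,B}: disperse into B
        [0.0, 0.0, d],    # {B}   -> {A,B}: disperse into A
        [e,   e,   0.0],  # {A,B} -> {A} or {B}: lose one area
    ]
    for i, row in enumerate(Q):
        row[i] = -sum(row)
    return Q

Q = dec_rate_matrix(d=0.1, e=0.05)
```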
All the models were compared, considering the p-value of the likelihood ratio test (LRT) and the AIC value for each evaluated scenario [91]. The estimation models derived from BioGeoBEARS have been applied to various bird taxa and, despite criticism of the inclusion of the J parameter (the founder effect) [92], these models have been reevaluated, reinforcing and supporting their validity [93]. Thus, we incorporated founder-event speciation (+J), a process that is important for birds in island systems, considering the importance of transcontinental colonization events during the diversification of different bird clades, and especially for penguins [35]. Specifically, models that included the +J parameter have been broadly consistent in explaining colonization processes in biogeographic and macroevolutionary studies. Examples are the contributions on several lineages of modern birds, including the family Megapodiidae within Galliformes [94], Thraupidae (Coerebinae) [90], Motacillidae [95], Coraciiformes [96], Trogoniformes [97], and Rallidae [98], as well as studies on fossil lineages, such as the clade Coelurosauria [99], and on mammals, such as horses (Equinae) [100]. With particular reference to penguins, previous works analyzed the crown group species [35] as well as fossil representatives [38]. In line with these works and the life-history traits of penguins, we considered that the +J parameter would be associated with the Sphenisciformes macroevolutionary process, due to the remarkable oceanic dispersal capacity evidenced by modern and ancient forms [77,101,102]. The statistical analyses were performed using the software RASP powered by R [103].
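To make the model-comparison step concrete, the AICc values later reported in Table 2 can be converted into Akaike weights (the AICw used to rank models) with a short script. The computation below is the standard Akaike-weight formula applied to the published values, not the authors' code:

```python
import math

# AICc values for the six BioGeoBEARS models (taken from Table 2).
aicc = {
    "DEC": 284.0,          "DEC + J": 272.5,
    "DIVALIKE": 307.1,     "DIVALIKE + J": 294.0,
    "BAYAREALIKE": 304.6,  "BAYAREALIKE + J": 258.3,
}

best = min(aicc.values())
# Akaike weight: relative likelihood exp(-delta/2), normalized across models.
rel = {m: math.exp(-(v - best) / 2.0) for m, v in aicc.items()}
total = sum(rel.values())
weights = {m: r / total for m, r in rel.items()}

for model, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{model:16s} AICc = {aicc[model]:6.1f}  weight = {w:.2e}")
```

Running this reproduces the AICc weight column of Table 2: BAYAREALIKE + J carries essentially all of the weight (about 1.00), while the next-best model, DEC + J, receives only about 8 × 10⁻⁴.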
3. Results and Discussion
3.1. Paleogene History of Penguins
According to the results obtained here, the best model was BAYAREALIKE + J, which received the strongest statistical support, with the lowest AICc value and the highest AICw compared to the other models (Figures S1–S6, Table 2 and Table S1); there are significant differences between the BAYAREALIKE + J and BAYAREALIKE model scenarios (Table S2). Similar results were obtained in previous contributions [38], although in other analyses the selected model was DIVALIKE + J [35]. As we expected, our results confirmed the relevance of the +J parameter (founder events) in explaining the biogeographical history of penguins, a clade with a presumably well-established dispersal capacity due to the early development of adaptive traits for navigating marine environments; this is supported by the analyses of Paleogene forms, including those of Waimanu [11]. In addition, the selection of BAYAREALIKE + J as the best model supports the importance of geographical expansion−contraction dynamics in explaining the evolutionary patterns of Sphenisciformes. The Cenozoic cooling trends triggered many biomic expansions and contractions on Southern Hemisphere continents, which influenced dispersal processes and possibly the speciation and extinction patterns.
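The significance claim for BAYAREALIKE + J over BAYAREALIKE (Table S2) follows from a standard likelihood-ratio test on these nested models, with one degree of freedom for the added J parameter. A minimal check using the log-likelihoods from Table 2 (our sketch, not the authors' script):

```python
import math

def chi2_sf_1df(x: float) -> float:
    # Survival function of the chi-square distribution with 1 degree of
    # freedom: P(X >= x) = erfc(sqrt(x / 2)).
    return math.erfc(math.sqrt(x / 2.0))

lnL_null = -150.2  # BAYAREALIKE (Table 2)
lnL_alt = -125.9   # BAYAREALIKE + J (Table 2)

# LRT statistic: twice the log-likelihood difference; df = 1 (the J parameter).
stat = 2.0 * (lnL_alt - lnL_null)
p_value = chi2_sf_1df(stat)

print(f"LRT statistic = {stat:.1f}, p = {p_value:.1e}")
```

With a statistic of 48.6 the p-value falls far below any conventional threshold, consistent with the significant difference reported between the two scenarios.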
In general terms, all the models concur in pinpointing a center of origin for Sphenisciformes in New Zealand (Figures S1–S6). These results are consistent with previous estimates based on fossil findings, which also place the origin of the lineage in the late Cretaceous [9,10,11,12,13]. This is a logical proposal given the high diversification and specialization already present in the Paleocene. The oldest records for penguins correspond to the Paleocene and are concentrated in New Zealand [17,18,19,20,104].
The results of our analysis pinpoint New Zealand as the most likely ancestral area, followed by Antarctica with slightly lower probabilities (Figure 2; see the Supplementary Materials for details). It is noteworthy that New Zealand's importance as a center of origin is strongly supported by a high concentration of records, many of them the oldest penguin records reported to date [6,19,20,104] (Figure 1, Table 1). New Zealand's geographical proximity to the Antarctic territory during the Upper Cretaceous and the early Paleogene also provides evidence of the significance of both continents during the initial diversification of the group. Findings from the Chatham Islands, specifically those associated with the Takatika Grit, show that from the Upper Cretaceous (c. 83−79 Ma) Zealandia underwent a progressive rupture from West Antarctica [14,87,105], continuing until the Eocene with respect to the eastern Antarctic region [106]. At the end of this stage, Zealandia would have experienced a strong marine transgression [107,108]. This process might explain the notable radiation and rapid diversification of penguins during the Eocene in Antarctica, as compared with New Zealand. The abundant fossil record of Seymour Island (Antarctica) strongly supports this idea. In this sense, the wider Antarctic territory would have offered greater opportunities for the colonization of new niches, and thus for diverse speciation processes driven by geographic isolation.
Our results, like previous findings [28,38], allow us to postulate New Zealand as the probable main ancestral territory. In addition, it is important to consider the geographical proximity between New Zealand and Antarctica during the Paleogene; during the Paleocene−Eocene climatic optimum, both territories would have presented very similar environmental conditions at the continental level, with cold temperate environments fluctuating towards warm temperate climates during the Early Eocene Climatic Optimum (EECO). This would have allowed the configuration of humid temperate forest biomes, with tropical floristic components during several Paleogene intervals [109,110,111]. Likewise, marine estimates show significant warming of Pacific waters from the upper Paleocene to the middle Eocene [46,112,113]. This paleoenvironmental context could have favored dispersal between New Zealand and Antarctica, given the importance that oceanic temperatures probably had for the dispersal of the first penguins. Added to this is the record of Crossvallia in both Antarctica and New Zealand, which further strengthens the links between these two large areas during the first million years of the group's evolution. Crossvallia unienwillia was a large penguin species with a single record in the Paleocene of Seymour Island (Antarctica); due to the incompleteness of its skeleton, it has been repeatedly omitted from phylogenetic analyses. Its presence, however, indicates the presence of Sphenisciformes in Antarctica since the Paleocene [46,47]. Crossvallia waiparensis is a second species of Crossvallia, described from the Paleocene of New Zealand (Figure 3).
The recent description of numerous taxa for the Paleocene of New Zealand indicates favorable conditions for the establishment and flourishing of the group. Although only two species of the genus Waimanu have been included (Figure 2), the New Zealand Paleocene sphenicofauna also includes other species such as Kupoupou stilwelli, Crossvallia waiparensis, Sequiwaimanu rosieae, and Kumimanu biceae (Figure 1 and Figure 2). Although the Antarctic record is scarce during the Paleocene, this is probably due to a taphonomic bias rather than to regional environmental conditions, since the changes in the depositional environment of the James Ross Basin during the Eocene caused a more abundant and diverse penguin record [114].
During the middle Eocene, several lineages diversified in Antarctica, spanning a wide spectrum of sizes, from giant penguins such as Anthropornis grandis, which reached 1.7 m in height, to tiny penguins such as Aprosdokitos mikrotero, only 0.35 m high. This shows great diversity, also evidenced in the number of species included in other genera, such as Delphinornis, Tonniornis, Palaeeudyptes, and others (see Table 1). This broad diversity is probably associated with niche-partitioning processes powered by the development of different bill morphologies and specializations across a wide range of trophic possibilities [115,116]. Among these taxa are the forms that reached the southern and central South American coasts, allowing the establishment of the Perudyptes devriesi lineage on the Peruvian coasts. The record of typically Antarctic taxa in southern South America during the middle Eocene [29,30] supports this hypothesis (Figure 4).
During the late Eocene, and probably the early Oligocene, the Antarctic species would have completely disappeared. A notable diversification of the New Zealand lineages is evidenced at this time. Some colonizations of South America, such as those of Icadyptes salasi and Inkayacu paracasensis, are verified during the Eocene of Peru and would be closely linked with the penguin fauna of New Zealand. According to our results, these lineages (together with Perudyptes devriesi) would have independently colonized the subtropical Pacific coasts of South America during the late Eocene. These colonizations could be related to migrations produced by the oceanic currents established from New Zealand to South America during the Eocene, after the EECO, with the opening of the Tasman Strait and the Drake Passage. The currents underwent notable latitudinal alterations, culminating in the establishment of the circum-Antarctic current, the main factor driving progressive Antarctic freezing during the Oligocene [117,118,119] (Figure 5).
In this sense, we propose that New Zealand could have played an important role as a refuge during the Oligocene for penguins facing the climatic changes that transformed the Antarctic continent and the marine current regimes [117,118,119,120]. This idea is aligned with the presence of the Kaiika lineage, a taxon endemic to New Zealand [48], and with the diversity of the genus Kairuku, with three species recorded from the New Zealand Oligocene [60]. Likewise, Palaeeudyptes, of presumably Antarctic origin, would have been present in New Zealand, as evidenced, for example, by Palaeeudyptes marplesi [42]. Therefore, the fossil findings suggest that after the extinction of almost all of the Antarctic forms, Palaeeudyptes could have been one of the few lineages that colonized and persisted in New Zealand (Figure 5).
3.2. Neogene History of Penguins
According to our results, the taxa recorded in Patagonia (Argentina) would have had a New Zealand origin. Presumably, the establishment of the Antarctic circumpolar current would have allowed the dispersal of lineages from New Zealand to southern South America, possibly because the similar environmental conditions in both places were decisive for feeding and breeding. In this way, following the colonization of southern South America at the beginning of the Miocene, several lineages developed. First Paraptenodytes (including P. robustus and P. antarcticus), and later Eretiscus tonni and Palaeospheniscus (with a high diversity constituted by P. bergi, P. patagonicus, and P. biloculata), established a wide presence in southern South America, as evidenced by the fossil record of Patagonia, Argentina [61,64,70,71], and reached the coasts of Chile and Peru by the middle Miocene [81,121].
The Miocene was a crucial time for the establishment of the most modern faunas [36,71,122]. Our results highlight three biogeographical events, from New Zealand to southern South America, that were probably related to the intensification of the Antarctic circumpolar current during the middle and late Miocene [89]. This new scenario favored the selection of physiological and biochemical adaptations to face colder environmental regimes, an idea strongly supported by genomic studies [35,122]. Thus, our results are consistent with the biogeographical proposals based on the crown group [35,38]; see also [123].
The first event corresponds to the diversification of Spheniscus, widely recorded in the Mio-Pliocene of Chile and Peru [36]. We propose an origin in southern South America and a northward dispersal, probably influenced by the onset of the establishment of the Humboldt current during the middle Miocene (15−12 Ma). The ecological preferences of the Spheniscus lineage are consistent with colder waters and a diet based on fish [116]; these traits may have been inherited from their ancestors. This is also supported by the fossil record of these areas and even by the Antarctic fossil record [54]. By the middle Miocene, there is a vast record of penguins attributed to Spheniscus, mainly in Chile and Peru, represented by species such as Spheniscus urbinai, S. megaramphus, S. muizoni, and S. chilensis. The diversification of the modern lineages corresponding to Spheniscus would have been relatively recent: S. humboldti and S. demersus might have colonized the central-northern Pacific coast of South America and the coasts of southern Africa, respectively, from southern South America during the Pleistocene [77,124]. In addition, more recently, S. humboldti colonized the Galapagos archipelago, giving rise to S. mendiculus [122]. These processes were probably related to the expansion of the polar caps during the glaciations, which reached almost 40° South latitude, altering the structure of the marine currents and the latitudinal thermal gradient [125]. Thus, our results are consistent with previous proposals [77] of multiple colonization events in Africa for Sphenisciformes. This is supported by the presence in the fossil record of Nucleornis insolitus, Dege hendeyi, and Inguza predemersus, which colonized Africa independently at the end of the Miocene (Figure 6 and Figure 7).
A second biogeographical event corresponds to the Megadyptes-Eudyptes clade, with a probable common ancestor in New Zealand. These results are consistent with previous analyses of the biogeographical history of this clade [38]. In addition, our findings suggest a strongly generalist condition underlying these geographical occupations. The lineage would have developed wide dispersal capacities around Antarctica, reaching multiple continental islands close to the main landmasses, which would have provided advantages in terms of the absence of potential predators and of competition for resources. However, the cooling processes that intensified during the Plio-Pleistocene led to the formation and growth of ice caps in the Antarctic Ocean, and these glacial and interglacial intervals would have generated isolation and subsequent speciation in some of these lineages [35]. On the other hand, scenarios of allopatric speciation by isolation on islands for the Eudyptes lineage are debated by some authors, such as Frugone et al. [126], who proposed a greater effect of the thermal zonation of the Antarctic polar front and the subtropical currents on the definition of species. Consequently, the strong dispersal capacity and a more generalist condition would not have allowed the genetic isolation necessary for subsequent speciation, as seems to be evidenced in E. schlegeli and E. chrysolophus. In this way, Eudyptes would be an example of a generalist lineage that, with its different species, migrated from New Zealand throughout the Southern Hemisphere, reaching southern South America, subantarctic islands, and Africa [35,126] (Figure 6 and Figure 7).
Our results reveal a third diversification process during the middle Miocene, that of the Pygoscelis-Aptenodytes clade. The radiation center was probably southern South America, with lineages such as Madrynornis, endemic to Patagonia, and a common ancestor of Pygoscelis and Aptenodytes with an outstanding dispersal capacity in circum-Antarctic waters. These ocean currents became progressively colder after the intensification of the circum-Antarctic current 11 Ma ago, and the lineage coped thanks to the development of biochemical and metabolic adaptations associated with thermoregulation and the optimization of oxygen consumption and ATP production [35,122,126]. This would have allowed them to reach a circum-Antarctic distribution during the middle and late Miocene, as evidenced by the presence of Pygoscelis tyreei and Aptenodytes ridgeni in New Zealand, as well as that of P. grandis and P. calderensis in southern South America. The southern parts of Argentina and New Zealand may have been linked during the late middle Miocene as areas of constant exchange of species with other regions, powered by the latitudinal direction of marine currents [118]. Finally, with the cooling and the Pleistocene glaciations, processes of population isolation occurred through the formation of polar caps and the consequent changes in the currents. Patterns triggered by glacial−interglacial intervals modified the gene flow between populations and promoted isolation scenarios and subsequent speciation in the Antarctic lineages. Consequently, the adaptations previously developed along the crown-group lineage evolution, which allowed the occupation of more extreme thermal niches in increasingly cold waters, would have been key exaptations in the colonization and subsequent biomic specialization in extreme tundra conditions. The modern species Aptenodytes forsteri and Pygoscelis adeliae are examples of that process (Figure 6 and Figure 7).
4. Conclusions
Despite using a different phylogenetic proposal, a flexible scenario for dispersal possibilities, and an alternative delimitation of areas, our results are broadly consistent with previous findings about the main paleobiogeographical patterns during penguins' evolutionary history. Thus, our findings are broadly consistent with a New Zealand center of origin for Sphenisciformes during the late Cretaceous and early Paleogene, supporting the hypotheses proposed by diverse authors [28,38]. With respect to the Eocene, we found an outstanding diversification and dispersion of penguins geographically associated with Antarctica, due to the establishment of temperate conditions triggered by the PETM and EECO. The Oligocene and early Miocene represented a turnover in the sphenicofauna; the extinction of Antarctic lineages consolidated New Zealand and southern South America as refuges, associated with the latitudinal contraction of temperate biomes and warm marine currents. The outcomes suggest that crown-group Sphenisciformes flourished during the Miocene, and many adaptations inherited from their ancestors probably became exaptations for facing increasingly cold environmental conditions during the Neogene. Thus, some lineages expanded their ranges towards subtropical latitudes in South America and Africa, while other lineages (Pygoscelis and Aptenodytes) developed the capacity to colonize the harshest climatic environments, such as the tundra conditions of Antarctica during the Pleistocene glaciations. All these statements are, however, provisional and subject to new findings and subsequent analyses. Although the penguin record is quite complete in comparison with those of other bird taxa, several deficiencies and important gaps are recognized in the time periods considered. We trust, however, that the efforts of the numerous researchers currently working on these topics will at least partially fill these gaps.
Funding
This research was funded by the General Research Council (Dirección General de Investigaciones-DGI) at the Universidad Santiago de Cali (Colombia) under call No. 01-2021; La Plata University PI N955 (Argentina), and the National Scientific and Technical Research Council PIP 0096 (Argentina).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data presented in this study are available in the text of Section 2.
Acknowledgments
We thank A. Gavryushkina for allowing us to use her phylogenetic proposal and for her readiness to collaborate with us; Leonardo Belalcazar for computational logistical support; and Jacobo Sabogal for his opinions and technical support in graphic design and the penguin illustrations. Finally, we acknowledge the anonymous reviewers who undoubtedly contributed to the improvement of the manuscript.
Mayr, G.; De Pietri, V.L.; Love, L.; Mannering, A.; Scofield, R.P. Leg bones of a new penguin species from the Waipara Greensand add to the diversity of very large-sized Sphenisciformes in the Paleocene of New Zealand. Alcheringa 2020, 44, 194–201.
Figure 1.
Updated chronostratigraphic distribution of fossil penguins of the world (n = 65). Follow color key to geographical occurrence (see Table 1 for details): (a–k) indicates the species of penguin illustrations. On the right side (bottom-up), the living forms Aptenodytes forsteri, Pygoscelis papua, Spheniscus magellanicus, and Eudyptes chrysolophus are representatives of each genus. Penguins not at scale. Penguin illustration credits: Jacobo Sabogal.
Figure 2.
Ancestral range estimation for Sphenisciformes based on results of high percentages for nodes considering the BAYAREALIKE + J scenario and using the six-area regime as shown in map of biogeographic areas powered in BioGeoBEARS. (a–l) indicates the species of penguin illustrations. Follow the color key for the cases of presence in more than one area. For details see the Supplementary Materials. Penguin illustration credits: Jacobo Sabogal.
Figure 3.
Middle-Late Paleocene biogeographical events: origin of Sphenisciformes in New Zealand (Kupoupou stilwelli in the image) and early dispersion to Antarctica, evidenced by the presence of Crossvallia (in the image).
Figure 4.
Main Eocene biogeographical events: the diversification of diverse Sphenisciformes lineages in Antarctica (i.e., Palaeeudyptes and Delphinornis in the image) and early dispersal and colonization towards South America, evidenced by the presence of Perudyptes and Incayacu (in the image) and Icadyptes.
Figure 5.
Main Oligocene biogeographical events: the extinction of Sphenisciformes due to Antarctica cooling; New Zealand as a refuge and center of diversification, evidenced by the presence of many genera and species, such as the Kairuku (in the image).
Figure 6.
Main Miocene biogeographical events: the colonization of lineages from New Zealand to South America due to circum-Antarctic oceanic currents (i.e., Paraptenodytes in the image); diversification and expansion of Spheniscus across South America; origin and diversification of the Megadyptes-Eudyptes clade from New Zealand; diversification of the clade Pygoscelis in southern South America.
Table 1.
Updated checklist of the fossil penguin species of the world (n = 65), their occurrences, and stratigraphical ranges (SR) (in some cases, only an approximation is provided, corresponding to the age of the level where the fossil was collected).
* Species included in the paleobiogeographical analyses. a We agree with Jadwiszczak [52,78] regarding the prematurity of the new combination Delphinornis wimani [41] for a species that had already been transferred from Notodyptes to Archaeospheniscus [79]. However, we maintain the new name in this table in accordance with the phylogeny on which we based our biogeographical analyses [32].
Table 2.
Summary of results for all six models evaluated under the six-area regime. Models with +J indicate those allowing for founder effect dispersals. The best-supported model is shown in bold. p is the number of parameters.
Model           | LnL    | p | AICc  | AICc wt.
DEC             | −139.9 | 2 | 284   | 2.6 × 10⁻⁶
DEC + J         | −133   | 3 | 272.5 | 8 × 10⁻⁴
DIVALIKE        | −151.4 | 2 | 307.1 | 2.5 × 10⁻¹¹
DIVALIKE + J    | −143.7 | 3 | 294   | 1.8 × 10⁻⁸
BAYAREALIKE     | −150.2 | 2 | 304.6 | 8.9 × 10⁻¹¹
BAYAREALIKE + J | −125.9 | 3 | 258.3 | 1.00
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
|
Antarctic freezing during the Oligocene [117,118,119] (Figure 5).
In this sense, we proposed that New Zealand could have played an important role as a refuge during the Oligocene for penguins that faced the climatic changes that transformed the Antarctic continent and the marine current regimes [117,118,119,120]. This idea is aligned with the presence of the Kaiika lineage, a taxon endemic to New Zealand [48], and the diversity of the genus Kairuku with three species recorded for the New Zealand Oligocene [60]. Likewise, Palaeeudyptes, of presumably Antarctic origin, would have been present in New Zealand, as evidenced, for example, by Palaeeudyptes marplesi [42]. Therefore, the fossil findings suggest that after the extinction of almost all of the Antarctic forms, Palaeeudyptes could have been one of the few lineages that would have colonized and persisted in New Zealand (Figure 5).
3.2. Neogene History of Penguins
According to our results, the taxa recorded in Patagonia (Argentina), would have had a New Zealand origin. Presumably, the establishment of the Antarctic circumpolar current would have allowed the dispersion of the lineages from New Zealand to southern South America, possibly given the similar environmental conditions in both places that might have been decisive for aspects related to the feeding and breeding areas. In this way, from the colonization of southern South America at the beginning of the Miocene, several lineages would have developed. First, Paraptenodytes (including P. robustus and P. antarcticus), and later, Eretiscus tonni and Palaeospheniscus (with a high diversity constituted by P. bergi, P. patagonicus, and P. biloculata), established a wide presence in southern South America, as evidenced by the fossil record of Patagonia Argentina [61,64,70,71], and reached the coasts of Chile and Peru by the middle Miocene [81,121].
|
yes
|
Ornithology
|
Did penguins originate in the Antarctic?
|
no_statement
|
"penguins" did not "originate" in the antarctic.. the antarctic is not the place of "origin" for "penguins".
|
https://pubmed.ncbi.nlm.nih.gov/16519228/
|
Multiple gene evidence for expansion of extant penguins out of ...
|
Abstract
Classic problems in historical biogeography are where did penguins originate, and why are such mobile birds restricted to the Southern Hemisphere? Competing hypotheses posit they arose in tropical-warm temperate waters, species-diverse cool temperate regions, or in Gondwanaland approximately 100 mya when it was further north. To test these hypotheses we constructed a strongly supported phylogeny of extant penguins from 5851 bp of mitochondrial and nuclear DNA. Using Bayesian inference of ancestral areas we show that an Antarctic origin of extant taxa is highly likely, and that more derived taxa occur in lower latitudes. Molecular dating estimated penguins originated about 71 million years ago in Gondwanaland when it was further south and cooler. Moreover, extant taxa are inferred to have originated in the Eocene, coincident with the extinction of the larger-bodied fossil taxa as global climate cooled. We hypothesize that, as Antarctica became ice-encrusted, modern penguins expanded via the circumpolar current to oceanic islands within the Antarctic Convergence, and later to the southern continents. Thus, global cooling has had a major impact on penguin evolution, as it has on vertebrates generally. Penguins only reached cooler tropical waters in the Galapagos about 4 mya, and have not crossed the equatorial thermal barrier.
Figures
Figure 1
Alternative phylogenetic hypothesis proposed for all extant genera of penguins. Hypotheses including all genera were based on (a) morphological (O'Hara 1989), (b) behavioural (Jouventin 1982), (c) myological (McKitrick 1991), and (d) integumentary and breeding (Giannini & Bertelli 2004) data, and were compared to the topology we obtained with (e) nuclear and mitochondrial DNA sequences. Hypotheses based on (f) myology (Schreiweis 1982) and (g) DNA hybridization studies (Sibley & Ahlquist 1990) did not include all genera and, therefore, were not compared in the AU test.
Figure 2
Bayesian estimate of phylogenetic relationships of modern penguins. Phylogenetic reconstruction was based on 2802 bp of RAG-1 and 2889 bp of mitochondrial 12S and 16S rDNA, cytb and COI, excluding gaps and ambiguously aligned positions. Numbers above branches are Bayesian posterior probabilities/ML bootstrap proportions/MP bootstrap proportions, represented as an open star when (1.0/100/100). Branches for more distant outgroups were shortened for graphic purposes. A bar represents the expected number of DNA substitutions per site. Each genus is colour-coded. Penguin drawings were modified from del Hoyo et al. (1992), with permission, from Lynx Edicions, Barcelona, Spain.
Figure 3
Bayesian estimates of ancestral areas. Areas were defined as southernmost breeding regions for each species within Antarctica and the Antarctic Convergence (blue), outside Antarctic convergence and up to latitude 45 °S (green), between 45 °S and 30 °S (red), and north of 30 °S (grey). Posterior probabilities for each state are shown as a pie diagram in each internal node.
Figure 4
Chronogram of penguin diversification. Nodes A and B were fixed at 104 and 90 myr. Credibility intervals (95%) are indicated by grey bars at numbered internal nodes. Vertical dashed line indicates the K/T boundary. Periods when Antarctica was ice-covered (black continuous bars) are projected as shaded grey rectangles in the chronogram. Ocean temperature is based on high-resolution deep-sea oxygen isotope records. The MMCT is indicated by an arrow. Geological time scale is given as defined by the Geological Society of America.
Figure 5
Polar stereographic projection to 35 °S at 40, 25, 15 and 5 mya. Reconstructions have continents represented by present-day shorelines (ODSN 2004). Antarctica is indicated as partially (b–c) and fully covered in ice (d–f) (Shevell et al. 2004). Genera are represented by different coloured capital letters, following the coloured names indicated at the bottom. As they start to diversify, species are represented by small letters according to the first letter of common names given in figure 2, except royal penguin represented by r1. Oldest and biggest penguin fossils (Simpson 1976; Clarke et al. 2004) are numbered 1–6 and projected at (a) 40 and (b) 25 mya. The Antarctic circumpolar current, indicated by arrows in the reconstruction at 25 mya only, was formed at the end of the Oligocene.
|
|
yes
|
Dance
|
Did tap dancing originate in America?
|
yes_statement
|
"tap" "dancing" "originated" in america.. america is the birthplace of "tap" "dancing".
|
https://www.loc.gov/item/ihas.200217630/
|
Tap Dance in America: A Short History | Library of Congress
|
Glover and Dunn: A Contest of Beat and Feet
On the evening of the thirty-ninth annual Grammy Awards that was broadcast on national television on February 27, 1997, Colin Dunn and Savion Glover faced off in the fiercest tap dance challenge of their lives. Colin Dunn, the star of Riverdance—The Musical, was challenging Savion Glover, the choreographer and star of Bring in 'da Noise, Bring in 'da Funk, to a battle of the feet that was staged to showcase and celebrate the two hottest musicals on Broadway. But there was nothing festive about the challenge dance for these two stars. Not only was their reputation as dancers at stake but also the supremacy of the percussive dance forms that each show represented—Irish step dancing and African American jazz tap dancing.
Dunn went on first. Standing tall and straight, his back to the audience and hands placed neatly at the waist of his slim black pants, he spun around quickly on his introduction, and with the stamp of his high-heeled shoe drew himself up onto the balls of his feet and clicked out neat sets of triplets and cross-backs in place. The camera zoomed in on the dazzling speed and precision of Dunn's footwork, zoomed out on the handsome symmetry of his form, and quickly panned right to reveal the hulking presence of Glover—who stood crouched over, peering at Dunn's feet. Without an introduction, Glover slapped out a succession of flat-footed stomps that turned his black baggy pants, big baggy shirt, and mop of dreadlocks into a stuttering spitfire of beats. Hunkering down into a deep knee bend, he repeated the slamming rhythms with the heels, toes, and insteps of his hard-soled tap shoes. Dunn heard the challenge. Taking his hands off his hips and turning around to face Glover, he delivered a pair of swooping scissor-kicks that sliced the air within inches of Glover's face; and continued to shuffle with an air of calm, the fluid monotone of his cross-back steps bringing the volume of noise down to a whisper. Glover interrupted Dunn's meditation on the "ssssh" with short and jagged hee-haw steps that mocked Dunn's beautiful line and forced the conversation back to the sound, not the look.
They traded steps, spitting out shards of rhythmic phrases and daring each other to pick up and one-up. Dunn's crisp heel-clicks were taken up by Glover with heel-and-toe clicks, which were turned by Dunn into airy flutters, which Glover then repeated from a crouched position. When they tired of trading politely, they proceeded to tap over each other's lines, interrupting each other wittily with biting sounds that made the audience scream, applaud, and stamp its feet. When Dunn broke his focus just for a moment to politely acknowledge the applause with a smile, Glover seized the moment and found his edge by perching on the tip of one toe and delivering a flick-kick with the dangling other that brushed within inches of Dunn's face. All movement came to a halt. And for one long moment, the dancers just stood there, flat-footed, glaring at each other. Though the clapping melted their stares, they slapped hands and turned away from each other and walked off the stage without smiling and never looking back.
An American Genre
This performance is a sublime example of the tap dance challenge, the general term for any competition, contest, breakdown, or showdown in which dancers compete before an audience of spectators or judges. Motivated by a dare, focused by strict attention to one's opponent, and developed through the stealing and trading of steps, the tap challenge is the dynamic and rhythmically expressive "engine" that drives tap dancing—the oldest of American vernacular dance forms. What is fascinating about the tap challenge that took place between Colin Dunn and Savion Glover at the 1997 Grammy Awards is that Glover's style of tap dance, which he calls "hitting"—an unusually percussive combination of jazz and hip-hop dance rhythms that utilizes all parts of the foot to drum the floor—is radically different from Dunn's style of stepping, a highly musical and sleekly modern translation of traditional Irish step dancing. Yet both of these dance forms trace their origins and evolution to a percussive dance tradition that developed in America several hundred years ago.
Tap dance is an indigenous American dance genre that evolved over a period of some three hundred years. Initially a fusion of British and West African musical and step-dance traditions in America, tap emerged in the southern United States in the 1700s. The Irish jig (a musical and dance form) and West African gioube (sacred and secular stepping dances) mutated into the American jig and juba. These in turn became juxtaposed and fused into a form of dancing called "jigging" which, in the 1800s, was taken up by white and black minstrel-show dancers who developed tap into a popular nineteenth-century stage entertainment. Early styles of tapping utilized hard-soled shoes, clogs, or hobnailed boots. It was not until the early decades of the twentieth century that metal plates (or taps) appeared on the shoes of dancers on the Broadway musical stage. It was around that time that jazz tap dance developed as a musical form parallel to jazz music, sharing rhythmic motifs, polyrhythm, multiple meters, elements of swing, and structured improvisation. In the late twentieth century, tap dance evolved into a concertized performance on the musical and concert hall stage. Its absorption of Latin American and Afro-Caribbean rhythms in the forties furthered its rhythmic complexity. In the eighties and nineties, tap's absorption of hip-hop rhythms attracted a fierce and multi-ethnic new breed of male and female dancers who continue to challenge and evolve the dance form, making tap the most cutting-edge dance expression in America today.
Unlike ballet with its codification of formal technique, tap dance developed from people listening to and watching each other dance in the street, dance hall, or social club, where steps were shared, stolen, and reinvented. "Technique" is transmitted visually, aurally, and corporeally, in a rhythmic exchange between dancers and musicians. Mimicry is necessary for the mastery of form. The dynamic and synergistic process of copying the other to invent something new is most important to tap's development and has perpetuated its key features, such as the tap challenge. Fiercely competitive, the tap challenge sets the stage for a "performed" battle that engages dancers in a dialogue of rhythm, motion, and witty repartee, while inviting the audience to respond with a whisper of kudos or roar of stomps. The oral and written histories of tap dance are replete with challenge dances, from jigging competitions on the plantation that were staged by white masters for their slaves, and challenge dances in the walk-around finale of the minstrel show, to showdowns in the street, displays of one-upsmanship in the social club, and juried buck-and-wing contests on the vaudeville stage. There are contemporary examples of the tap challenge as well, such as black fraternity step-dance competitions which are as fierce as gang wars, and Irish step dance competitions, in which dancers focus more civilly on displaying technical virtuosity. But no matter the contest, all challenge dances necessitate the ability to look, listen, copy, creatively modify, and further perfect whatever has come before. As they said at the Hoofer's Club in Harlem in the 1930s, where tap dancers gathered to practice their steps and compete: "Thou Shalt Not Copy Anyone's Steps—Exactly!"
1600s and 1700s: Jig and Gioube
Opportunities for whites and blacks to watch each other dance may have begun as early as the 1500s, when enslaved Africans shipped to the West Indies, during the infamous "middle passage" across the Atlantic Ocean, were brought up on deck after meals and forced to "exercise"—to dance for an hour or two to the accompaniment of bagpipes, harps, and fiddles (Emery 1988: 6-9). In the absence of traditional drums, slaves danced to the music of upturned buckets and tubs. The rattle and restriction of chains may have been the first subtle changes in African dance as it evolved toward becoming an African-American style of dance. Sailors who witnessed these events were among the first white observers who later would serve as social arbiters, onlookers, and participants at plantation slave dances and urban slave balls. Upon arriving in North America and the West Indies, Africans too were exposed to European court dances like the quadrille and cotillion, which they adopted by keeping the figures and patterns while retaining their African rhythms (Szwed 1988).
In the 1650s, during the Thirteen Years War between England and Spain (1641-54) and under the command of Oliver Cromwell, an estimated 40,000 Celtic Irish soldiers were shipped to Spain, France, Poland, and Italy. After deporting the men, Cromwell succeeded in deporting the widows, deserted wives, and destitute families of soldiers left behind. Thereafter, thousands of Irish men, women, and children were hijacked, deported, exiled, low-interest loaned, or sold into the new English tobacco islands of the Caribbean. Within a few years, substantial proportions of mostly Atlantic Coast Africans were thrown on the so-called coffin ships and transported to the Caribbean. In an environment that was dominated by the English sugar plantation owner, Irish indentured servants and West African slaves worked and slaved together. "For an entire century, these two people are left out in the fields to hybridize and miscegenize and grow something entirely new," writes Irish historian Leni Sloan. "Ibo men playing bodhrans and fiddles and Kerrymen learning to play jubi drums, set dances becoming syncopated to African rhythms, Saturday night ceili dances turning into full-blown voodoo rituals" (Sloan 1982:52). The cultural exchange between first-generation enslaved Africans and indentured Irishmen would continue through the late 1600s on plantations, and in urban centers during the transition from white indentured servitude to African slave labor.
It is believed that on the island of Montserrat in the Lesser Antilles of the Caribbean, the Africans' first European language was Gaelic Irish, and that retentions and reinterpretations of Irish forms were most pronounced in music, song, and dance (Messenger 1975: 298). And in Joseph Williams's book, Whence the Black Irish of Jamaica, the sheer number of Irish surnames belonging to former African slaves—Collins, Kennedy, McCormick, O'Hare—supports the contention that enslaved and indentured blacks and whites lived and danced together. They also rebelled together. The 1741 St. Patrick's Day Rebellion in New York was led by John Cory, an Irish dancing master, and Caesar, a free African, who together burned down the symbols of British rule, the Governor's mansion and main armory. Cory and Caesar died together in the brutal suppression that followed.
Jigging and Juba
As Africans were transplanted to America, African religious circle dance rituals, which had been of central importance to their life and culture, were adapted and transformed (Stuckey 1987). The African American Juba, for example, derived from the African djouba or gioube, moved in a counterclockwise circle and was distinguished by the rhythmic shuffling of feet, clapping hands, and "patting" the body, as if it were a large drum. With the passage of the Slave Laws in the 1740s prohibiting the beating of drums for fear of slave uprisings, creative substitutes for drumming developed, such as bone-clapping, jawboning, hand-clapping, and percussive footwork. There were also retentions by the indentured Irish, as well as parallel retentions between the Irish and enslaved Africans, of certain music, dance, and storytelling traditions. Both peoples took pride in skills like dancing while balancing a glass of beer or water on their heads, and stepping to intricate rhythmic patterns while singing or lilting these same rhythms. Some contend that the cakewalk, a strutting and prancing dance originated by plantation slaves to imitate and satirize the manners of their white masters, borrows from the Irish tradition of dancing competitively for a cake, and that Africans may have transformed the Irish custom of jumping the broomstick into their own unofficial wedding ceremony at a time when slaves were denied Christian rites.
The oral traditions and expressive cultures of the West Africans and Irish that converged and collided in America can best be heard. The flowing 6/8 meter of the Irish Jig that was played on the fiddle or fife (a small flute) can be distinguished from the polyrhythm of West African drumming, with its propulsive or swinging quality. The fusion of these in America produced black and white fiddlers who "ragged" or syncopated jig tunes. Similarly, the African-American style of dance that angled and relaxed the torso, centered movement in the hips, and favored flat-footed gliding, dragging, and shuffling steps, melded with the Irish-American style of dance that stiffened the torso, minimized hip motion, and emphasized dexterous footwork that favored bounding, hopping, and shuffling (Kealiinohomoku 1976).
By 1800, "jigging" became the general term for this new American percussive hybrid that was recognized as a "black" style of dancing in which the body was bent at the waist and movement was restricted from the waist down; jumping, springing, and winging air steps made it possible for the airborne dancer, upon taking off or landing, to produce a rapid and rhythmic shuffling in the feet. Jigging competitions featuring buck-and-wing dances, shuffling ring dances, and breakdowns abounded on the slave plantations where dancing was encouraged and often enforced. As James W. Smith, an ex-slave born in Texas around 1850, remembers: "Master . . . had a little platform built for the jigging contests. Colored folk comes from all around to see who could jig the best. . .on our place was the jigginist fellow ever was. Everyone round tries to git somebody to best him. He could. . . make his feet go like triphammers and sound like the snaredrum. He could whirl round and such, all the movement from his hips down" (Stearns 1968, 37). Any dance in the so-called Negro style was called a breakdown, and it was always a favorite with the white riverboat men. Ohio flatboatmen indulged in the Virginia breakdown. And in Life on the Mississippi (1883) Mark Twain wrote that "keelboatmen got out an old fiddle and one played and another patted juba and the rest turned themselves loose on a regular old-fashioned keelboat breakdown."
Clog and Hornpipe
The Lancashire Clog was another percussive form that contributed to the mix during this period. Danced in wooden-sole shoes, the Clog came to America from the Lancashire region of England in the 1840s and over the next forty years rapidly evolved into such new styles as the Hornpipe, Pedestal, Trick, Statue, and Waltz Clog. The Clog also melded with forms of jigging to produce a variety of percussive styles ranging from ballroom dances with articulate footwork and formal figures to fast-stomping competitive solos that were performed by men on the frontier. None of these percussive forms, however, had syncopated rhythm; in other words, they all lacked the swinging rhythms that would later come in such percussive forms as the Buck and Wing and Essence dances that would lead to the Soft Shoe.
The Minstrel Show
Though African-Americans and European-Americans borrowed and copied from each other in developing a solo vernacular style of dancing, white performers drew more heavily on African-American folk material. By the 1750s, "Ethiopian delineators," many of them English and Irish actors, arrived in America. John Durang's 1789 "Hornpipe," a clog dance that mixed ballet steps with African-American shuffle-and-wings, was performed in blackface make-up (Moore 1976). By 1810, the singing-dancing "Negro Boy" was established as a dancehall character by blackface impersonators who performed jigs and clogs to popular songs. In 1829, the Irishman Thomas Dartmouth Rice created "Jump Jim Crow," a black version of the Irish jig that appropriated a Negro work song and dance, and became a phenomenal success. After Rice, Irishmen George Christy and Dan Emmett organized the Virginia Minstrels, a troupe of blackface performers, thus consolidating Irish American and Afro-American song and dance styles on the minstrel stage (Winter 1978). By 1840, the minstrel show, a blackface act of songs, fast-talking repartee in Negro dialects, and shuffle-and-wing tap dancing, became the most popular form of entertainment in America. From the minstrel show, the tap act inherited the walk-around finale, with dances that included competitive sections in a performance that combined songs, jokes, and specialty dances.
It is largely because of William Henry Lane (c.1825-52) that tap dancing in the minstrel period was able to retain its African-American integrity. Born a free man, Lane grew up in the Five Points district of lower Manhattan, whose thoroughfares were lined with brothels and saloons that were largely occupied by free blacks and indigent Irish immigrants. Learning to dance from an "Uncle" Jim Lowe, an African-American jig and reel dancer of exceptional skill, Lane was unsurpassed in grace and technique and was popular for imitating the steps of famous minstrel dancers of the day, and then executing his own specialty steps, which no one could copy. In 1844, after beating the reigning Irish-American minstrel John Diamond (1823-1857) in a series of challenge dances, Lane was hailed "King of All Dancers" and proclaimed "Master Juba." He was the first African American dancer to tour with the all-white minstrel troupe, Pell's Ethiopian Serenaders, and to perform without blackface makeup for the Queen of England (Winter 1948). Lane is considered the single most influential performer in nineteenth-century dance. His grafting of African rhythms and a loose body styling onto the exacting techniques of jig and clog forged a new rhythmic blend of percussive dance that was considered the earliest form of American tap dance.
When black performers finally gained access to the minstrel stage after the Civil War, the tap vocabulary was infused with a variety of new steps, rhythms, and choreographic structures from African-American social dance forms. "The Essence of Old Virginia," originally a rapid and pigeon-toed dance performed on the minstrel stage, was slowed down and popularized in the 1870s by the African-American minstrel Billy Kersands. The Essence would later be refined by the Irish-American minstrel George Primrose into a graceful Soft Shoe, or Song-and-Dance, to become the most elegant style of tap dancing on the musical stage.
The Reconstruction
The Reconstruction era was also the time when technical perfection in tap dance was valued and awarded, and when the obsession with precision, lightness, and speed—which had long been valued in traditional Irish Jig dancing—became the ruling standard of judgment in publicly contested challenge dances. The New York Clipper (April 11, 1868) reported that in one such challenge, "Charles M. Clarke, a professional jig dancer . . . had a contest on the evening of the 3rd in Metropolitan Hall . . . for a silver cup valued at $12. Clarke did a straight jig with eighty-two steps and won the cup. Edwards broke down after doing sixty-five steps." In the 1880s, big touring shows such as Sam T. Jack's Creole Company and South Before the War brought new styles of black vernacular stepping to audiences across America. While black vaudeville troupes like Black Patti's Troubadours featured cakewalk and buck-and-wing specialists in lavish stage productions, traveling medicine shows, carnivals, and Jig Top circuses featured chorus lines and comics dancing an early style of jazz-infused tap that combined shuffles, wings, drags, and slides with flat-footed buck and eccentric dancing.
Turn of the Century
At the turn of the twentieth century, when the syncopated and duple-metered rhythms of ragtime were introduced on the musical stage, tap dance underwent its most significant transformation. The music of ragtime, created from a new and unprecedented borrowing and blending of European melodic and harmonic complexities and African-derived syncopation, evolved into the earliest form of jazz. So too, tap dance, in its absorption of early ragtime and jazz rhythms, evolved into jazz tap dance. The all-black Broadway musical, Clorindy, or the Origins of the Cakewalk (1898), presents a sterling example of this turn-of-the-century jazz and tap fusion. Will Marion Cook's music for Clorindy was marked by the distinctly syncopated rhythm of ragtime, while Paul Laurence Dunbar's lyrics were performed in a syncopated Negro dialect ("Dam de lan', let the white folks rule it!/ I'se a-looking fo' mah pullet") and Ernest Hogan's choreography featured offbeat cocks of the head, shuffling pigeon-wings, and sliding buzzard lopes. In Dahomey (1902), another turn-of-the-century black musical, saw Bert Williams playing the role of the low-shuffling Fool, and his partner George Walker in the role of the high-strutting Dandy. Wearing blackface makeup and shoes that extended his already-large feet, Williams shuffled along in a hopeless way, interspersing grotesque and offbeat slides between choruses, while Walker as the "spic-and-span Negro" turned his cocky strut into a high-stepping cakewalk that he varied dozens of times. In "Cakewalk Jig," Williams and Walker danced buck-and-wings, bantam twists, and rubber-legging cakewalks to a "ragged" up jig, thus introducing a black vernacular dance style to Broadway that was an eccentric blend of the shuffle, strut-turned cakewalk, and grind, or mooche.
At the turn of the century, it was imperative for tap dancers to compete in buck-and-wing and cakewalk contests in order to earn the status of professional and gain entry onto the Broadway musical stage. Arriving in New York in 1900, Bill Robinson challenged Harry Swinton, the Irish-American dancing star of In Old Kentucky, to a buck-and-wing contest, and won. With a gold medal and the valuable publicity that was bestowed upon winning, Robinson was targeted as the new man to challenge. While Robinson fused ragtime syncopation with a light-footed and vertical style of jigging that favored the elegant soft-shoe of the famed Irish-American dancer George Primrose, "King" Rastus Brown was known for a flat-footed style called Buck dancing. Among the oldest styles of percussive stepping dating back to the plantation days, Buck dancing worked the whole foot close to the ground with shuffling, slipping, and sliding steps, with movement mostly from the hips down. Brown developed the Buck style into a paddle-and-roll style which was perfected in his famous "Buck Dancer's Lament," which consisted of six bars of the time step plus a two-bar, improvised stop-time break.
Women in Tap
The conceptualization of tap dance as an Afro-Irish fusion, fueled by the competitive interplay of the challenge in a battle for virtuosity and authority, puts into focus issues of race and ethnicity; and inevitably takes on the painful history of race, racism, and race relations in America. In addition, there are issues of class, in which tap was considered a popular entertainment and placed in the category of "low-art," and therefore not worthy of being presented on the concert stage. Moreover, the strange absence of women in early accounts of jigging competitions forces a consideration of gender in the evolution of tap dance which, for most of the twentieth century, was considered "a man's game." That has become a kind of mythologized truth, given the plethora of tap histories that have sidelined women. By inference or direct statement, women were told they were "weak"; they lacked the physical strength needed to perform the rhythm-driven piston steps, multiple-wing steps, and flash and acrobatic steps that symbolized the (male) tap virtuoso's finish to a routine. Women were "nurturers," not "competitors," and therefore did not engage in the tap challenge. A woman's role was not as a soloist but as a member of the chorus line.
Racial and ethnic lines were distinctly drawn in New York at the turn of the twentieth century, but not so strictly drawn, geographically and culturally, between Irish and African Americans living in some neighborhoods. Of the 60,666 blacks in the city in 1900, the majority was concentrated in Manhattan, with most squeezed into two neighborhoods—the so-called Tenderloin district, which generally covered the West Twenties, and San Juan Hill, which spanned from Sixtieth to Sixty-fourth Streets, from Tenth to Eleventh Avenues. New York also had a population of 275,000 Irish-born residents (not counting their American-born offspring who, together with Irish immigrants, accounted for 26% of the population) living in Brooklyn, which in 1900 was considered the largest Irish settlement in the world.
The social entertainments announced in the Brooklyn Daily Eagle in that period reveal dozens of buck-and-wing performances by semi-professional male and mostly Irish dancers. There are also a surprising number of notices in the Brooklyn Eagle announcing buck-and-wing performances by female dancers: Miss Florence Brockway, "singer and buck and wing dancer" at the Knights of Columbus Hall near Douglas Street in Brooklyn (4/23/1902); Agnes Falkner, "buck and wing dancing" in an elaborately produced show in Asbury Park, New Jersey (4/17/1902); Mame Gerue, "a very graceful dancer, both in imitation of the Spanish fandango and on the sand as a buck and wing stepper" at the Orpheum Theatre in Brooklyn (12/3/1901); Miss Belle Lewis in "her famous buck and wing specialty" at a "merry party" assembled on the premises of Mr. and Mrs. William E. Houtain at 282 Putnam Avenue, Brooklyn (11/30/1901); Belle Gold, who "showed considerable cleverness in buck and wing dances" in a vaudeville bill at the Floating Roof Garden at the Manhattan Beach Theater (7/16/1901); the Newell sisters, "buck and wing dancers" at the Unique vaudeville house (2/25/1902); and the Esher sisters, "buck and wing dancers" appearing at the Orpheum in Brooklyn in a show headlining the opera singer Pauline Hall (5/21/1901). Nellie De Veau received several announcements, one at Paula's Musee, formerly known as Haverly's Musee, as a "buck and wing dancer" (11/12/1901); another as a "buck and wing and skirt dancer" at the Jefferson Club of the Sixteenth Assembly District in Brooklyn, headquarters of the Democrats under the leadership of James S. Regan (2/14/1900).
With only the surnames, forms of address (Miss, Mrs.), venues, occasions for dance, performances, and generic titling as buck-and-wing dancers, it is difficult to discern the style of buck and wing that each of these women danced, let alone their race or ethnicity, which such stage names as "Mame" and "Belle" disguise. Many of the occasions for female buck-and-wing dancers were clearly for social and political functions in small vaudeville houses that usually featured solo acts and some duos. Most certainly those performances continued a strong tradition of female blackface Irish jig and clog dancing that had begun in nineteenth-century minstrel and variety stage shows.
Lotta (Mignon) Crabtree
Lotta (Mignon) Crabtree was born on Nassau Street in New York City in 1847, and raised in California during the Gold Rush, where she learned ballet, fandangos, and the Highland fling. Since in the 1850s half of California's population was Irish, her teachers made sure she excelled at the jig. As a dancer touring mining camps, she was introduced to an African-American dancer who taught her breakdowns, soft-shoes, and buck-and-wing dances. Crabtree's fame spread throughout the country as a performer of jigs and reels with acrobatic flourishes. Her only competitors were the three Worrell sisters, Irene, Sophie, and Jennie, who performed in clog-dancing shoes. When it was later discovered that Jennie Worrell's clogs had trick heels (heels that were hollowed out with tin-lined boxes placed inside and holding two bullets), which made it sound like she was dancing faster than she really was, Crabtree had no peers when it came to jig and clog. "She can dance a regular breakdown in true burnt cork style and gives an Irish Jig as well as we have ever seen it done," wrote the New York Clipper in 1864 (Rourke 1928). In her later years she became a popular actress and the toast of Broadway. While she retired from the stage in 1891 at the age of forty-four, her renown as a female jig and breakdown dancer lasted into the early decades of the twentieth century.
Ada Overton Walker
Ada Overton Walker was born on Valentine's Day in 1880 in New York's Greenwich Village. As a child she received dance instruction from a Mrs. Thorp in midtown Manhattan. Around 1897, after graduating from Thorp's dance school, she toured briefly with Black Patti's Troubadours. A girlfriend invited her to model for an advertisement with Bert Williams and George Walker, who had just scored a hit in their vaudeville debut at Koster and Bial's Music Hall. She agreed to model for the ad and subsequently joined the men to dance in the cakewalk finale. After she joined John W. Isham's Octoroons, a critic for the Indianapolis Freeman declared, "I had just observed the greatest girl dancer." With Grace Halliday she formed the sister dance act of Overton and Halliday. They performed as the pair of Honolulu Belles in Williams and Walker's The Policy Players (1899), and from there, Overton began to develop as a soloist with more substantial roles. In the musical comedy The Sons of Ham (1900) she sang and danced "Miss Hannah from Savannah" and "Leading Lady"; and in its second edition, "Society" and "Sparkling Ruby," which brought her jubilant acclaim. James Weldon Johnson wrote that she "had a low-pitched voice with a natural sob to it, which she knew how to use with telling effect in putting over a song" (Johnson 1933). Tom Fletcher remembered her as a singer who did ragtime songs and ballads equally well; and as a dancer "who could do almost anything, and no matter whether it was buck-and-wing, cakewalk, or even some form of grotesque dancing . . . she lent the performance a neat gracefulness of movement unsurpassed by anyone" (Fletcher 1954).
On June 22, 1899, after the closing of A Lucky Coon, the ragtime musical described by Ann Charters as "a hodge-podge of everything in the 'coon' line from buck-dancing and ragtime melodies to selections from the grand opera," Overton married George Walker; their partnership would transform and elevate the cakewalk from a core African-American folk dance into a black modernist expression, a high art worthy of being performed before royalty, for the white elite, and on the concert stage.
Williams and Walker's Dahomey (1903) was one of the first black musicals to realize the cakewalk's transformation. In it, Aida (she changed the spelling of her name from Ada to Aida, the name of the Haitian lwa of fertility) played Rosetta Lightfoot with a featured solo, "I Want to Be a Real Lady," and the "Cakewalk Finale," partnered by her husband. "The line, the grace, the assured ecstasy of these dancers, who bent over backward until their heads almost touched the floor, a feat demanding an incredible amount of strength, their enthusiastic prancing, almost in slow motion, have never been equaled in this particular revel, let alone surpassed," wrote Carl Van Vechten (Van Vechten 1974). Dahomey was brought to London, where it was presented at a command performance before King Edward VII at Buckingham Palace. British high society followed the Royal Family with a gushing enthusiasm for cakewalking. The company returned to New York with a new version of the musical called In Dahomey, which opened at the Grand Opera (1904).
In Williams and Walker's next show, Abyssinia (1906), Overton Walker was both a performer and the show's choreographer. She was next featured in and staged the musical numbers for Bandanna Land (1908). One evening while onstage, George Walker, playing the role of Bud Jenkins, became ill. His symptoms were later diagnosed as syphilis. In 1909 he left the show, and his role was rewritten for Overton Walker, who donned his flashy clothes and sang his numbers, including his major song, "Bon Bon Buddie." With her husband's condition slowly deteriorating, and faced with where her future lay, she chose not to renew her contract with Williams and Walker and instead joined the cast of Bob Cole and J. Rosamond Johnson's The Red Moon (1909), in which she was featured in two musical numbers, "Phoebe Brown" and "Pickaninny Days," dancing buck-and-wing with the chorus. She next opened at New York's American Theater with a vaudeville act featuring a new dance, the "Kara Kara," or danse l'Afrique. In 1910, she joined the Smart Set, a black theatrical company, and starred in His Honor the Barber (1911).
By July 1911, six months after her husband George Walker died, Overton Walker formed a new vaudeville act comprising one male and eight female dancers; she sang "Shine" as a male, impersonating her late husband, and performed the new dance craze, "The Barbary Coast," in close embrace with her new young male partner. From 1912 until her death in 1914 she choreographed for two black female dance groups, "The Happy Girls" and "Porto Rico Girls," whose dancers included Lottie Gee and Elida Webb.
In 1912 she danced "Salome" in a spectacular vaudeville performance at Oscar Hammerstein's Victoria Theatre in New York. She also rejoined Bert Williams for the annual Frog's Frolic, appearing onstage with Bill Robinson and minstrel showman Sam Lucas. In 1914 she switched from African-style dance to ballroom dance in her vaudeville act. With her new partner, Lackaye Grant, Aida presented several ballroom dances whose roots, she made clear, were in the black vernacular: "Maxixe," "Southern Drag," "Jiggeree," and "Tango." She participated in the tango fad by giving a "Tango Picnic" (July 1914) at New York's Manhattan Casino, where she and Grant performed their ballroom dance act with the "Southern Drag," receiving the most applause from the black audience. The "Tango Picnic" was Overton Walker's last public appearance. She died October 11, 1914, from kidney disease.
Mourned as the foremost African-American female stage artist of her day, Overton Walker anticipated the choreographic work of modern dance pioneers Katherine Dunham and Pearl Primus through her interest in both African and African-American indigenous material and her translation of these onto the modern stage. Both in her solo work for women and in the unison and precision choreographies for the female chorus, she claimed a female presence on the stage. She also gave presence to black rhythm dancing, thus opening primetime, public professional space for tap performance. By negotiating the narrow white definitions of appropriate black performance with her own version of black specialization and innovation, Overton Walker established a black cultural integrity onstage, setting the model by which African-American musical artists could gain acceptance on the professional concert stage.
1920s and 1930s: Broadway Jazz
In the teens of the twentieth century, Americans went "dance mad" with the foxtrot, a syncopated ragtime dance that bounced couples along the floor with hops, kicks, and capers. Dozens of black-based "animal" dances, such as the Turkey Trot, Monkey Glide, Chicken Scratch, Bunny Hug, and Bull Frog Hop, were danced to ragtime rhythms. While dance bands in downtown New York clubs were "jassing up" (adding speed and syncopation) such dances as the Grizzly Bear and Kangaroo Dip for their white clientele, uptown Harlem audiences were rocking to Darktown Follies. J. Leubrie Hill's all-black musical revue of 1913 expressed an inexorable rhythm by its dancers, who "stepped about, and clapped their hands, and grew mad with their bodies" (Van Vechten 1974). The show introduced the "Texas Tommy," prototype of the Lindy Hop, as well as new styles of tap dancing. One was Eddie Rector's smooth style of "stage dancing," in which every move made a beautiful picture. Another was the acrobatic and high-flying style of Toots Davis, whose "Over the Top" and "Through the Trenches" were named for wartime combat maneuvers. The dance finale, "At the Ball," was a spiraling, stomping circle dance whose rhythms, wrote Carl Van Vechten, "dominated me so completely that for days afterwards, I subconsciously adapted whatever I was doing to its demands." Florenz Ziegfeld bought the entire show for his Follies of 1914, thus helping to transplant black vernacular dance and jazz rhythms onto the Broadway stage.
By the Jazz Age twenties, both black and white dancers had discovered the rhythmic power of jazz. In this decade in which jazz music became a popular nighttime entertainment, jazz tap dance—distinguished by its intricate rhythmic motifs, polyrhythms, multiple meters, and elements of swing—emerged as the most rhythmically complex form of jazz dancing. Setting itself apart from all earlier forms of tap dance, jazz tap dance matched its speed to that of jazz music, and often doubled it. Here was an extremely rapid yet subtle form of drum dancing that demanded that the dancer's center be lifted, the weight balanced between the balls and heels of both feet. While the dancer's alignment was upright and vertical, there was a marked angularity in the line of the body that allowed for the swift downward drive of weight.
It is generally believed that Shuffle Along (1921), the all-black musical with music by Eubie Blake and lyrics by Noble Sissle, introduced the most exciting form of jazz tap dancing ever seen on the Broadway stage. Blake's musical score provided a foot-stomping orgy of giddy rhythms that spanned traditional and early jazz styles. While the jazz dancing in Shuffle Along was never specifically referred to as "tap dance," the styles of percussive stepping that certainly belonged to jazz tap dance were often described and singled out as the most exciting aspects of the dancing. "Jimtown's Fisticuffs," the boxing match performed by Flournoy Miller and Aubrey Lyles as two would-be mayors, saw the rivals swinging and knocking each other down, jumping over each other's backs, and finishing each round with buck-and-wing and time steps. The title song, "Shuffle Along," a song-and-dance number featuring the Jimtown Pedestrians, had the Traffic Cop, played by Charlie Davis, performing a high-speed buck-and-wing dance that staggered the audience. Elsewhere in the musical, Tommy Woods did a slow-motion acrobatic dance that began with time-step variations and included flips landing on the beat of the music; and Ulysses "Slow Kid" Thompson, a well-known tap dancer, performed an eccentric soft shoe with rubber-legging legomania. The most obvious reference to tap dance in Shuffle Along is the "shuffle" of the title, a rapid and rhythmic brushing step that is the most basic step in tap dancing. The step also refers to the minstrel stereotype of the old and shuffling plantation slave who, accused of being lazy and venal, drags and scrapes his feet along the ground. While the book in Shuffle Along purveyed the old caricature of the black-shuffling Fool, the musical part of the show embodied a new image of the black dancer as a rhythmically propulsive source of energy. Tap dance was thus resurrected from its nineteenth-century minstrel origins into a modern twentieth-century art form.
After Shuffle Along, musical comedy on Broadway in the twenties took on a new rhythmic life as chorus girls began learning to dance to new rhythms.
While Broadway chorus lines in the twenties performed simple steps in square rhythms and complicated formations devised by such choreographers as Ned Wayburn, the most elite of white Broadway stars worked with the African-American choreographer Clarence "Buddy" Bradley. Born in Harrisburg, Pennsylvania, Bradley moved to New York in the twenties, where he learned to tap dance at the Hoofers Club and performed as a chorus dancer at Connie's Inn. After re-choreographing the Greenwich Village Follies in 1928, he worked at the Billy Pierce Dance Studio off-Broadway, where he created dance routines for such white Broadway stars as Gilda Gray, Jack Donahue, Ruby Keeler, Adele Astaire, and Ann Pennington. On Broadway in the twenties, musical comedy dancing, with its simple walking steps, was reserved for ingenues and considered the lowest common denominator in show dancing. Uptown, African-American tap dancers were inventing intricate steps with complex rhythms. Bradley's formula for creating dance routines for white dancers was to simplify the rhythms in the feet while sculpting the body with shapes from black vernacular dances. Even though he simplified rhythms, he never sacrificed the syncopated accents of jazz, and he used the accents of jazz improvisations to shape new rhythmic patterns in the body (Hill 1992).
Bill Robinson and John Bubbles
The rhythmic revolution that began with Shuffle Along (1921) continued on Broadway with Strut Miss Lizzie (1922), Liza (1922), and Runnin' Wild (1923), in which a new tap-dancing version of the Charleston was performed while the chorus beat out the time with hand-clapping and foot-patting (the beating out of complex rhythms had never before been seen on a New York stage). It was not until Lew Leslie's Blackbirds of 1928 that jazz tap dancing began to be distinguished as the most rhythmically complex "cream" of jazz dancing. Blackbirds starred Bill "Bojangles" Robinson, a veteran performer in vaudeville and the most beloved dancer in the black community, who at the age of fifty was "discovered" by Broadway audiences and pronounced "King of Tap Dancers."
Born in Richmond, Virginia, in 1878, Robinson had earned nickels and dimes by dancing and scat-singing in the street. He had begun his career performing as a member of a "pickaninny" chorus, and by the twenties had become a headliner on both the Keith and Orpheum circuits, and at New York's prestigious Palace Theatre. In Blackbirds, Robinson performed his famous "Stair Dance," which he had introduced in vaudeville about 1918. Dancing up and down a flight of stairs in his split-soled clog shoes (the wooden half-sole, attached from the toe to the ball of the foot, was left loose), he tuned each step to a different pitch and gave it a different rhythm. As he danced to clean four- and eight-bar phrases followed by a two-bar break, Robinson's taps were delicate, articulate, and intelligible. Whether interweaving buck or time steps with whimsical skating steps or little crossover steps danced on the balls of the feet, the dancing was upright and rhythmically swinging. The light and exacting footwork is said to have brought tap dance "up on its toes" from an earlier, earthier, more flat-footed shuffling style. Langston Hughes, describing these tap rhythms as "human percussion," believed that no dancer had ever developed the art of tap dancing to a more delicate perfection than Robinson, who could create "little running trills of rippling softness or terrific syncopated rolls of mounting sound, rollicking little nuances of tap-tap-toe, or staccato runs like a series of gun-shots." Reviewing Blackbirds of 1928, Mary Austin observed in The Nation that the postures of Robinson's lithe body and the motions of his slender cane punctuated his rhythmic patter and restored for his audience "a primal freshness of rhythmic coordination" that was fundamental to art. Broadway had not only discovered Robinson, but had become newly enamored of a strikingly modern rhythm dance that interpreted Negro folk rhythms, transforming them into a sleekly modern black expression.
"A Bojangles performance is excellent vaudeville," wrote Alain Locke, "But listen with closed eyes, and it becomes an almost symphonic composition of sounds. What the eye sees is the tawdry American convention; what the ear hears is the priceless African heritage."
The 1920s also saw the rise of John Sublett Bubbles, who is credited with inventing "rhythm tap," a fuller and more dimensional rhythmic concept that utilized the dropping of the heels as accents. Born in Louisville in 1902, Bubbles at the age of ten teamed with the six-year-old Ford Lee "Buck" Washington in an act billed as "Buck and Bubbles." Bubbles sang and danced while Buck played accompaniments, standing at the piano. After winning a series of amateur night shows, they began touring in musical engagements. At the age of eighteen Bubbles' voice began to change, and instead of giving up show business he focused on dancing. After smarting from the embarrassment of being laughed out of the Hoofers Club as a novice, Bubbles developed his technique and returned to the Club to win everyone over with a new style of tapping laced with Over-the-Tops and triple back slides. By 1922, Buck and Bubbles had reached the pinnacle of the black vaudeville circuit known as T.O.B.A.; their singing-dancing-comedy act also headlined the white vaudeville circuit from coast to coast. Buck's stop-time piano, played in the laziest manner imaginable, contrasted with Bubbles' witty explosion of taps in counterpoint. They appeared in Broadway Frolics of 1922 and Lew Leslie's Blackbirds of 1930, and sensationalized The Ziegfeld Follies of 1931. Bubbles' rhythm tapping revolutionized dancing. Before him, dancers tapped up on their toes, capitalized on flash steps, and danced to neat two-to-a-bar phrases. Bubbles loaded the bar, dropped his heels, and hit unusual accents and syncopations, opening the door to modern jazz percussion.
While most white professional dancers learned tap dance in the studio in the twenties and thirties, black dancers usually developed on their own, on the street, or in the dance hall, where dancing was as hotly contested as a basketball game. And it was at the Hoofers Club in Harlem—an old pool hall next to and down the stairs from the Lafayette Theater—that rookie and veteran tap dancers assembled to share with, steal from, and challenge each other. Dancers who frequented the Hoofers Club and perfected their technique there included Bill Robinson, John Bubbles, Honi Coles, Eddie Rector, Dewey Washington, Raymond Winfield, Roland Holder, Harold Mablin, "Slappy" Wallace, Warren Berry, and Baby Laurence.
Cora LaRedd
The rhythmic brilliance, athleticism, and open sexuality of Cora LaRedd's dancing made her not only the most noted female soloist at the Cotton Club in the 1920s and 1930s, but also the most extraordinary jazz tap dancer of those decades. Recognized as a brilliant Harlem singer and dancer when she became the lead performer for arranger and bandleader Charlie Dixon (of the Fletcher Henderson band), LaRedd received her first Broadway notices in the musical comedy Say When (1928), in which she was singled out as "a sepia-tinted Zora O'Neal who combined limber-legged dancing with wah-wah singing." Broadway saw much of LaRedd in the late 1920s. The "all-colored musical novelty" Messin' Around (1929), with music by James P. Johnson, lyrics by Perry Bradford, and dances by Eddie Rector, featured LaRedd in "Tapcopation," "Put Your Mind Right On It," and a Waltz Clog specialty with Charles Johnson. In the all-black musical comedy Change Your Luck (1930), with music and lyrics by J.C. Johnson and dances by Laurence Deas and Speedy Smith, LaRedd excelled in "Can't Be Bothered Now," "My Regular Man," and "Percolatin." Audiences were dazzled by LaRedd at the Cotton Club, where she was regularly featured as the leading song-and-dance diva. In the fall 1930 Cotton Club revue "Brown Sugar (Sweet But Unrefined)," LaRedd was a featured soloist on the bill with Wells, Mordecai and Taylor in "Hittin' the Bottle."
The best example of LaRedd's dancing may be seen in the twelve-minute black-and-white musical short That's the Spirit (1933), regarded as one of the greatest all-black jazz shorts ever made. In it, LaRedd sings and dances. Small and compact, the dark-skinned dancer shows a fiery vitality. Wearing a white satin blouse with full-blown sleeves and black shorts that throw attention to her strong, gleaming legs and feet, she dances at shimmering speed, her low-heeled Mary-Jane shoes framing those fast feet; her triple-time steps and treble-roll steps, which resemble Bill Robinson's steps and style, were never danced more up-tempo and swinging.
1930s and 1940s: Tap on Film
In the thirties and forties, jazz tap dancing continued to develop in direct relationship to jazz music. Swing-style jazz of the thirties emphasized rhythmic dynamics, with relatively equal weight given to the four beats of the bar (hence the term "four-beat jazz"), solo improvisation, and a forward propulsion imparted to each note by an instrumentalist through the manipulation of attack, timbre, vibrato, and intonation. Tap dancers were often featured performing in front of swing bands in dance halls like Harlem's Savoy Ballroom. The swinging four/four bounce of bands like Count Basie's and Duke Ellington's proved ideal for hoofers, while intimate nightclubs such as the Cotton Club featured excellent tap and specialty dancers and tap chorus lines like the Cotton Club Boys.
It was also in the thirties and forties that tap dance was immortalized in such Hollywood film musicals as Dixiana (1930), starring Bill Robinson; Forty-Second Street (1933), starring Ruby Keeler; The Little Colonel (1935), starring Robinson and Shirley Temple; Swing Time (1936), starring Fred Astaire; Atlantic City (1944), featuring Buck and Bubbles; Lady Be Good (1941), featuring the Berry Brothers; Stormy Weather (1943), featuring Bill Robinson and the Nicholas Brothers; and The Time, the Place and the Girl (1946), featuring the Condos Brothers. For the most part, because of continued segregation and different budgets, black dancers were denied access to the white film industry. As a result, a distinction in tap styles began to develop. In general, black dance artists such as John Bubbles continued the tradition of rhythm tap on stage and screen, with its flights of percussive improvisation, while white artists like Gene Kelly evolved a balletic, Broadway style of tap dancing in film and Broadway musicals in which jazz rhythms were less important than the integration of dance into the narrative structure of the musical. As tap became the favored form of American theatrical dance, new styles emerged. The Eccentric style was exemplified by the attention-getting routines of Jigsaw Jackson, who circled and tapped while keeping his face screwed to the floor; Clarence "Dancing" Dotson, who tapped and scratched in swinging counterpoint; and Alberta Whitman, who executed high-kicking legomania as a male impersonator. The Russian style, pioneered by Ida Forsyne in the teens with her Russian kazotsky kicks, was made popular by Dewey Weinglass and Ulysses "Slow Kid" Thompson. The Acrobatic style was exemplified by Willie Covan and the Four Covans, Three Little Words, and the Four Step Brothers, who specialized in flips, somersaults, cartwheels, and splits.
The Flash Act dancing of the Berry Brothers was brought to a peak by combining tap with highly stylized acrobatics and precision-timed stunts. Black comedy dance teams such as Slap and Happy, Stump and Stumpy, Chuck and Chuckles, and Cook and Brown infused tap dancing with jokes, knockabout acrobatics, grassroots characterizations, and rambunctious translations of vernacular dance in a physically robust style.
Eleanor Powell
Eleanor Torrey Powell was born in 1913 in Springfield, Massachusetts, and raised by her maternal grandparents while her mother, Blanche Torrey, worked as a chambermaid, waitress, and bank teller. At age seven she studied ballet and acrobatics with Ralph McKernan. In the summer of 1925, during a family visit to Atlantic City, New Jersey, she was turning cartwheels on the beach when she was discovered by the entrepreneur Gus Edwards, who offered her a job working three nights a week, earning $7 per show in a dinner club at the Ambassador Grill. In the summer of 1927, Powell returned to Atlantic City to work at the Silver Slipper and at Martins, high-priced supper clubs. She headed to New York in the fall of 1929, where she worked for three months at Ben Bernie's nightclub. She also danced at private parties, where she met and appeared on the same bill as Bill Robinson; with him, she devised a dance routine in which they challenged each other. Powell and Robinson performed at various private society parties organized by the Vanderbilts, Rockefellers, and others, for which they were paid $500 a night. Robinson became her lifelong friend, and later taught her his famous stair dance.
Continuing to audition for Broadway shows, Powell decided to take tap dance lessons and enrolled in Jack Donahue's school, where she studied with Donahue and Johnny Boyle; ten tap lessons were all she needed to launch her career on Broadway. She debuted in Follow Thru (1929), thanks to Donahue's class routine for "Button Up Your Overcoat," which she used for her audition. She made her screen debut in the Paramount Picture musical comedy Queen High (1930), and returned to Broadway with Fine and Dandy (1930), in which she performed three numbers: "I'll Hit a New High," "Jig Hop," and "Waltz Ballet." Then came the Florenz Ziegfeld-produced musical Hot-Cha! (1932), with Powell dancing "There's Nothing the Matter with Me." She performed in George White's Music Hall (1933) with two rhythm tap numbers, the New York Times calling her "an excellent tap dancer, who stands out markedly."
In the early thirties, with the Depression at its worst and most Broadway producers cutting costs, Powell was cast by Louis B. Mayer in a small role in Broadway Melody of 1936—that of a struggling dancer come to the big city to become a star. Her routine combined elements of ballet and acrobatic dancing, doing pivot turns and arabesques and letting male dancers toss her in the air. She worked her rhythms close to the ground, tapping with slurring speed. The New York Times wrote that she had "the most eloquent feet in show business" and likened her to Fred Astaire; Time claimed that the film confirmed her status as "the world's greatest female tap dancer." She was immediately offered a long-term contract at MGM, which she began with Born to Dance (1936), a lavish musical with songs by Cole Porter. She was treated to an MGM beauty makeover, complete with ultraviolet-light freckle-removing treatment, capped teeth, and a curly, more feminine hairstyle. MGM made no secret of the fact that her voice was dubbed. At MGM, however, Powell had full control of her choreography and was given a studio in which to rehearse; she also dubbed her own tap steps.
Powell was paired with Fred Astaire for his first post-Ginger Rogers film, Broadway Melody of 1940. It featured Astaire and George Murphy as a dance team competing for a role in a Broadway show, and Powell as their leading lady and romantic interest. In "Begin the Beguine," the dazzling finale, Astaire met his match—not in the romantic partnership he had sought with Rogers, but in Powell's vivacious and energizing rhythmic sense.
After Broadway Melody of 1940, there was talk of re-teaming Powell and Astaire in a film version of the Broadway musical Girl Crazy, but Astaire was less than enthusiastic about the project, causing it to be shelved. Powell had begun on the stage as an independent, and she remained so for the rest of her career. In the spectacular finale of the "Fascinating Rhythm" number in Lady Be Good (1941), she reasserted her independence as a star soloist. Dressed in top hat and tails, she danced with a legion of men who framed and flipped her into dizzying aerial forward rolls toward the camera's eye.
Jeni LeGon
Born in Chicago in 1916, the youngest of five children, Jeni LeGon developed her musical talents on the streets of the city's South Side, in neighborhood tramp bands. As a child, she won a tap-dancing contest during a visit to Savannah, Georgia. At the age of thirteen she landed her first job in musical theatre, dancing as a soubrette—in pants, however, not pretty skirts—in front of the chorus line. By age sixteen she was dancing in a chorus backed by the renowned Count Basie Orchestra. Soon after, she toured the TOBA circuit with the famous Whitman Sisters, dancing in an all-female chorus that, she said, "had all the colors that our race is known for. All the pretty shading—from the darkest, to the palest of the pale. . . a rainbow of beautiful girls." After dancing specialty acts in Detroit nightclubs, she headed for Los Angeles with a children's unit, stopping the show with her flips, double spins, and knee drops. It was there that RKO discovered her talent and cast her to appear with Bill Robinson and Fats Waller in the 1935 film Hooray for Love. Dubbed the "Chocolate Princess" by the press, she so impressed MGM with her dancing that the studio signed her to a long-term contract, paying the teenager $1250 a week.
For her first film on contract with MGM, LeGon was assigned to work on Broadway Melody of 1936, the first of MGM's Melody musicals, which was to star the tap-dancing Eleanor Powell. Given the music, LeGon began rehearsals, and at a cast dinner party to promote the show, performed before Powell. The next morning, LeGon was informed that MGM executives had decided that since Powell was already cast as the star soloist, two female tap dancers were not needed for the production. The studio assigned LeGon to the London stage production of At Home Abroad, where she performed the dances of Eleanor Powell and the songs of Ethel Waters, both of whom appeared in the Broadway stage production in 1935.
On the London stage LeGon performed in C.B. Cochran's At Home Abroad, hailed as "one of the brightest spirits," the new Florence Mills, "the sepia Cinderella girl who set London agog with her clever dancing." Back in the United States and Hollywood, however, LeGon faced the indignity of being cast to play every kind of servant imaginable. One of the cruelest roles was that of Ann Miller's maid Effie in Easter Parade (1948), starring Miller and Fred Astaire, who "as stars" never spoke to her on the set.
In a conscious and misleading redirection of LeGon's contract, MGM put her behind the scenes, working as dance consultant and dance director, having her stage such numbers as "Sping" for Lena Horne in her first movie, Panama Hattie (MGM, 1942). The only time that LeGon was acknowledged as an actress was in such all-black films as Double Deal (1939), Take My Life (1942), and Hi-De-Ho (1944) with black jazzman Cab Calloway. In these films, she got the chance "to be the heroine, to get kissed."
Bill Robinson, after working with LeGon in Hooray For Love, chose not to work with her again in the new Twentieth Century-Fox movie Cafe Metropole (1937); instead he chose Geneva Sawyer, a white dancer on the Fox lot who was tap teacher to Shirley Temple, and who agreed to perform in the film in blackface make-up. "The white girl will blacken her face to dance as Bojangles' partner in the production," wrote the Amsterdam News, "and speculation is rife regarding Robinson's failure to choose a colored girl for the favored spot since there are so many capable dancers eager to share the favored spot." The reference, of course, was to Jeni LeGon, who was deemed the most gifted black female dancer of her generation.
Ann Miller
Ann Miller, the raven-haired, long-legged, sexy dancer with the machine-gun taps, was born Johnnie Lucille Collier in Chireno, Texas, in 1923. Her father was a criminal lawyer, and her mother, Clara Birdwell, a Cherokee. At age three, when the Colliers moved to Houston, Texas, she was enrolled in dancing school, partly to build up her legs, which had been affected by rickets. Ballet was not her forte, and after seeing Bill Robinson in a personal appearance in Houston at the age of eight, she set her sights on tap dance. Robinson gave her her first tap lesson, and she soon was performing in clubs and local theaters. At age nine, she moved with her mother to Los Angeles, where she enrolled in the Fanchon and Marco dance school. Calling herself Annie and adopting the stage name of Ann Miller, she performed dance routines at meetings of local civic organizations, earning $5 a night, plus tips. After watching Eleanor Powell in Broadway Melody of 1936 she turned her attention to sharpening her tap dance skills. Appearances in vaudeville theaters led to nightclub bookings and a sixteen-week engagement at the Bal Tabarin in San Francisco, where she was spotted by an RKO talent scout. He arranged a movie audition, which led to her first film, a non-speaking part in New Faces of 1937. With her vibrant personality, great legs, and dazzling style of tap dancing, she was awarded a seven-year contract by RKO when she was only thirteen (she claimed to be eighteen); the studio would later insure her legs for one million dollars.
Miller was a remarkable, self-propelled young talent. At age fourteen, she played Ginger Rogers's dancing partner in the film Stage Door (1937). A year later, she was borrowed by Columbia Pictures to appear as Essie Carmichael, the fudge-making, ballet-dancing daughter, in the Academy Award-winning You Can't Take It With You (1938), directed by Frank Capra. Back at RKO, she played Hilda in the Marx Brothers' Room Service (1938). In 1939, she made a smashing Broadway debut in George White's Scandals, creating a sensation dancing "The Mexiconga." She then signed a new seven-year contract with Columbia and starred in a succession of wartime B-rated musicals, such as True to the Army (1944), in which her solo routines were the high point. Her personal rat-a-tat, mile-a-minute tap style all but demanded that she choreograph all her own solo routines.
Her renown as "Queen of the B's" came from her musical adaptability in working with a number of big band, swing, and Latin orchestras. They included Rudy Vallee and Edwardo Durant in Time Out for Rhythm (1941); Freddie Martin and his orchestra in What's Buzzin' Cousin? (1943); the orchestras of Louis Armstrong, Duke Ellington, Alvino Rey, Charlie Barnet, Glen Gray, and Teddy Powell in Jam Session (1944); the Kay Kyser Orchestra in Carolina Blues (1944); and, in the musical Reveille With Beverly, the orchestras of Duke Ellington, Count Basie, Bob Crosby, and Freddie Slack, with singing by Frank Sinatra and the Mills Brothers.
In 1948 Miller left Columbia and was signed by MGM to star in Easter Parade (1948), playing Nadine Hale—the former dancing partner who loves Don Hewes (Fred Astaire), who loves Judy Garland's character, who is in turn loved by Peter Lawford's. She danced with dreamlike grace with Astaire in "It Only Happens When I Dance with You," even though the 5'7" dancer had to perform in ballet slippers. But it was as a soloist—as Nadine Hale starring in the film's "Ziegfeld Follies of 1912" number—that she sang "Shaking the Blues Away," delivering one of the snazziest song-and-dance turns in 1940s musical film.
Miller's lexicon of tap steps was similar to Eleanor Powell's hip-strutting, head-to-the-floor back-bending, multiple-turning mercuric moves, but Miller took a more vigorous, athletic, and speedy approach to those steps. She claimed to be able to dance at 500 taps per minute, and no one disputed it. Remembered in the popular imagination as an athletic, long-legged tap dancer with lacquered raven hair and Nefertiti eye makeup, in the tap world she is renowned for her dazzling and gutsy style of dancing, one as brassy and good-hearted as the showgirl roles she played in her films. Blending glamour and razzmatazz with speedy precision, Miller came as close to hoofing in high heels as any female dancer in the Golden Age of movie musicals.
The Class Act
The style of class act dancing perfected the art of tap dancing. From the first decades of the century, the elegant-mannered song-and-dance teams of "Johnson and Cole" and "Greenlee and Drayton" traveled across the stage, making a beautiful picture of every motion. Soloists included Maxie McCree, Aaron Palmer, and Jack Wiggins. Eddie Rector's "stage dancing" dovetailed one step into another in a seamless flow of sound and movement. "Pete, Peaches and Duke" brought unison work to a peak. By the 1940s, it was the dance team of Coles and Atkins who, by combining high-speed rhythm tapping with elegant soft-shoe dancing, brought class act dancing to its height.
Charles "Honi" Coles (1911-1992) learned to tap dance on the streets of Philadelphia, where dancers challenged each other in time-step "cutting" contests. He made his debut at the Lafayette Theatre in 1931 as one of the Three Millers, a group that performed over-the-tops, barrel turns, and wings on six-foot-high pedestals. After discovering that his partners had hired another dancer to replace him, he retreated to Philadelphia, determined to perfect his technique, and returned in 1934, confident and skilled in his ability to cram several steps into a bar of music. Performing at the Harlem Opera House and Apollo Theatre, he was reputed to have the fastest feet in show business. And at the Hoofer's Club, he was hailed as one of the most graceful dancers ever seen. After performing with the Lucky Seven Trio (they tapped on large cubes that looked like dice), he toured with the big swing bands of Count Basie and Duke Ellington, melding high-speed tapping with an elegant yet close-to-the-floor style in which the legs and feet did most of the work. In 1940, as a soloist with Cab Calloway's orchestra, Coles met Charles "Cholly" Atkins, a jazz tap dancer who would later choreograph for the best rhythm-and-blues singing groups of the 1960s. Atkins was an expert wing dancer, while Coles' specialty was precision. They combined their talents by forming the class act of Coles & Atkins. Wearing handsomely tailored suits, the duo opened with a fast-paced song-and-tap number, then moved into a precision swing dance and soft-shoe, finishing with a tap challenge in which each showcased his specialty. Their classic soft-shoe, danced to "Taking a Chance on Love" played at an extremely slow tempo, was a tossing off of smooth slides and gliding turns in crystal-cut precision. The team of Coles & Atkins epitomized the class-act dancer.
No dancer or dance team fit neatly into any one category. The Nicholas Brothers, Fayard (1914-2006) and Harold (1921-2000), created an exuberant style of American theatrical dance melding jazz rhythm with tap, acrobatics, ballet, and black vernacular dance. Though they are most often remembered for the daredevil splits, slides, and flips in their routines, their rhythmic brilliance, musicality, eloquent footwork, and full-bodied expressiveness were unsurpassed. From a young age, at the Standard Theatre in Philadelphia, where his parents led the pit orchestra, Fayard was introduced to the best tap acts in black vaudeville. He then taught young Harold basic tap steps. The "Nicholas Kids" made their New York debut at the Lafayette Theater in 1931, and one year later opened at the uptown Cotton Club. Dancing with the orchestras of Cab Calloway and Duke Ellington, they evolved a classy and swinging style of musical performance in which comic quips and eccentric dance combined with precision-timed moves and virtuosic rhythm tapping. Alternating between stage and screen throughout their career, they made their first film, the Vitaphone short Pie, Pie, Blackbird, with Eubie Blake in 1932, and their first Hollywood movie, Kid Millions, for Samuel Goldwyn in 1934. On Broadway, in Ziegfeld Follies of 1936 and Babes in Arms (1937), they worked with choreographer George Balanchine, and they starred in the London West End production of Lew Leslie's Blackbirds of 1936, in which they worked with choreographer Buddy Bradley. At the Apollo, Harlem Opera House, Palace, and Paramount theaters, the brothers danced with the big bands of Jimmy Lunceford, Chick Webb, Count Basie, and Glenn Miller.
In Hollywood, on contract with 20th Century-Fox, they tapped on suitcases in The Great American Broadcast (1941), jumped off walls into back flips and splits in Orchestra Wives (1942), and leaped over each other down a flight of stairs, landing in a split on each step, in Stormy Weather (1943), dazzling feats that were always delivered with a smooth effortlessness. The musicality of their performance and an insistent exploration of rhythm within an elegant form are the distinctive features of their style.
In the postwar forties, there was a radical transformation in American jazz dance, as the steady and danceable rhythms of swing gave way to the dissonant harmonies and frenzied rhythmic shifts of late 1940s-50s bebop. Jazz tap rhythms, previously reserved for the feet, were absorbed into the body, and a new style of "modern jazz" dance—less polyrhythmic and performed without metal taps—became popular in Hollywood and on Broadway. Dancers like the Nicholas Brothers, Condos Brothers, Jimmy Slyde, and especially Baby Laurence Jackson, were able to endure the radical musical shifts that bebop instigated with a high-speed, full-bodied, and improvisatory response to the music.
Born Laurence Donald Jackson in Baltimore, Maryland, Baby Laurence (1921-1974) was a boy soprano singing with McKinney's Cotton Pickers when the bandleader Don Redman discovered him and brought him on a tour of the Loew's circuit. On his first trip to New York, he visited the Hoofer's Club, saw the tap dancing of Honi Coles, Raymond Winfield, and Harold Mablin, and decided he wanted to be a tap dancer. Dickie Wells, who had retired from the group Wells, Mordecai and Taylor, encouraged his dancing and nicknamed him "Baby." He continued to frequent the Hoofer's Club, absorbing ideas and picking up steps from Eddie Rector, Pete Nugent, Toots Davis, Jack Wiggins, and Teddy Hale, who became his chief dancing rival. Through the forties, he danced as a soloist with the big bands of Duke Ellington, Count Basie, and Woody Herman, and in the fifties made the transition to dancing in small Harlem nightclubs. Listening to such musicians as Charlie Parker, Art Tatum, Dizzy Gillespie, Bud Powell, and Max Roach, Laurence duplicated in his feet what these musicians played, and thereby developed a way of improvising solo lines and variations as much like a hornman as a percussionist. More a drummer than a dancer, he did little with the top half of his torso, while his legs and feet were speed and thunder, a succession of explosions, machine-gun rattles, and jarring thumps. "In the consistency and fluidity of his beat, the bending melodic lines of his phrasing, and his overall instrumentalized conception, Baby is a jazz musician," wrote Nat Hentoff in the liner notes for Baby Laurence/Dance Master, a 1959 recording of Laurence's rhythmic virtuosity that demonstrates the inextricable tie between jazz music and dance.
1950s: Tap in Decline
By the 1950s, tap was in a sharp decline, due to a number of causes, among them the demise of vaudeville and the variety act; the devaluing of tap dance on film; the shift toward ballet and modern dance on the Broadway stage; the imposition of a federal tax on dance floors that closed ballrooms and eclipsed the big bands; and the advent of the jazz combo and the desire of musicians to play in a more intimate and concertized format. "Tap didn't die," says Howard "Sandman" Sims. "It was just neglected." The neglect was so thorough that this indigenous American dance form was almost lost, except for television reruns of Hollywood musicals. Through the early sixties, performance venues for jazz tap dancers reached their lowest ebb in America, and many dancers found themselves out of work. Charles "Honi" Coles, in what he called "the lull," when there was no call for dancers, took a job as the production stage manager at the Apollo Theater. Other hoofers took jobs as bellhops, elevator men, bartenders, and carpenters. Television had come into almost every American home by this time, but the regular weekly variety shows had given way to the more infrequent "television special." Except for those specials, with an occasional performance by Ray Bolger or John Bubbles, little or no tap dance was to be seen.
1960s and 1970s: A Slow Awakening
The one event that revived tap dancing took place on July 6, 1963, when Marshall Stearns, at the Newport Jazz Festival, presented Honi Coles, Chuck Green, Charles "Cookie" Cook, Ernest "Brownie" Brown, Pete Nugent, Cholly Atkins, and Baby Laurence in a show entitled Old Time Hoofers. These "seven virtuoso tap dancers of the old-fashioned pounding school of hoofing who drew their strength from the floor reminded an enthusiastic audience at the Newport Jazz Festival of what this much neglected American ethnic art form of exciting rhythm has to offer," wrote Leticia Jay in Dance Magazine. This old guard of black jazz tap dancers from the thirties and forties began to come back strong, eager to show that the tradition of rhythm dancing had not lost its fire. The Bell Telephone Hour's "The Song and Dance Man," broadcast on NBC-TV (January 16, 1966), presented a mini-musical history of tap dance in America and saw the Nicholas Brothers and Donald O'Connor demonstrating a tap challenge. The performance was less a challenge dance than a brilliant demonstration of signature Nicholas jazz tap combinations, which O'Connor was able to absorb and perform as a third member of the team.
Beginning on April 7, 1969, Leticia Jay presented her Tap Happenings at the Bert Wheeler Theatre at the Hotel Dixie, on West 43rd Street off Times Square in New York. And there, for several successive Monday evenings, such out-of-work and underemployed hoofers as Lon Chaney, Honi Coles, Harold Cromer, Bert Gibson, the Hillman Brothers, Raymond Kaalund, Baby Laurence, Ray Malone, Sandman Sims, Jimmy Slyde, Tony White, Rhythm Red, Derby Wilson, and Chuck Green participated in "jam sessions" of traditional tap dancing. Tap Happenings later reopened as The Hoofers at the Mercury Theatre off-Broadway, where it played for two months and became the toast of the dance world. After the new production of The Hoofers and the 1970 Broadway revival of the 1925 musical No, No, Nanette (choreographed by the seventy-five-year-old Busby Berkeley and starring the sixty-year-old Ruby Keeler), a nostalgic interest in tap dance developed, and all New York dancers wanted to learn it, giving the veteran hoofers the chance to pass on what they knew to a new generation of dancers.
By the mid-1970s, young dancers, many of them white women, began to seek out elder tap masters to teach them. Tap dance, which had previously been ignored as art and dismissed as popular entertainment, now made one of the biggest shifts of its long history and moved to the concert stage. As tap historian Sally Sommers describes: "The African American aesthetic fit the postmodern dance taste: it was a minimalist art that fused musician and dancer; it celebrated pedestrian movement and improvisation; its art seemed casual and democratic; and tap could be performed in any venue, from the street to the stage." Enthusiastic critical and public response placed tap firmly within the larger context of dance as art, fueling the flames of its renaissance.
The 1970s produced the video documentaries Jazz Hoofer: The Legendary Baby Laurence, Great Feats of Feet, and No Maps on My Taps. One of the best moments of the decade came in its last days, with the Brooklyn Academy of Music's Steps in Time: A Tap Dance Festival in late December 1979, in which veteran tap dancers, joined by a few of their present-day heirs, took the stage to display their collective prowess. The four-hour program included an hour-long musical set by Dizzy Gillespie and his band, performances by members of the Copasetics, and the Nicholas Brothers, who closed the show with their own dazzling blend of ballet, jazz, and acrobatic dancing.
Modern Women of the Tap Resurgence
In the 1970s, women—many of them white, college-educated modern dancers—sought out as teachers, and forged professional relationships with, black male hoofers of the rhythm tap tradition. Thus, they became the activators of the tap resurgence. Born in the late 1940s and the 1950s, these women had come out of the social and political consciousness of the 1960s Black Power Movement, the Anti-War Student Movement, and the Women's Movement—all of which had emboldened them to speak out against racism and segregation, war and violence, and the oppression of blacks and women, and to speak for saving the Earth and the arts, for saving the best of human expression. Many of those women had come from the tradition of modern dance, which had its roots as an early twentieth-century feminist art form in that it challenged Western classical ballet's standards of beauty and deportment to champion the athleticism and form of the female body, with its new-found freedom to move. No doubt, that attitude buttressed them in exercising their own freedom of choice, even in engaging in an interracial exploration of rhythm dancing that seemed exciting and somewhat dangerous.
Brenda Bufalino
Born in 1937 in Swampscott, Massachusetts, of English, American Indian, Scottish, and Italian descent, Brenda Bufalino received her first lessons in tap dance at Professor O'Brien's Normal School of Dancing in Lynn, Massachusetts. At age seven she became a member of The Strickland Sisters, her mother and aunt singing while she danced Dutch medleys (in wooden shoes), Spanish medleys (in tap shoes), and Hawaiian medleys. At age eleven she was enrolled in Alice Duffy's School of Dance in Salem, Massachusetts, where she learned dances with jump ropes, top hats, canes, and suitcases, and solos on pedestals and suitcases. In 1950, the day after her thirteenth birthday, she began commuting to Boston to study under the renowned dance teacher Stanley Brown, a black West Indian who had made a successful career in vaudeville and worked with John Sublett Bubbles. Brown became her first rhythm tap teacher. In 1955, at age eighteen, Bufalino moved to New York City to further her dance studies and found her way to Dance Craft, a new studio on 52nd Street directed by Charles Honi Coles and Pete Nugent. Coles was forty-three years old, but his speed and stylish arm and leg work were unsurpassed in rhythm tap dancing.
She also studied modern jazz dance with Jack Cole dancers Matt Mattox and Bob Hamilton; Afro-Cuban and modern-primitive dance with Sevilla Forte, Talley Beatty, and Walter Nix of Katherine Dunham's company; and calypso with the Afro-Cuban dancer Chino. She found jobs singing and dancing at The Calypso Room, African Room, and Café Society in New York.
By 1960, these venues were closing and tap was declining in popularity. After marrying in 1959 and giving birth to two sons, she spent most of the 1960s writing plays and poetry. When Bufalino returned to tap dancing in the early 1970s, she integrated it into avant-garde performance art. At New York's South Street Seaport in 1973, Bufalino "broadcast" the sound of her tap shoes through a synthesizer, becoming one of the first to experiment with modulating and reverberating the taps electronically. By 1974, she had established The Dancing Theater in La Grangeville, in upstate New York, where she taught a blend of Afro-Cuban, modern, and jazz dance. Around this time she reconnected with her teacher Honi Coles and began to bring some of the Copasetics to New Paltz for their earliest lecture-demonstrations, which she made sure to videotape. This culminated in the 1977 documentary Great Feats of Feet. Subtitled "A Portrait of the Jazz and Tap Dancer," this two-hour intimate portrait was the first of its kind to memorialize the achievements of rhythm dancers who had performed in the golden age of tap during the 1930s and 1940s.
In 1978, Bufalino presented Singing, Swinging, and Winging at the Pilgrim Theatre on the Bowery; this first major showing of her tap choreography in New York featured three members of her Dancing Theatre Company, a jazz trio, and Charles Honi Coles as guest artist. Bufalino continued to work with Coles. In 1979 they collaborated on the tap choreography for The Morton Gould Tap Concerto, performed with the Brooklyn Academy Philharmonic Orchestra. She would continue to forge deeply creative ties with Coles for the next fifteen years while building her own career as a tap soloist, performance artist, and choreographer.
Bufalino shot through the decade of the eighties like a comet, igniting dancers who flocked to her classes, conceiving new structures of choreography, and forming new tap companies while working as a jazz-tap soloist. She also turned her attention to re-envisioning the tap chorus as a tap-dancing orchestra—an ensemble dressed in black ties and tails, placed onstage like a symphony, only dancing, and founded the American Tap Dance Orchestra (ATDO), which had its first major booking on July 4, 1986, at the Statue of Liberty Festival in Battery Park, New York City.
Jane Goldberg
Born in Washington, D.C., in 1948, Jane Goldberg was a twenty-five-year-old modern dancer and social activist, living in Boston and writing about dance, when she found her way to the studio of rhythm tap master Stanley Brown and realized that she could liberate the ground upon which she stood with her feet. In 1974, she moved to New York City and was soon taking private tap lessons from various members of the Copasetics, including Charles Honi Coles, Chuck Green, Howard Sandman Sims, Charles "Cookie" Cook, Bert Gibson, Leon Collins, and Leslie "Bubba" Gaines. She also wrote about tap dance, publishing her first interview with Paul Draper, "It's All in the Feet" in Boston's Patriot Ledger (24 April 1974).
Teaching the rhythm tap tradition also became part of Goldberg's charge, as she knew that for the form to survive, it needed to be passed on by the masters. In 1977, she and Cook applied for and received a National Endowment for the Arts (NEA) Fellowship in Choreography to produce a lecture-demonstration. It was an interracial and intergenerational mix of dancers that included, along with Cook and Goldberg, rhythm-tap veterans Jazz Richardson and Bert Gibson, as well as Goldberg's student Andrea Levine. It's About Time (24-26 February 1978) began as an informal downtown event but turned out to be a packed-to-the-walls, sold-out show attended by jazz critics, downtown dancers, musicians, and visual and performance artists, and it garnered a preview listing and a dance review in the New York Times, in which Jennifer Dunning urged the public to see the show: "Break down the doors if you have to."
With the critical success of It's About Time, Goldberg and company were next invited to perform at the American Dance Festival (1978) as part of its Archival Project. That show led, that same year, to a performance at the Jacob's Pillow Dance Festival, the first concert of tap dance there since Paul Draper's in 1941. Dance Theatre Workshop next presented Goldberg and company at its American Theatre Laboratory in New York City in Shoot Me While I'm Happy (1979), with Goldberg and Cook, Ernest Brown, Leroy Meyers, Phace Roberts, Honi Coles, Louis Simms, Bubba Gaines, and Marion Coles. This production marked the formal founding of Goldberg's Changing Times Tap Dance Company, dedicated to preserving, promoting, and creating new tap performances with a mix of dancers young and old, black and white, male and female.
In 1980, the Changing Times Tap Dance Company organized By Word of Foot, the first week-long tap festival, billed as "a rare gathering of tap's leading dancers to pass on their tradition." Seventeen of America's foremost innovators of jazz tap dancing gathered to talk about the tradition and teach their own evolved styles to dancers from throughout the country. While some of the shows Goldberg produced were based loosely on campy plots, the plots were but a thin veil over the serious issues still surrounding tap dance in the 1970s and 1980s. These included The Depression's Back, and So Is Tap (1983) and The Tapping Talk Show (1984).
In the last two days of 1979, the crowning tap performance of the decade occurred with Steps in Time: A Tap Dance Festival, a nearly four-hour program which featured members of the Copasetics (Honi Coles, Leslie Gaines, Charles Cook, and Buster Brown), Leon Collins, Sandman Sims, the Nicholas Brothers, Chuck Green, and a brief appearance by Jane Goldberg. Barry Laine of the New York Times wrote that Goldberg's "own style respects and preserves the past, yet she makes use of her whole body, curling her arms and swaying her torso. Given her modern dance background, tap with her was bound to be different from what it was." Laine added, "While the hoofers are mostly older black males, today's crop of new tappers seem to be mostly young, white women. Many are taking tap in new directions."
Lynn Dally & Jazz Tap Ensemble
On the West Coast, another modern dancer was taking tap dance in new directions. Lynn Dally, born in 1941 in Columbus, Ohio, was the daughter of Jimmy Rawlins and Hazel Capretta Rawlins, who ran the local Rawlins Dance Studio. Rawlins was her first tap dance teacher: "I had a beautiful training as a kid because my father was a very good tap dancer. The sound quality of his tap dancing was excellent. And in our lessons, we got to close our eyes, listen to the taps, and try to recreate what we heard. We were always dealing with rhythm."
In 1973, after graduating from Ohio State University as a modern dance major, performing abroad, teaching at Smith College in Massachusetts, and teaching modern dance and improvisation at Ohio State University, Dally moved to San Francisco, and in 1974 she formed her first all-woman company, Lynn Dally & Dancers. The company performed in New York City at the American Theatre Laboratory in August of 1979. When Dally returned to New York in December of that year to perform at the American Theater Lab, it was with a new company, the Jazz Tap Percussion Ensemble, later the Jazz Tap Ensemble. The newly organized West Coast collective was venturing into fairly uncommon territory: a simultaneous exploration of the jazz music and modern dance traditions in a new approach to tap dance. The musicians were Paul Arslanian, Tom Dannenberg, and Keith Terry; the dancers were Dally; Camden Richman, a modern and jazz tap dancer who had studied with Charles "Honi" Coles and Eddie Brown; and Fred Strickler, who had studied modern dance at Ohio State University and had formed his own modern dance company. The Ensemble established itself in the seemingly disparate worlds of modern dance and tap, Dally's interests in improvisation and in concepts of structure and form in dance composition joining with Strickler's interests not only in jazz but also in Mozart and other Western classical music.
In January 1979, the Ensemble presented Riffs, a concert of dance at the Pacific Motion Dance Studio in Venice, California. It was then that the core features and focus of their percussive collective were conceptualized: making pieces built on musical structures and unusual time signatures; tap dances performed without music; pieces that rejected the roles of dancer and accompanist and instead featured dancer and musician in interplay and on equal footing; and compositions that tested the boundaries of the form for the concert stage. Of a concert in April 1979 at the University of California, Berkeley, jazz critic Derk Richardson wrote in Down Beat magazine: "Tap dancing as jazz percussion, a tradition that had been carried from the heel-dropping rhythms of John Bubbles through the bebop of Baby Laurence, may not have been a new idea, but . . . the Jazz Tap Percussion Ensemble demonstrated the vitality and the potential for growth of that nearly lost American art form."
In its first four years of existence, JTE grew from small studio performances to sold-out houses in such far-flung places as Honolulu, Hawaii, and Paris, France, with enthusiastic responses to its work. The group toured the country, was awarded grants from the California Arts Council and the National Endowment for the Arts, and in April of 1982 was invited to perform in a tribute to Honi Coles at the Smithsonian. In 1984, Richman and the group's three jazz musicians gave notice, owing to the company's intense touring schedule, leaving Dally and Strickler to re-form the Ensemble (Linda Sohl-Donnell was hired to replace Richman, followed later by Heather Cornell and Terry Brock). Sam Weber replaced Strickler, who left the company in 1987, leaving Dally as JTE's sole director and prime choreographer.
Dianne Walker
White "liberated" females were not the only dancers freshly attracted to black rhythm tap dancing in the 1970s. Born in Boston, Massachusetts, in 1951, Dianne Walker studied tap from the age of seven with Mildred Kennedy (Bradic), who ran the esteemed Kennedy Dancing School in Boston, but her tap renaissance came in 1978. She was a twenty-seven-year-old mother of two, living in the Jamaica Plain section of Boston and working as a staff psychologist at Boston City Hospital, when she met the black vaudeville tap dancer Willie Spencer at a social affair. He sent her the very next day to the studio of Leon Collins, the master bebop dancer who inspired a new blend of jazz tap and classical music. She walked into the studio to see a little man sitting at his desk, adjusting his taps with a screwdriver. "Hi dumplin'," said Collins. "I've been waiting for you. Willie called and told me you wanted to learn to tap dance."
Collins began his teaching with Routine #1, progressing to Routines 2, 3, and 4, which together constituted the core of his curriculum. Eager, talented, and mature, Walker soon found herself teaching tap to Collins' Saturday children's class, thus becoming his protégé, and in 1982 a member of Collins & Company. For young blacks in Boston in the 1980s, steeped in the new rap and hip-hop, jazz tap held little inspiration. Still, Walker managed to impress such young dancers as Derick Grant, whom she cast in a promotional tour of the documentary film No Maps On My Taps, and Dormeshia Sumbry, whom Walker took to the Tip Tap Festival in Rome, Italy. There they met "Tap Dance Kid" Savion Glover, whom Walker took under her wing.
Though she pursued a career as a soloist, dancing Collins' classic work, Flight of the Bumblebee, and performing in the Paris and Broadway productions of Black and Blue, teaching and mentoring remained Walker's central passion. Through the body of Collins' work, she has evolved her own more feminized, sensuous translation of that style that opens up the body to a more expansive rhythmic experience. Her teaching is based on "how to get more in the body" if you don't hop; attention to detail; recognizing the little things; and her core mantra, "simplicity, simplicity, simplicity."
Walker went on to become ubiquitous in the tap community, committed less to making "art" than to making social connections with the young generation of dancers ignited by the resurgence of (black) rhythm tap. She is considered the transitional figure between the young generation of female dancers—Dormeshia Sumbry Edwards, Idella Reed, Michelle Dorrance, Ali Bradley—and the "forgotten black mothers of tap," such as Edith "Baby" Edwards, Jeni LeGon, Lois Miller, and Florence Covan. Walker is the holder and bequeather of the classical black rhythm tap canon, making sure it continues to flourish.
1980s: The Renaissance
In the eighties, a renaissance of interest in tap dancing began to grow and spread. "It's satisfying to know that tap didn't die," remarked James "Buster" Brown in George Nirenberg's film No Maps on My Taps (1980), which documented the hoofers who helped keep tap alive through its lean years. Michael Blackwood's documentary film Tapdancin' (1980) followed the performances of veteran dancers such as the Nicholas Brothers, who built their routines to irresistible climaxes meant to arouse high responses from the audience. 1981 saw the Broadway opening of Sophisticated Ladies, a musical homage to Duke Ellington that starred Gregory Hines. In 1982, the new tap musical Tappin' Uptown opened at the Brooklyn Academy of Music, starring Honi Coles. With the proliferation of tap festivals across the country, films such as White Nights (1985), The Cotton Club (1984), and Tap (1989), and the Broadway productions of The Tap Dance Kid (1983) and Black and Blue (1989), everyone proclaimed that tap was back. On television, the PBS production of Tap Dance in America, hosted by Gregory Hines and featuring tap masters and the young virtuoso Savion Glover, bridged the gap between tap dance and mainstream entertainment.
Savion Glover
Savion Glover, virtuoso rhythm tap dancer, choreographer, director, and actor who revitalized and re-rhythmatized tap dancing for the millennium generation, was born November 19, 1973 in Newark, New Jersey. His father, Willie Mitchell, was a carpenter; his mother, Yvette Glover, was a gospel and jazz singer who raised him. Since his Broadway debut at the age of nine as the title character in The Tap Dance Kid, Glover has been considered the artistic grandson of the most revered figures in jazz tap dance—Jimmy Slyde, James Buster Brown, Honi Coles, Arthur Duncan, Chuck Green, Harold Nicholas, Lon Chaney, Bunny Briggs—and heir to the generation of dancers led by Gregory Hines and his brother, Maurice. As a child, and then as a teenager, Glover took his place beside them in such Broadway productions as Black and Blue (1989) and Jelly's Last Jam (1991), and in the 1989 film Tap, in which he played opposite Gregory Hines and Sammy Davis, Jr. On television, Glover appeared in the PBS Dance in America special, Tap Dance in America (1989) with Hines and Tommy Tune, and then became a regular on Sesame Street as the tap-dancing cowboy.
Trained as a drummer, Glover thinks of his tap shoe as a drum—the inside toe of the metal tap is the hi-hat, the outside toe of the tap is the snare, the inside ball of the foot is the top tom-tom, the outside rim of the foot is the cymbals, his left heel is the bass drum, and the right heel the floor tom-tom. He regards himself as a hoofer who, unlike a classic tap dancer, uses the whole foot to elicit music, including the inside and outside, the arch and the ball, rather than just the heel and the toe. "We as hoofers are like musicians, more into rhythms," says Glover. "It's not about sensationalism. It's no arms or anything like that. Everything is just natural."
In 1991, when Glover took on his first tap choreography project commissioned by Jeremy Alliger's Dance Umbrella in Boston, it was not to create a number to classical jazz tunes like "A Train," "Cute," or "Perdido." Instead, he used "Birdland," a number from Quincy Jones' album Back on the Block. "It's nothing like you've ever seen before," said Glover about the work. "I had people playing basketball, leaping, running. It's a mixture of things, but it's mostly tap." Utilizing seventeen dancers all under the age of sixteen, Glover found new sounds by recycling old steps and letting younger people make up new rhythms, thereby paving a new direction in tap for the younger generation.
Sole Sisters
In 1986, La Mama presented Sole Sisters, an all-woman, multi-generational tap dance show directed by Constance Valis Hill that brought together high-heeled steppers and low-heeled hoofers, the veteran grande dames of tap and younger prima taperinas. The show, conceived by and starring Jane Goldberg, included veterans Josephine McNamara, Miriam Ali-Greaves, Marion Coles, Harriet Browne and Frances Nealy, and younger dancers Brenda Bufalino, Sarah Safford and Dorothy Wasserman. Sole Sisters was not the only production to open the door for the recognition of female jazz tap dancers. On the West Coast, Lynn Dally, who founded the Jazz Tap Ensemble in 1979, combined her extensive experience in modern dance with jazz tap to organize a group of dancers that insisted on performing and interacting with a live jazz ensemble. On the East Coast, singer, jazz and tap dancer Brenda Bufalino, formerly a partner of Honi Coles, founded the American Tap Dance Orchestra and set about experimenting with how to layer and orchestrate rhythmic groups of dancers on the concert stage. Both Dally and Bufalino were hailed as leaders not only in the renaissance of jazz tap dance but also in concertizing jazz tap, infusing it with the upper-body shapes of jazz dance and new spatial forms from modern dance.
1990s: Contemporary Afro-Irish Traditions
The decade of the nineties saw the resurgence of percussive dance forms that grew out of tap dance's Afro-Irish cultural and musical traditions.
Stepping is a percussive dance form in which African-American youngsters in military lines run through routines in rapid-fire movements, slapping their hands on their hips, stomach and legs, crossing and re-crossing their arms to the hip-hop beat and gospel music. Often they chant praises to the Lord as they step, imbuing their performance with an air of spirituality. Stepping dated back to the early twentieth century, when black veterans of World War I who enrolled in colleges wanted to express their blackness through a communal art form of their own. Inspired by their military training, they brought to their dances a highly rigorous, drill-like component and combined it with elements from other black vernacular dances. Today's step dance or drill teams add hip-hop movements to their combinations. Because African-American stepping, like jazz tap, relies on improvisation, call and response, complex meters, propulsive rhythms, and percussive attack, it quickly took off in black fraternities, becoming an integral part of initiation, with students holding fierce contests to demonstrate their originality. Spike Lee's 1988 film School Daze brought Stepping to a wider audience.
Though Stepping would certainly not be confused with the style of step dancing performed by the Trinity Dance Company, which sprang from a school that won step-dancing competitions in Dublin, it shares elements of clean rhythmic precision, speed, and the keen sense of competition. Though the company stages its challenges in an air of competition, its movement is considered progressive Irish dance, and liberties, such as the semaphoring arm movements and dazzling knee-to-toe action, have been taken with the original form of Irish step dance.
Trinity Dance Company is not the only company to revive, transform and concertize the traditional Irish step dance forms. The most creative departure from tradition was achieved by dancer/choreographer Sean Curran. A postmodern dancer and choreographer with a background in step dancing, he was also a principal dancer with the Bill T. Jones/Arnie Zane company. Curran's dance works, such as Curran Event (2000), have co-opted related rhythmic forms, such as body percussion, to create patterns intricate enough to keep the eye alert and the pulse throbbing.
In the 1990s, two musicals were sterling representations of the evolution of the Afro and Irish music and dance traditions—Riverdance on Broadway and Bring in 'da Noise, Bring in 'da Funk. With Riverdance, which moved to Broadway in 1996, traditional Irish dancing was virtually transformed overnight, liberated, and seen around the world. Since the sixties, Ireland had enjoyed a renaissance of Irish traditional music brought to the world by the Chieftains, U2, Van Morrison, Enya, and Sinead O'Connor. Riverdance showcased dozens of talented Irish dancers, along with dancers from Britain and America, dazzling world champions and principal dancers who had been perfecting their craft since virtual infancy, going to Irish dancing classes, entering competitions, and bringing home medals and cups. The main Irish dance numbers in Riverdance were choreographed by Michael Flatley (who went on to create Lord of the Dance), who unabashedly mixed traditional Irish step dance with the sensuous flow of flamenco rhythms. Still, the pure essentials of Irish dancing—the frankness of the frontal presentation, calm neutrality of the torso, arms, and pelvis, footwork as keen as a flickering flame, the blithe verticality of the body—glorified a centuries-old Irish dance tradition.
Also in 1996, Savion Glover had the opportunity to mine the riches of jazz tap and ground its history in the heart of African American identity when he choreographed and starred in Bring in 'da Noise, Bring in 'da Funk. Subtitled "A Tap/Rap Discourse on the Staying Power of the Beat," the show, conceived and directed by George C. Wolfe, with lyrics by Reg E. Gaines, opened at the Public Theatre in New York and subsequently moved to Broadway, where it won the Tony Award for Best Choreography in a Musical. Noise/Funk, wrote New York Times critic Ben Brantley, was "not just the collective history of a race but the diverse and specific forms of expression that one tradition embraces." Critics commented that Glover's feet in the show spoke hip-hop, and that he was the first young tapper of his generation to yet again reawaken the art form. The show brought the history of rhythm in America up-to-date and, in the process, made tap dance cool again.
In the 1990s, tap dance continued to thrive and evolve as a unique American percussive expression. When tap dance artists were asked what was new in the technology, technique, translation, or theatre of tap in the nineties, their responses ranged from amplification, concertization, and layered rhythms to verbal embellishment, instrumentation, exotic rhythms, political raps, modernist shapes, and newly explored space. Incorporating new technologies for amplifying sounds and embellishing rhythms, the new generation of tap artists in the nineties was not only continuing tap's heritage but also forging new styles for the future.
The Millennium
In the first decade of the twenty-first century, tap dance was regarded as a national treasure, a veritable American vernacular dance form. It was celebrated annually on National Tap Dance Day—May 25, the birthday of Bill "Bojangles" Robinson (1878-1949)—in big cities and small towns in every state. Tap festivals, from three days to two weeks in length, were held every month of the year in more than twenty-five U.S. cities. There were also hundreds of tap classes, workshops, and festivals on all six inhabited continents. In Cuba in 2001, for example, Max Pollak established that country's first tap festival and performed with an all-star ensemble made up of Cuba's finest jazz musicians led by Chucho Valdes.
Tap dancers as performance artists were also acknowledged in all forms of the media. Savion Glover received a lengthy review by Joan Acocella in The New Yorker for his show, Improvography, at New York's Joyce Theatre (16 December 2000), with a full-page photograph taken by fashion and fine arts photographer Richard Avedon. Glover also appeared on the cover of Dance Magazine (May 2004), as did Jared Grimes (June 2007) and Michelle Dorrance (May 2008). Melinda Sullivan made the cover of Dance Spirit (May/June of 2003), as did Ayodele Casel (May/June 2006) and Jason Samuels Smith (May/June 2008); and Gregory Hines with Michela Marino Lerman made the cover of Dance Teacher (February 2002).
In advertising, the entire Edwards family—Omar, his wife Dormeshia, and their two children—became the poster-family for Capezio tap shoes; Jumaane Taylor wore Brenda Bufalino's Tap Shoe for Leo's Dancewear; and Jason Samuels Smith became the corporate spokesperson for Bloch dancewear, engaged in a team effort to develop a new tap shoe offering quality and affordable options for professional tap dancers.
On television, Marvin, the Tap-Dancing Horse (PBS-TV) brought down the house in his big Broadway-style production number; Savion Glover and company performed on Dancing With the Stars (2007, ABC-TV); and for the short-lived Secret Talents of the Stars (2008, CBS-TV), Jason Samuels Smith choreographed a production number for rhythm-and-blues singer Mya Harrison (whose secret desire was to be a tap dancer), using fifteen hot young tap dancers. At Radio City Music Hall, the precision tap dancing of the Rockettes continued in show-stopping numbers—five shows a day, seven days a week. On Broadway, the chorus kids in choreographer Randy Skinner's Broadway revival of 42nd Street (2001), the tap-dancing flappers in Thoroughly Modern Millie (2002), and the show-stopping soft-shoe dancers in Jerry Mitchell's Hairspray (2002) certified, as did the City Center Encores! production of No, No, Nanette (2008), that "tap is the language of love."
On film, the romantic hero in the Warner Brothers Academy Award-winning animated musical Happy Feet (2006) was an unstoppably cheerful penguin named Mumble, who could not sing but could dance, tap dance—and that he did brilliantly. Slapping his webbed feet on the icy Antarctic terrain, his body upright and flippers hanging rigidly out at his side (almost passing for an Irish step dancer), his feet making dazzling ornamental flourishes, Mumble the penguin was an exact spin-off of Savion Glover—because Glover was Mumble: by computer, he provided Mumble's dancing moves. The film's director, lead screenwriter, and producer, George Miller, explained how the entire film hinged on persuading Glover to don a motion-capture body suit to become the tapping feet of "our tap-dancing-fool-hero."
Tap International
Fusion, the union or blending together of unlikely elements to form a whole, might be the term that best describes the musical and cultural mix in tap dance that resulted from an explosion of global cultural consciousness in the first decade of the new century. Max Pollak combined taps with Afro-Cuban rhythms and body percussion for his company, Rhumba Tap; Tamango blended tap and Afro-Brazilian rhythms for his company, Urban Tap; and Roxanne "Butterfly" Semadini melded tap with flamenco and rhythms from North Africa, not far from her ancestral roots, in her tap work, Dejallah Groove. While these fusion works derive from a relatively simple equation, more intricate multi-stranded weavings have made the term fusion in the millennium relatively obsolete.
Tapage's Morango…Almost a Tango
What do we call the multiple cultural intertwinings of Tapage, the dance company of Olivia Rosenkrantz and Mari Fujibayashi? Born in Briey (Lorraine), France, of French-German descent, Rosenkrantz moved to New York City in 1988 and in 1991 joined Brenda Bufalino's American Tap Dance Orchestra. Her five years of experience with Ka-Tap, a North Indian music and dance ensemble that blended jazz with tap, and in which she performed with the Indian tabla master Samir Chatterjee, nourished her choreographic interest in crossing rhythm tap with the music and dance styles of other cultures. Fujibayashi was born in Kyoto, Japan, and was the first dance artist to be awarded a grant from the Japanese government for artistic studies abroad. In New York, like Rosenkrantz, she danced with the American Tap Dance Orchestra and with Manhattan Tap. Tapage was founded to create a unique voice and choreographic approach to tap, incorporating dramatic intensity and rhythmic complexity with contemporary gesture, as demonstrated in Morango...Almost a Tango, performed at the New York City Tap Festival's All-Stars/Tap Internationals program in 2005.
Herbin Van Cayseele (Tamango) presented Bay Mo Dilo with his company Urban Tap at New York's Joyce Theater in 2007. Born in rural Cayenne, French Guiana, and raised in Paris, where he studied tap dance at the American Center with Sarah Petronio, he moved to New York in 1988 and was soon participating in tap jams hosted by Jimmy Slyde at La Cave (a jazz club on 62nd Street and First Avenue). There, he changed his name to Tamango to reflect his African roots. Bay Mo Dilo is a visualization of those ancestral roots, realized not through a fusion of elements but by an incorporation of separate elements that reference and cross-reference one another. The set was by "Naj" Jean de Boysson—huge projected videos of a moon seen through dark foliage, raindrops falling on a banana leaf, and similarly luscious tropical visions, along with street scenes. It created a Caribbean atmosphere, within which were musicians Eric Danquin and Daniel Soulos (from Guadeloupe), and "Bonga" Gaston Jean-Baptiste (from Haiti); Vado Diomande offered the acrobatic stilt dancing associated with West African ritual; actor/dancer Jean-Claude Bardu played the amiable fellow-about-town, limbs akimbo; and Haitian dancer Belinda Becker (as Oshun, the Yoruba spirit goddess, or orisha) performed the fluid spinal undulations and swinging arms of the west African-styled mangiani. Tamango appeared wearing a jingling belt over a pair of black pants so thickly fringed with shreds of fabric that they suggested the costumes of featured dancers in certain West African rituals. His bells resembled the percussive embellishments (gold and bronze ornaments, fetishes, jewelry) worn by some tribes in Senegambia, Sierra Leone, the Gold Coast, the Bights of Benin, and Biafra—they created, as he danced, body music.
Yet onstage in heavy-booted tap shoes (wired to amplify sound), his trail of flat-footed paddle-and-rolls sealed his pedigree in the old rhythm-tap tradition of African-American hoofers Ralph Brown, Lon Chaney, and Jimmy Slyde.
India Jazz Suite was the collaboration of the sixty-two-year-old foremost East Indian kathak guru Chitresh Das and the twenty-six-year-old rhythm-tap dancer Jason Samuels Smith. It was not the first to explore the affinities between tap dance and kathak; Ka-Tap, directed, choreographed, and performed by kathak dancers Janaki Patrik and Anup Kumar Das, and tap dancers Neil Applebaum and Olivia Rosenkrantz, was performed at New York's Symphony Space in 1998. Nor did it seek to fuse East and West, but instead to present a conversation between the two forms that was interactive, in which each was circumspect about maintaining its uniqueness. What the audience got to watch was a thinning of boundaries between two generations. The two men met in 2004 at the American Dance Festival in North Carolina. Smith was drawn to Das's charisma, intrigued by the intricate patterns the kathak artist could weave with bare feet and five pounds of bells on his ankles; Das, who had always dreamed of working with Gregory Hines, was intrigued by Smith's intense American energy and improvisational skills. Their collaboration, in which each would perform with his own native musicians, showcasing each form conscientiously while highlighting its likeness to the other, was based on one shared quality: footwork. Across cultures came two sets of musicians as well: North Indian classical music was represented by the foursome of Ramesh Mishra (sarangi), Abhijit Banerjee (tabla), Swapnamoy Banerjee (sarod), and Debashish Sarkar (vocals); American jazz was represented by musicians Channing Cook Holmes (drums) and Theo Hill (piano). In performance, Das's barefooted ghungru-enhanced sounds matched those of Smith's shoe-clad feet, tap for tap, rasp for rasp. What was conceived as a friendly dance conversation was sometimes pleasant banter and at other times fierce competition, in which each emerged as winner in his own way.
Women in Tap Conference
In 2008, at the historic Women in Tap Conference at the University of California, Los Angeles (UCLA), four generations of female tap dancers celebrated their contributions to the historically male-dominated field. The central aim of the conference, organized by Lynn Dally, director of the Jazz Tap Ensemble and professor in the department of World Arts and Cultures at UCLA, was to unearth the stories and contributions of women in tap. There were keynote addresses, historical overviews, and panel discussions on issues challenging women in tap. The corporeal proof of the conference was its Saturday night concert, made up primarily of solos.
Miriam Nelson, the eighty-nine-year-old tap dancer and Hollywood film and television choreographer, provided a sweet souvenir of pre-jazz-tap song-and-dance showmanship to "Fascinatin' Rhythm." The much younger Terry Brock aimed for a perky profile to "Lady Be Good," conjuring Eleanor Powell, an icon of the 1930s. Deborah Mitchell's dance to "Sunny Side of the Street" was a moving, brilliantly rich, unpredictable, and disarmingly slinky tribute to her mentor, Leslie "Bubba" Gaines. Barbara Duffy turned to the past, dancing to "Soldier's Hymn," her quiet, unhurried meditation on a rumbling rumba. Then Heather Cornell forged a new direction in her solo career by playfully finding new sounds in her leather-soled shoes (sans metal taps) in interplay with pianist Doug Walter. Lynn Dally turned bluesy and bittersweet to "You Gotta Move," and Linda Sohl broke out of jazz tap's musical formulas with Espiritu, a collaboration with her husband Monte Ellison. Brenda Bufalino, in My Mind's on Mingus, brought a conceptual rigor to the program, as well as genuine jazz music—recapturing her lifelong tango with her mentor and jazz counterpart, Charlie Mingus. Acia Grey's Twos and Threes aimed for stark and intricate display dancing. The supremely gracious Dianne Walker skated over "Autumn Leaves" with suave delicacy, and then Dormeshia Sumbry Edwards brought passionate spontaneity to her solo—with deliberately rough-edged footwork at maximum velocity—a tap artist who belonged to the new century, whatever her debt to the past.
The evening was crowned—and the fate of future women in tap foretold—by Michelle Dorrance, Josette Wiggan, and Chloe Arnold, the youngest women in the corps, who tore the place apart with their unscheduled trio, the only group work on the program. Based on the challenge dance—the original forum for tap virtuosity—they traded and ornamented steps with joyous vehemence. Separately, Dorrance tapped without music, in darkness, and reminded the audience of the essentials in this percussive art form. Wiggan made a take-no-prisoners attack at maximum complexity, with a style of dancing informed by contemporary black culture. Arnold's feisty tap adaptation of the Maya Angelou poem, Phenomenal Woman, woke the house and told the people: "Pretty women wonder where my secret lies / I'm not cute or built to suit a fashion model's size," she recited. Changing from low-heeled to high-heeled shoes to hit the floor with sass and fury, she demonstrated where her powers lay—in "the span of my hips, the stride of my step. . .the swing in my waist, the joy in my feet. . . the click in my heels . . . I'm a woman. Phenomenal woman. That's me."
Embracing Tradition, Forging Change
Tap dancers take deepest into their hearts the revering of old souls; perhaps because, as a cultural form more than a dance practice, tap eternally binds dancers into a family that always looks back as it moves ahead. The elders are revered and respected, always respected. One could even argue that, like the African talking drums, every rhythm that is tapped on a stage sounds out praise for its elders. Their ghosts are ever present, implicit in every step. Formal and informal tributes to the elders are incorporated into every tap dance festival's culminating evening of performance, in public honor of the rhythmic wit and poetry the masters have transmitted.
Most moving is when young dancers pay tribute to the masters. And so it was at the Tap City Youth Concert in 2008 at Symphony Space, when thirty-four members of the American Tap Dance Foundation's Tap City Youth Ensemble, a multiracial and multiethnic group of intermediate and advanced girls and boys aged ten to eighteen, in their tribute to Honi Coles and The Copasetics, performed a historic suite of dances of that legendary tap fraternity founded in 1949 in memory of Bill "Bojangles" Robinson. They began with "The Copasetics Song/Coles Stroll"; continued with "The Mayor of Harlem," Honi Coles's lyrics about the great Bojangles, and "The New Low Down," Robinson's signature number in Blackbirds of 1928 as performed by Robinson and the Blackbird chorus; and ended with the "Copasetics Chair Dance." The suite comprised an initiation into the classic jazz-tapping style of the 1930s and 1940s—but the words and steps were also a mantra of brotherhood and sisterhood that inscribed the tap dancing:
When you feel blue, The best you can do Is tell yourself to forget it,
sang the dancers as they clicked and scuffed their heels in strolling walking patterns that snaked around the stage.
Life's a funny thing
It's really great when you sing, And everything will be copasetic…
The cheery refrain recalled the Copasetics' happiest moments onstage; in the Preamble to their organization, they had pledged themselves "a social, friendly, benevolent club," its members "to do all in our power to promote the fellowship and strengthen the character within our ranks."
Never look down, Chin up and don't frown, Don't let life get pathetic. Show a happy face to the whole human race, And everything will be copasetic . . .
And everything will be . . .
"So long as we pledge to do all in our power to promote the fellowship and strengthen the character within our ranks…"
And everything will be,
"So long as it remains our every desire to create only impressions that will establish our group in all walks of life as decent and respectable . . ."
And everything will be . . .
So long as we have fingers to snap, hands to clap, feet to raise up the beat,
Hill, Constance Valis. Tap Dance in America: A Short History. Library of Congress, https://www.loc.gov/item/ihas.200217630/.
|
Yet both of these dance forms trace their origins and evolution to a percussive dance tradition that developed in America several hundred years ago.
Tap dance is an indigenous American dance genre that evolved over a period of some three hundred years. Initially a fusion of British and West African musical and step-dance traditions in America, tap emerged in the southern United States in the 1700s. The Irish jig (a musical and dance form) and West African gioube (sacred and secular stepping dances) mutated into the American jig and juba. These in turn became juxtaposed and fused into a form of dancing called "jigging" which, in the 1800s, was taken up by white and black minstrel-show dancers who developed tap into a popular nineteenth-century stage entertainment. Early styles of tapping utilized hard-soled shoes, clogs, or hobnailed boots. It was not until the early decades of the twentieth century that metal plates (or taps) appeared on shoes of dancers on the Broadway musical stage. It was around that time that jazz tap dance developed as a musical form parallel to jazz music, sharing rhythmic motifs, polyrhythm, multiple meters, elements of swing, and structured improvisation. In the late twentieth century, tap dance evolved into a concertized performance on the musical and concert hall stage. Its absorption of Latin American and Afro- Caribbean rhythms in the forties has furthered its rhythmic complexity. In the eighties and nineties, tap's absorption of hip-hop rhythms has attracted a fierce and multi-ethnic new breed of male and female dancers who continue to challenge and evolve the dance form, making tap the most cutting-edge dance expression in America today.
Unlike ballet with its codification of formal technique, tap dance developed from people listening to and watching each other dance in the street, dance hall, or social club where steps were shared, stolen and reinvented.
|
yes
|
Dance
|
Did tap dancing originate in America?
|
yes_statement
|
"tap" "dancing" "originated" in america.. america is the birthplace of "tap" "dancing".
|
https://ums.org/2019/06/21/from-margins-to-mainstream-tap-dance-history/
|
From Margins to Mainstream: A Brief Tap Dance History – UMS ...
|
From Margins to Mainstream: A Brief Tap Dance History
Photo: Dorrance Dance in performance. The company returns to Ann Arbor on February 21-22, 2020. Photo by Nicholas Van Young.
Brief History
Tap dance originated in the United States in the early 19th century at the crossroads of African and Irish American dance forms. When slave owners took away traditional African percussion instruments, slaves turned to percussive dancing to express themselves and retain their cultural identities. These styles of dance connected with clog dancing from the British Isles, creating a unique form of movement and rhythm.
Early tap shoes had wooden soles, sometimes with pennies attached to the heel and toe. Tap gained popularity after the Civil War as a part of traveling minstrel shows, where white and black performers wore blackface and belittled black people by portraying them as lazy, dumb, and comical.
Evolution
20th Century Tap Tap was an important feature of popular Vaudeville variety shows of the early 20th century and a major part of the rich creative output of the Harlem Renaissance.
Tap dancers began collaborating with jazz musicians, incorporating improvisation and complex syncopated rhythms into their movement. The modern tap shoe, featuring metal plates (called “taps”) on the heel and toe, also came into widespread use at this time. Although Vaudeville and Broadway brought performance opportunities to African-American dancers, racism was still pervasive: white and black dancers typically performed separately and for segregated audiences.
Tap’s popularity declined in the second half of the century, but was reinvigorated in the 1980s through Broadway shows like 42nd Street and The Tap Dance Kid.
Tap in Hollywood
From the 1930s to the 1950s, tap dance sequences became a staple of movies and television. Tap stars included Shirley Temple, who made her film tap dance debut at age 6, and Gene Kelly, who introduced a balletic style of tap. Fred Astaire, famous for combining tap with ballroom dance, insisted that his dance scenes be captured with a single take and wide camera angle. This style of cinematography became the norm for tap dancing in movies and television for decades.
The Greats
Master Juba (ca. 1825 – ca. 1852) was one of the only early black tap dancers to tour with a white minstrel group and one of the first to perform for white audiences. Master Juba offered a fast and technically brilliant dance style blending European and African dance forms.
Bill “Bojangles” Robinson (1878—1949) began dancing in minstrel shows and was one of the first African-American dancers to perform without blackface. He adapted to the changing tastes of the era, moving on to vaudeville, Broadway, Hollywood Radio programs, and television. Robinson’s most popular routine involved dancing up and down a staircase with complex tap rhythms on each step.
Clayton “Peg Leg” Bates (1907-98) continued to dance with after losing a leg in a cotton gin accident as a child. He danced in vaudeville, on film, and was a frequent guest on the Ed Sullivan Show. Bates also frequently performed for others with physical disabilities.
Jeni Le Gon (1916-2012) was one of the first black women to become a tap soloist in the first half of the 20th century. She wore pants rather than skirts when she performed and, as a result, she developed an athletic, acrobatic style, employing mule kicks and flying splits, more in the manner of the male dancers of the time.
The Nicholas Brothers, Fayard (1914-2006) and Harold (1921-2000), had a film and television tap career spanning more than 70 years. Impressed by their choreography, George Balanchine invited them to appear in his Broadway production of Babes in Arms. Their unique style of suppleness, strength, and fearlessness led many to believe that they were trained ballet dancers.
Savion Glover (b. 1973) is best known for starring in the Broadway hit The Tap Dance Kid. Glover mixes classic moves like those of his teacher Gregory Hines with his own contemporary style. He has won several Tony awards for his Broadway choreography.
The Blues Project – Dorrance Dance Company with Toshi Reagan and BIGLovely
Dorrance Dance returns to Ann Arbor on February 21-22, 2020. Content created in collaboration by Jordan Miller and Terri Park.
|
From Margins to Mainstream: A Brief Tap Dance History
Photo: Dorrance Dance in performance. The company returns to Ann Arbor on February 21-22, 2020. Photo by Nicholas Van Young.
Brief History
Tap dance originated in the United States in the early 19th century at the crossroads of African and Irish American dance forms. When slave owners took away traditional African percussion instruments, slaves turned to percussive dancing to express themselves and retain their cultural identities. These styles of dance connected with clog dancing from the British Isles, creating a unique form of movement and rhythm.
Early tap shoes had wooden soles, sometimes with pennies attached to the heel and toe. Tap gained popularity after the Civil War as a part of traveling minstrel shows, where white and black performers wore blackface and belittled black people by portraying them as lazy, dumb, and comical.
Evolution
20th Century Tap
Tap was an important feature of popular Vaudeville variety shows of the early 20th century and a major part of the rich creative output of the Harlem Renaissance.
Tap dancers began collaborating with jazz musicians, incorporating improvisation and complex syncopated rhythms into their movement. The modern tap shoe, featuring metal plates (called “taps”) on the heel and toe, also came into widespread use at this time. Although Vaudeville and Broadway brought performance opportunities to African-American dancers, racism was still pervasive: white and black dancers typically performed separately and for segregated audiences.
Tap’s popularity declined in the second half of the century, but was reinvigorated in the 1980s through Broadway shows like 42nd Street and The Tap Dance Kid.
Tap in Hollywood
From the 1930s to the 1950s, tap dance sequences became a staple of movies and television. Tap stars included Shirley Temple, who made her film tap dance debut at age 6, and Gene Kelly, who introduced a balletic style of tap.
|
yes
|
Dance
|
Did tap dancing originate in America?
|
yes_statement
|
"tap" "dancing" "originated" in america.. america is the birthplace of "tap" "dancing".
|
https://www.britannica.com/art/tap-dance
|
Tap dance | Origin, History, Styles, & Facts | Britannica
|
tap dance, style of dance in which a dancer wearing shoes fitted with heel and toe taps sounds out audible beats by rhythmically striking the floor or any other hard surface.
Early history
Tap originated in the United States through the fusion of several ethnic percussive dances, primarily West African sacred and secular step dances (gioube) and Scottish, Irish, and English clog dances, hornpipes, and jigs. Until the last few decades of the 20th century, it was believed that enslaved Africans and Irish indentured servants had observed each other’s dances on Southern plantations and that tap dancing was born from this contact. In the late 20th century, however, researchers suggested that tap instead was nurtured in such urban environments as the Five Points District in New York City, where a variety of ethnic groups lived side by side under crowded conditions and in constant contact with the distinctly urban rhythms and syncopations of the machine age.
In the mid- to late 1800s, dance competitions were a common form of entertainment. Later called “cutting contests,” these intense challenges between dancers were an excellent breeding ground for new talent. (One of the earliest recorded such challenges took place in 1844 between Black dancer William Henry Lane, known as Master Juba, and Irish dancer John Diamond.) Dancers matured by learning each other’s techniques and rhythmic innovations. The primary showcase for tap of this era was the minstrel show, which was at its peak from approximately 1850 to 1870.
During the following decades, styles of tap dancing evolved and merged. Among the ingredients that went into the mix were buck dancing (a dance similar to but older than the clog dance), soft-shoe dancing (a relaxed, graceful dance done in soft-soled shoes and made popular in vaudeville), and buck-and-wing dancing (a fast and flashy dance usually done in wooden-soled shoes and combining Irish clogging styles, high kicks, and complex African rhythms and steps such as the shuffle and slide; it is the forerunner of rhythm tap). Tap dance as it is known today did not emerge until roughly the 1920s, when “taps,” nailed or screwed onto shoe soles at the toes and heels, became popular. During this time entire chorus lines in shows such as Shuffle Along (1921) first appeared on stage with “tap shoes,” and the dance they did became known as tap dancing.
Tap dance was a particularly dynamic art form, and dancers continually molded and shaped it. Dancers such as Harland Dixon and Jimmy Doyle (a duo known for their buck-and-wing dancing) impressed audiences and influenced developing dancers with their skill, ingenuity, and creativity. In addition to shaping dance performance, tap dancers influenced the evolution of popular American music in the early to mid-20th century; drummers in particular drew ideas as well as inspiration from the dancers’ rhythmic patterns and innovations. Early recordings of tap dancers demonstrate that their syncopations were actually years ahead of the rhythms in popular music.
In the early 20th century, vaudeville variety shows moved to the entertainment forefront, and tap dancers such as Greenlee and Drayton, Pat Rooney, Sr., and George White traveled the country. A number of family acts formed, including that of the future Broadway actor, producer, and songwriter George M. Cohan, who with his sister, mother, and father formed the Four Cohans. The Covan brothers together with their wives formed the Four Covans, one of the most sensational fast tap acts ever. The comedian and dancer Eddie Foy, Sr., appeared with his seven tap-dancing children, the Seven Little Foys. By the late 1910s, more than 300 theatres around the country hosted vaudeville acts.
According to the producer Leonard Reed, throughout the 1920s “there wasn’t a show that didn’t feature tap dancing. If you couldn’t dance, you couldn’t get a job!” Nightclubs, vaudeville, and musicals all featured tap dancers, whose names often appeared on the many marquees that illuminated New York’s Broadway. Stars of the day, including Fred Astaire and his sister, Adele, brought yet more light to the “Great White Way” with their elegant dancing. Bill Robinson, known for dancing on the balls of his feet (the toe taps) and for his exquisite “stair dance,” was the first Black tap dancer to break through the Broadway colour line, becoming one of the best-loved and highest-paid performers of his day.
Because this was an era when tap dancing was a common skill among performers, a tap dancer had to create something unique to be noticed. The Berry Brothers’ act, for example, included rhythmic, synchronized cane twirling and dazzling acrobatics. Cook and Brown had one of the finest knockabout acts. King, King, and King danced in convict outfits, chained together doing close-to-the-floor fast tap work. Buster West tap-danced in “slap shoes”—oversized clown-style shoes that, because of their extended length, slapped audibly on the floor during a routine—and did break dancing decades before it had a name. Will Mahoney tap-danced on a giant xylophone.
The “challenge”—in which tap dancers challenged one another to a dancing “duel”—had been a major part of the tap dancer’s education from the beginning. It filtered into many theatrical acts. Possibly the finest exponents of the challenge were the Four Step Brothers, whose act consisted of furious, flying steps, then a moment when each attempted to top the others.
From the outset, tap dancers have stretched the art form, dancing to a wide variety of music and improvising new styles. Among these innovative styles were flash (dance movements that incorporated acrobatics and were often used to finish a dance); novelty (the incorporation into a routine of specialty props, such as jump ropes, suitcases, and stairs); eccentric, legomania, and comedy (each of which used the body in eccentric and comic ways to fool the eye and characteristically involved wild and wiggly leg movements); swing tap, also known as classical tap (combining the upper body movement found in 20th-century ballet and jazz with percussive, syncopated footwork, a style used extensively in the movies); class (precision dancing performed by impeccably dressed dancers); military (the use of military marching and drum rhythms); and rhythm, close floor, and paddle and roll (each of which emphasized footwork using heel and toe taps, typically of a rapid and rhythmic nature).
For each one of these styles there were hundreds of dancers creating a unique version. John Bubbles, for instance, went down in history as the “Father of Rhythm Tap.” Though he may not have been the very first tap dancer to use the heel tap to push rhythm from the 1920s jazz beat to the 1930s swing beat, he certainly was the most influential; generations of dancers learned his style. Three young dancers from Philadelphia—the Condos Brothers (Frank, Nick, and Steve)—became legendary among dancers for their exceptionally fast, rhythmic footwork; few tap dancers ever achieved Nick’s mastery of a difficult move he is credited with inventing known as the five-tap wing. Of the eccentric and legomania dancers, Buddy Ebsen, Henry (“Rubber Legs”) Williams, and Hal Leroy stand out. A unique style was invented by one of tap’s greatest dancers, Clayton (“Peg Leg”) Bates. After losing his leg at age 12, he reinvented tap to fit his own specifications—a peg and a shoe with two taps.
|
Early history
Tap originated in the United States through the fusion of several ethnic percussive dances, primarily West African sacred and secular step dances (gioube) and Scottish, Irish, and English clog dances, hornpipes, and jigs. Until the last few decades of the 20th century, it was believed that enslaved Africans and Irish indentured servants had observed each other’s dances on Southern plantations and that tap dancing was born from this contact. In the late 20th century, however, researchers suggested that tap instead was nurtured in such urban environments as the Five Points District in New York City, where a variety of ethnic groups lived side by side under crowded conditions and in constant contact with the distinctly urban rhythms and syncopations of the machine age.
In the mid- to late 1800s, dance competitions were a common form of entertainment. Later called “cutting contests,” these intense challenges between dancers were an excellent breeding ground for new talent. (One of the earliest recorded such challenges took place in 1844 between Black dancer William Henry Lane, known as Master Juba, and Irish dancer John Diamond.) Dancers matured by learning each other’s techniques and rhythmic innovations. The primary showcase for tap of this era was the minstrel show, which was at its peak from approximately 1850 to 1870.
During the following decades, styles of tap dancing evolved and merged.
|
yes
|
Dance
|
Did tap dancing originate in America?
|
yes_statement
|
"tap" "dancing" "originated" in america.. america is the birthplace of "tap" "dancing".
|
https://www.adanceplace.com/history-of-tap-dance/
|
A Brief History Of Tap Dance
|
A Brief History Of Tap Dance
Looking to start tap dancing? When you take tap dance classes, you’re participating in an old and very interesting form of dancing.
Tap dance has evolved over the years and had a large impact on other types of dances and cultures.
Here is a brief history of tap dance:
Early Years In the U.S.
Tap dancing originated in the U.S. and brought together elements from a number of other ethnic dances, including West African step dances and Scottish, Irish, and English jigs.
Many elements of modern tap dance, including syncopated rhythms, come from African tribal dances and songs. When enslaved people couldn’t perform with their traditional drums, they found ways to make similar sounds with their feet and bodies to keep their culture alive.
Tap is believed to have first originated in the U.S. from African and Irish slaves observing each other’s dances on Southern plantations in the 19th century.
Dance competitions and minstrel shows were popular forms of entertainment in the post-Civil War era.
Traveling shows would include both Black and White dancers. Early wooden shoes allowed performers to combine fancy footwork with sounds that transfixed audiences. At these events, dancers would showcase their skills and learn techniques from other dancers.
One of the most famous early tap dancers was William Henry Lane, also known as Master Juba, who was one of the only Black dancers to perform with traveling white minstrel groups. His fast dancing blended African and European styles and had a huge impact on the next generation of tap dancers.
During these years, tap dancing was defined more by syncopated rhythms than by the tapping sound itself because most people performed in soft shoes or wooden shoes, similar to clogs. Some performers attached pennies to their shoes for the earliest versions of modern tap shoes.
Many variations of tap dance came out of minstrel shows, including “buck and wing”, which includes shuffle steps and taps to mark tempo. Dancers also started using both their heels and toes to create variations in movement and sound.
The Addition Of Tap Shoes
The type of tap dance we would recognize today was introduced in the 1920s when tap shoes were first created. The first tap shoes were built by nailing or screwing small pieces of metal to the toes and heels of dance shoes.
Performing groups and chorus lines started performing in tap shoes, and the dancing and footwear quickly gained popularity. Metal taps allowed for a louder and more rhythmic sound.
Around the same time, tap dancing became a popular addition to traveling vaudeville shows. Tap dancing had previously been largely an individual performance, but it became a group routine with the help of choreographers and standardized steps.
Groups and families of tap dancers traveled across the country sharing their unique rhythms. As the taps got louder, they also got faster and drew bigger crowds. Because tap dancing was defined by its unique syncopations and rhythms, many solo performers would improvise their dances. It was also at this time that tap battles were introduced.
Tap battles are still popular elements of the dance style today and involve two dancers improvising and battling back and forth with feverish footwork and taps. Tap battles became a hallmark of the style and a great way to bring the audience into the performance.
The unique sounds of tap dancing paired well with the sounds of jazz music. Jazz even soon became the most common accompaniment to tap dance. Many tap dancers began collaborating with jazz musicians to create unique performances.
Tap Goes To Broadway and Film
Tap dancing soon expanded from small traveling stages to the biggest stages of all on Broadway. Popular stars like Fred Astaire and Bill Robinson were known for their elegant and charismatic dancing.
They added a performative element to tap dancing and used it to tell a story and share emotion. Fred Astaire combined ballroom dance with tap to create a new version of the dance that was elegant and refined.
Robinson was one of the first Black tap dancers to perform for major audiences and had a major influence in bringing tap dance to the mainstream. His famous “Stair Dance” from 1918 introduced many people to the light and graceful elements of tap dance as he performed complicated movements up and down a set of stairs.
Tap dancing soon became a requirement for performers on stage and screen, which pushed dancers to create innovative new steps and styles to gain attention. It was at this time that performers started using props, incorporating acrobatics, tapping even faster, and wearing attention-grabbing costumes.
Tap dance was no longer just a simple performance—it was the main attraction and a full-fledged show. Tap dancers started performing to new types of music and crossing genre lines.
Tap dancing continued to make its way into major stage and film productions. Stars like Gene Kelly and Shirley Temple added their own personal takes to tap and continued to evolve and personalize the style. Kelly was known for adding ballet-inspired moves to tap.
During the 1930s, 40s, and 50s, nearly every theatrical production and movie had some form of tap dance sequence.
At the same time, tap dance performances became a regular occurrence at nightclubs and musical venues. Tap dancers often performed to classical music and with live orchestras at some of the best venues in major cities, especially New York. With the invention of television, tap dancers found new audiences.
Variety shows that were popular in the early days of television, including The Ed Sullivan Show and The Colgate Comedy Hour, often featured tap dancers. The new medium allowed tap dancers to perform for a wider audience.
Many performers also moved off Broadway and started tap dancing in Las Vegas, which was gaining popularity as an entertainment destination.
A Decline In Popularity
As movies moved away from musical classics, tap dance started to lose popularity. Musicals, starting with Oklahoma, moved away from tap dancing routines and more towards ballet.
During the 1950s and 60s, jazz music was replaced by rock and pop music, which was less conducive to tap dance. At the same time, fewer people were going to nightclubs as the baby boomer generation focused on getting an education and joining the workforce.
Although tap was in decline, its impact was still felt. Many aspects of tap led to the growth of jazz dance, which surged in popularity.
Tap Is Back
Despite its lull, tap managed to stay alive during the 1960s and 70s—but barely. Although most dancers found themselves with nowhere to perform, it was in the 70s that tap troupes were created. These groups kept tap alive and eventually led to younger dancers asking to learn how to tap dance.
In the 1980s, Broadway shows like 42nd Street and The Tap Dance Kid introduced new audiences to tap dance, and the style began to regain some of its original popularity. Gregory Hines is largely credited for reviving tap dance and bringing the dance back to movies and stages.
One of the biggest proponents of modern tap dancing is Savion Glover, who was taught by Hines and who starred in The Tap Dance Kid. His style of tapping adds many contemporary elements, which made it a hit with a new generation of dancers.
Modern tap dancers have largely moved away from old-timey music and incorporated more popular music, especially pop and hip hop, into their routines to better connect with audiences.
Modern tap dance has two main categories: rhythm and theatrical. Theatrical tappers bring back elements of classic dancers from the 1930s and 40s and dance with their entire bodies as they move around the stage.
Rhythmic tappers like Glover use their feet to make music. For these dancers, it’s more about the rhythms and precise sounds, and movement is typically contained to the feet. Both types of tap dance are staples of the dance world and growing in popularity as a new generation of tap dancers emerges.
Today, tap dance is popular among dancers of all ages and there are dance studios teaching tap across the country. Many popular movies including Happy Feet have revived the dance style and shown how it can be performed in a variety of situations.
Because tap can be interpreted in a number of ways, dancers can add their unique style and make each performance their own.
Participating in tap dance classes allows you to join the unique history of tap dance and add your own personal spin. Tap dance has evolved over the years and played a major role in shaping the culture of the United States.
Learning how to tap dance puts you right in the middle of the history of this fascinating and beautiful art form.
|
A Brief History Of Tap Dance
Looking to start tap dancing? When you take tap dance classes, you’re participating in an old and very interesting form of dancing.
Tap dance has evolved over the years and had a large impact on other types of dances and cultures.
Here is a brief history of tap dance:
Early Years In the U.S.
Tap dancing originated in the U.S. and brought together elements from a number of other ethnic dances, including West African step dances and Scottish, Irish, and English jigs.
Many elements of modern tap dance, including syncopated rhythms, come from African tribal dances and songs. When enslaved people couldn’t perform with their traditional drums, they found ways to make similar sounds with their feet and bodies to keep their culture alive.
Tap is believed to have first originated in the U.S. from African and Irish slaves observing each other’s dances on Southern plantations in the 19th century.
Dance competitions and minstrel shows were popular forms of entertainment in the post-Civil War era.
Traveling shows would include both Black and White dancers. Early wooden shoes allowed performers to combine fancy footwork with sounds that transfixed audiences. At these events, dancers would showcase their skills and learn techniques from other dancers.
One of the most famous early tap dancers was William Henry Lane, also known as Master Juba, who was one of the only Black dancers to perform with traveling white minstrel groups. His fast dancing blended African and European styles and had a huge impact on the next generation of tap dancers.
During these years, tap dancing was defined more by syncopated rhythms than by the tapping sound itself because most people performed in soft shoes or wooden shoes, similar to clogs. Some performers attached pennies to their shoes for the earliest versions of modern tap shoes.
Many variations of tap dance came out of minstrel shows, including “buck and wing”, which includes shuffle steps and taps to mark tempo. Dancers also started using both their heels and toes to create variations in movement and sound.
|
yes
|
Dance
|
Did tap dancing originate in America?
|
yes_statement
|
"tap" "dancing" "originated" in america.. america is the birthplace of "tap" "dancing".
|
https://westharlem.art/2021/09/26/history-of-tap-dance/
|
History of Tap Dance – WEST HARLEM ART FUND
|
History of Tap Dance
Tap dance is an indigenous American dance genre that evolved over a period of some three hundred years. Initially a fusion of British and West African musical and step-dance traditions in America, tap emerged in the southern United States in the 1700s. The Irish jig (a musical and dance form) and West African gioube (sacred and secular stepping dances) mutated into the American jig and juba. These in turn became juxtaposed and fused into a form of dancing called “jigging” which, in the 1800s, was taken up by white and black minstrel-show dancers who developed tap into a popular nineteenth-century stage entertainment. Early styles of tapping utilized hard-soled shoes, clogs, or hobnailed boots. It was not until the early decades of the twentieth century that metal plates (or taps) appeared on shoes of dancers on the Broadway musical stage. It was around that time that jazz tap dance developed as a musical form parallel to jazz music, sharing rhythmic motifs, polyrhythm, multiple meters, elements of swing, and structured improvisation. In the late twentieth century, tap dance evolved into a concertized performance on the musical and concert hall stage. Its absorption of Latin American and Afro-Caribbean rhythms in the forties has furthered its rhythmic complexity. In the eighties and nineties, tap’s absorption of hip-hop rhythms has attracted a fierce and multi-ethnic new breed of male and female dancers who continue to challenge and evolve the dance form, making tap the most cutting-edge dance expression in America today.
As Africans were transplanted to America, African religious circle dance rituals, which had been of central importance to their life and culture, were adapted and transformed (Stuckey 1987). The African American Juba, for example, derived from the African djouba or gioube, moved in a counterclockwise circle and was distinguished by the rhythmic shuffling of feet, clapping hands, and “patting” the body, as if it were a large drum. With the passage of the Slave Laws in the 1740s prohibiting the beating of drums for the fear of slave uprisings, there developed creative substitutes for drumming, such as bone-clapping, jawboning, hand-clapping, and percussive footwork. There were also retentions by the indentured Irish, as well as parallel retentions between the Irish and enslaved Africans, of certain music, dance and storytelling traditions. Both peoples took pride in skills like dancing while balancing a glass of beer or water on their heads, and stepping to intricate rhythmic patterns while singing or lilting these same rhythms. Some contend that the cakewalk, a strutting and prancing dance originated by plantation slaves to imitate and satirize the manners of their white masters, borrows from the Irish tradition of dancing competitively for a cake. And that Africans may have transformed the Irish custom of jumping the broomstick into their own unofficial wedding ceremony at a time when slaves were denied Christian rites.
The conceptualization of tap dance as an Afro-Irish fusion, fueled by the competitive interplay of the challenge in a battle for virtuosity and authority, puts into focus issues of race and ethnicity; and inevitably takes on the painful history of race, racism, and race relations in America. In addition, there are issues of class, in which tap was considered a popular entertainment and placed in the category of “low-art,” and therefore not worthy of being presented on the concert stage. Moreover, the strange absence of women in early accounts of jigging competitions forces a consideration of gender in the evolution of tap dance which, for most of the twentieth century, was considered “a man’s game.” That has become a kind of mythologized truth, given the plethora of tap histories that have blindsided women. By inference or direct statement, women were told they were “weak”; they lacked the physical strength needed to perform the rhythm-driven piston steps, multiple-wing steps, and flash and acrobatic steps that symbolized the (male) tap virtuoso’s finish to a routine. Women were “nurturers,” not “competitors,” and therefore did not engage in the tap challenge. A woman’s role was not as a soloist but as a member of the chorus line.
Lotta (Mignon) Crabtree
Lotta (Mignon) Crabtree was born on Nassau Street in New York City in 1847, and raised in California during the Gold Rush, where she learned ballet, fandangos, and the Highland fling. Since in the 1850s half of California’s population was Irish, her teachers made sure she excelled at the jig. As a dancer touring mining camps, she was introduced to an African-American dancer who taught her breakdowns, soft-shoes, and buck-and-wing dances. Crabtree’s fame spread throughout the country, as she was known as a performer of jigs and reels, with acrobatic flourishes. Her only competitors were the three Worrell sisters, Irene, Sophie, and Jennie, who performed in clog-dancing shoes. When it was later discovered that Jennie Worrell’s clogs had trick heels (heels that were hollowed out with tin-lined boxes placed inside and holding two bullets) that made it sound like she was dancing faster than she really was, Crabtree had no peers when it came to jig and clog. “She can dance a regular breakdown in true burnt cork style and gives an Irish Jig as well as we have ever seen it done,” wrote the New York Clipper in 1864 (Rourke 1928). In her later years she became a popular actress and the toast of Broadway. While she retired from the stage in 1891 at the age of forty-four, her renown as a female jig and breakdown dancer lasted into the early decades of the twentieth century.
Ada Overton Walker
Ada Overton Walker was born on Valentines Day in 1880 in New York’s Greenwich Village. As a child she received dance instruction from a Mrs. Thorp in midtown Manhattan. Around 1897, after graduating from Thorp’s dance school, she toured briefly with Black Patti’s Troubadours. A girlfriend invited her to model for an advertisement with Bert Williams and George Walker, who had just scored a hit in their vaudeville debut at Koster and Bial’s Music Hall. She agreed to model for the ad and subsequently joined the men to dance in the cakewalk finale. After joining John W. Isham’s Octoroon, a critic for the Indianapolis Freeman declared, “I had just observed the greatest girl dancer.” With Grace Halliday she formed the sister dance act of Overton and Halliday. They performed as the pair of Honolulu Belles in the Williams and Walkers’ The Policy Players (1899), and from there, Overton began to develop as a soloist with more substantial roles. In the musical comedy The Sons of Ham (1900) she sang and danced “Miss Hannah from Savannah” and “Leading Lady”; and in its second edition, “Society” and “Sparkling Ruby” which brought her jubilant acclaim. James Weldon Johnson wrote that she “had a low-pitched voice with a natural sob to it, which she knew how to use with telling effect in putting over a song” (Johnson 1933). Tom Fletcher remembered her as a singer who did ragtime songs and ballads equally well; and as a dancer “who could do almost anything, and no matter whether it was buck-and-wing, cakewalk, or even some form of grotesque dancing . . . she lent the performance a neat gracefulness of movement unsurpassed by anyone” (Fletcher 1954).
Published by WHAF
The West Harlem Art Fund (WHAF) is a twenty-five-year-old public art and new media organization. Like explorers from the past, who searched for new lands and people, WHAF offers opportunities for artists and creative professionals throughout NYC and beyond wishing to showcase and share their talent. The West Harlem Art Fund presents art and culture in open and public spaces to add aesthetic interest; promote historical and cultural heritage; and support community involvement in local development. Our heritage symbol, Afuntummireku-denkyemmtreku, is the double crocodile from Ghana, West Africa, which means unity in diversity.
|
History of Tap Dance
Tap dance is an indigenous American dance genre that evolved over a period of some three hundred years. Initially a fusion of British and West African musical and step-dance traditions in America, tap emerged in the southern United States in the 1700s. The Irish jig (a musical and dance form) and West African gioube (sacred and secular stepping dances) mutated into the American jig and juba. These in turn became juxtaposed and fused into a form of dancing called “jigging” which, in the 1800s, was taken up by white and black minstrel-show dancers who developed tap into a popular nineteenth-century stage entertainment. Early styles of tapping utilized hard-soled shoes, clogs, or hobnailed boots. It was not until the early decades of the twentieth century that metal plates (or taps) appeared on shoes of dancers on the Broadway musical stage. It was around that time that jazz tap dance developed as a musical form parallel to jazz music, sharing rhythmic motifs, polyrhythm, multiple meters, elements of swing, and structured improvisation. In the late twentieth century, tap dance evolved into a concertized performance on the musical and concert hall stage. Its absorption of Latin American and Afro-Caribbean rhythms in the forties has furthered its rhythmic complexity. In the eighties and nineties, tap’s absorption of hip-hop rhythms has attracted a fierce and multi-ethnic new breed of male and female dancers who continue to challenge and evolve the dance form, making tap the most cutting-edge dance expression in America today.
As Africans were transplanted to America, African religious circle dance rituals, which had been of central importance to their life and culture, were adapted and transformed (Stuckey 1987).
|
yes
|
Dance
|
Did tap dancing originate in America?
|
yes_statement
|
"tap" "dancing" "originated" in america.. america is the birthplace of "tap" "dancing".
|
https://lareviewofbooks.org/article/tap-dancing-reports-death-grossly-exaggerated/
|
Tap Dancing: Reports of Our Death Have Been Grossly Exaggerated
|
Tap Dancing: Reports of Our Death Have Been Grossly Exaggerated
THE LAST FEW YEARS have been an astounding time for tap dance. Michelle Dorrance won the MacArthur "genius" grant in recognition of her extraordinary tap choreography, while powerhouse Dormeshia Sumbry-Edwards, along with Dorrance and other tap artists, nabbed several New York Dance and Performance Awards (the Oscars of the NYC dance scene), and George C. Wolfe and Savion Glover’s Broadway envisioning of Shuffle Along garnered 10 Tony nominations. Rhythm tap artists appear regularly at Jazz at Lincoln Center and the form has infiltrated music videos, most notably Chloe Arnold’s viral take of Beyoncé’s “Formation”: 13 million views, and counting. Scholar Constance Valis Hill’s tap research database is now free to all on the Library of Congress website. It’s an encyclo-tap-edia of thousands of entries, available at the click of a finger. Tap dance, that most accessible of dance forms, has not been this easy to access in many decades. And the work is good. Elder stateswomen and young visionaries alike are dancing with stellar technique, breaking new creative ground, and reaching internet-age audiences. But if you read Brian Seibert’s What the Eye Hears: A History of Tap Dancing (Farrar, Straus and Giroux, 2015), you may get the impression that tap dance is dying.
Tap has a small body of formal scholarship compared to ballet, modern dance, or tap’s sister form of jazz music, and the release of a new history was eagerly awaited by scholars and fans alike. Seibert, a dance writer and amateur tap dancer, has made a name for himself as a specialist in the field through his New Yorker pieces and dance reviews in The New York Times. What the Eye Hears is his first book, and he proves himself a meticulous researcher, filling over 500 pages with colorful characters, events, and a kaleidoscope of tap artistry, gleaned from thousands of fragments, some glittering, some odious. The 400-year story of percussive dance starts with the first meetings of enslaved West Africans and immigrants from the British Isles on American soil, moves through the development of tap in the 19th century in minstrelsy and vaudeville, details 20th-century stars and unknowns alike in jazz-era nightclubs and Hollywood, examines postwar decline and late-century resurgence, ending in the 21st century, seemingly weeks before the book went to press. While the writing sparkles, the underlying structure, the way Seibert tells the story of tap, is retrograde. The author problematically supports his anecdotes with his own sharp critique and popular culture narratives. As Seibert repeats racially charged epithets, or aesthetic condemnations based in white-dominant society, or sexist statements rooted in tired ideas of jazz authenticity, he routinely fails to put biased pronouncements from 1790 or 1890 or 1990 into their historical or cultural context. What the Eye Hears is not a history of tap at all. It is a retelling of what people have said about tap.
Tap has delighted fans for generations. Audiences fill seats of Broadway musicals and local concert halls. Devotees troll the internet for beloved old movie clips. Kids and adults cram the floors of dancing schools. Professionals do this also, while recognizing that as soon those shoes are laced up, a tap dancer steps directly into the United States’s racial history. Systems of oppression and restricted opportunity in the United States have influenced — even determined — every era of tap dance. Seibert instructs his reader to avoid essentialisms and consider the socially constructed idea of race, but he ignores endemic racism and uses his prodigious skills as a wordsmith to recreate supremacist structures. While Seibert does include many female tap dancers in his history, the book marginalizes women’s contributions — a standard tactic in jazz writing that traditionally has privileged male artistry. Ultimately, What the Eye Hears perpetuates the worn-out idea that tap dance is a dying form. Untrue, Mr. Seibert. As contemporary tap artists can tell you: Reports of our death have been grossly exaggerated.
¤
Tap dance emerged out of minstrelsy and its entwinement with this history grows clear when you stop to think that “Jim Crow” was originally a stock minstrel character of the happy, dancing black man. Tap dance historians must contend with multiple complexities, what scholar Eric Lott in Love and Theft: Blackface Minstrelsy and the American Working Class (1993) calls “the terrible pleasures” of minstrelsy. From the 1840s, white men in blackface and black men in blackface developed a massively entertaining form based in virtuosic foot techniques and rhythmic innovations. Buck-and-wing and soft shoe — the 19th-century predecessors of 20th-century tap — evolved within a setting of ethnic humor, appropriation, and exploitation. The seemingly dimwitted Sambo with articulate feet danced on every minstrel stage, in every city, town, and country carnival, reinforcing the argument that blacks required the “civilizing influence” of enslavement and white rule.
Minstrel performers brought to the stage a 19th-century rhythmic blend that originated in two dancing cultures that met at the United States’s black-white color line. From the 1600s through the 1800s, the juba, ring shout, and other secular and religious step dancing of enslaved Africans and free blacks interacted with the jig, reel, and clog dancing of indentured servants and immigrants from the British Isles. Constance Valis Hill, in Tap Dancing America: A Cultural History (2009), offers the term “Afro-Irish fusion” to describe the hybridization of West African body movements and rhythms with Anglo-Irish footwork techniques, noting that the idea of fusion neither equalizes the contributions of Irish and African Americans, nor privileges one over the other. Seibert ignores Hill’s and Lott’s scholarship and broadly dismisses the current body of research in pre-20th-century dance and minstrelsy. He offers his own interpretations, many of which fall flat. “Blackface minstrelsy was and was not about black people,” he writes, demonstrating a lack of awareness of the way minstrelsy’s degrading depictions of blacks pervaded every level of US culture and informed a nation at war with itself over how to define blackness and whiteness. Seibert collapses the multiply-signifying performances of the black dancing body, or the white embodiment of Africanist dance forms, into one-dimensional meanings, warning that minstrelsy is not “simple racial denigration,” while overlooking that de-nigr-ation (blackening) in the United States is anything but simple.
African-American, feminist scholar bell hooks writes that black men are victimized by stereotypes “that were first articulated in the nineteenth century but hold sway over the minds and imaginations of citizens of this nation in the present day” (We Real Cool: Black Men and Masculinity, 2004). She could be referring to African-American tap dance stars when she notes, “the price of visibility in the contemporary world of white supremacy is that black male identity be defined in relation to the stereotype whether by embodying it or seeking to be other than it.” The tap dancing “class acts” of the early 20th century used cool elegance, sharp clothing, and virtuosic footwork to counter the image of what hooks summarizes as the supposed “untamed, uncivilized, unthinking, and unfeeling” black brute. Today, the public identifies this sophistication with Fred Astaire, who was just one in a long line of black and white class acts, and one of the few who had control of the camera that recorded his artistry. Throughout What the Eye Hears, Seibert fails to attend to the power structures at work or the motives of those who held the pen or camera. In 1935, Bill “Bojangles” Robinson hit international fame as the affable, dancing “Uncle,” a role it must be noted that Robinson did not play on Broadway or vaudeville. The myth of the childlike but competent Uncle Tom, the faithful retainer of the “Old South,” assured whites in the 1860s and 1930s of “simpler” times when blacks “knew their place.” Seibert rejects the idea that Hollywood needed this race casting to ensure the sale of movies to the Southern market, and wonders how anyone could feel “threatened by a snowy-haired black caretaker.” Seibert misses the point that Robinson negotiated his fame within an industry that promoted him precisely because he could embody their vision of a non-threatening black man.
Gregory Hines, unlike his forebears, promoted tap dance by being a sexy charmer. Seibert makes much of this, but again seems not to recognize the racialized constructions of masculinity involved. Hines’s movie Tap (1989) opens with him dancing in a prison cell, anticipating the arguments made by Michelle Alexander about mass incarceration in what she calls the era of The New Jim Crow (The New Jim Crow: Mass Incarceration in the Age of Colorblindness, 2010). Seibert unwittingly perpetuates ideas of the criminal black body by framing the 1950s bebop tap artistry of Baby Laurence and Teddy Hale with repeated comments about their jail time.
¤
Seibert traces the formative years of tap dance, from minstrelsy to ragtime, with a wealth of details on the blacks and whites, women and men, who jigged, clogged, ragged, and tapped the Mobile Buck. The music of these dancing feet, though, is strangely silent. There is little reference to the rhythmic revolution danced by these hoofing and soft-shoeing bodies, who very likely initiated the swing groove that contributed to the birth of jazz. Instead, what the eye hears in these chapters is predominantly the n-word, repeated dozens of times in as many pages, or “coon,” used 10 times in a single page. Seibert explains the words “carried a different charge” depending on who was using them yet seems blithely unaware of the impact of his own use, often without quotation marks or a frame of reference. As black lesbian social critic Audre Lorde wrote, “The Master’s Tools Will Never Dismantle the Master’s House” (published in Sister Outsider: Essays and Speeches, 1984); but in Seibert’s attempts to illustrate the racial milieu, he eagerly unearths the master’s tools — racial epithets — and rebuilds the house of white supremacy with a new, 21st-century platform for racist language.
Seibert also quotes snarky or negative reporting about tap, sometimes racially charged, sometimes misogynist, like a musical coda or the two-bar break that concludes every traditional tap dance phrase. The negative reviews and racialized commentary on Leticia Jay’s Tap Happenings illustrate his use of historic evidence. Jay’s 1960s productions pulled older tap men out of retirement and laid the groundwork for the Tap Renaissance of the following decades. The section opens with a New York Times review of the great dancer Chuck Green alongside Leticia Jay, but using his own voice he seems to agree with reviews that “unfavorably contrasted Green’s ease with her self-indulgent straining.” Insulting a pioneering woman who danced with his beloved male hoofers, he describes Jay, with her hair in an Afro, as “a minstrel Black Panther.” John Bubbles, who had revolutionized tap rhythms and technique in the 1920s by abandoning buck-and-wing phrasing and footwork, also loses out in this scene. In 1967, the headliner opted out of the scruffy Tap Happenings and appeared instead with show biz royalty — Judy Garland — at the Palace Theatre, the site of his former triumphs. Seibert provides a backhanded jab again via The New York Times, who described Bubbles as “a veteran trouper from the uncomplicated, naive, pre-Stokely Carmichael era.” In the next sentence, Seibert wheels Bubbles off the tap dance stage on a gurney, noting that after his “quaint” shows at the Palace, the father of rhythm tap suffered a stroke and never danced again.
Seibert’s narration swoops into the middle of tap scenes and eyes fascinating tap dance tidbits. Then, much like a seagull, he flaps sand in our eyes and poops on the way out. Eloquently. All over tap dance history. While his formula of setting up dance targets and lobbing negative critiques is certainly an accepted practice in his day job as a New York Times dance reviewer, a historian has a higher responsibility. By positioning commentators of the past as accurate informants, their racism, misogyny, and other biases get the last word, again and again, unaccompanied by any hint of their worldview. (Imagine a history of Obama’s presidency, written a hundred years from now, using only the information from Fox News). Seibert neglects to provide footnotes for large sections of writing and fails to place his research in conversation with what scholars of the last 25 years have said about tap dance — or for that matter, about minstrelsy, jazz cultural studies, popular dance, theater, American pop culture, African-American studies, critical race theory, or feminism. He neither acknowledges nor honestly refutes Hill’s Tap Dancing America, the first 21st-century comprehensive history of tap, and dismisses dance historian and biographer Jacqui Malone as a “ghost writer.” Seibert can be commended for fresh narratives based on the original interview notes made by Marshall and Jean Stearns for their seminal work Jazz Dance: The Story of American Vernacular Dance (1968). But he casts Jean Stearns as her husband’s stenographer, not as a jazz authority and co-author in her own right.
¤
My recommendation to readers interested in tap dance is to skip the first third and last third of the tome. When Brian Seibert is good, he’s very, very good. And when he’s bad, he’s horrid. Read the charming “Opening Act,” the author’s spot-on account of holding his own at Buster Brown’s Crazy Tap Jam while Savion Glover and his acolytes tear up the floor. Then jump over three centuries of early history and start with the delightful section on Jazz Age New York City. Poignant and exciting, finally the musicality of the tap dancing jumps off the page. From the 1920s to the 1950s, Seibert revels in the glamour of jazz virtuosity with a cornucopia of big stars and lesser-knowns, including black and white women tap soloists and chorus dancers who hoofed in nightclubs and early TV, and obscure tap choreographers who toiled behind the scenes of hit shows and movie musicals.
These sections succeed for the exact reason the beginning and end sections fail: Seibert allows dancers and jazz musicians to report on tap artistry. He meticulously mines the wealth of interviews and oral histories archived by the Smithsonian and the Institute of Jazz Studies, weaving a multitude of voices into a compelling and nuanced narrative about the improvisational bebop artistry of Hale and Laurence. A midcentury chapter is aptly titled “Before the Fall,” because the work makes clear that only a handful of artists belong in Seibert’s concept of paradise. When he cites established white writers like John Martin, dean of The New York Times dance criticism, the critiques are positioned in service to their favorite dancers, including the enormously popular Paul Draper, who tap danced in concert halls to classical music before his career was cut short by the McCarthy blacklist. Seibert superbly delineates Astaire’s unstaunched Hollywood creativity and provides a fascinating discussion of Astaire and Bubbles at the intersection of tap’s segregated worlds, raising “perennial questions of imitation and theft.” Even the author’s outright dislike of Gene Kelly, who “didn’t advance the art of tap on film so much as preside over its eclipse,” cannot dampen the love-fest.
The remainder of Seibert’s history is underpinned by ideas from old authenticity wars that have raged throughout writings on popular dance and music since the beginnings of African-American cultural forms — traditional narratives that have dismissed tap and jazz as low art, lacking rigor or history. Proponents of bebop managed to wrest jazz from minstrelsy by elevating the music as an aural form, rather than visual, complex instead of commercial. Scholar Jayna Brown, in Babylon Girls: Black Women Performers and the Shaping of the Modern (2008), points out that official histories maneuver jazz into the “art” category by distancing it from the bodies of black women. Jazz authenticity has long been promoted as manly and straight, with white authors defining manliness in racialized terms. Seibert aligns his story with jazz’s master narrative and employs the alchemy established by the Stearnses of positioning the golden legacy of midcentury jazz tap artistry above the dross of all the female bodies and effeminate (read gay) Broadway tap that followed.
The women of the Tap Renaissance — the resurgence of tap artistry of the 1970s to the present — picked up where the bebop artistry of their mentors left off. A new breed of female choreographers and soloists collaborated with live musicians (a rarity on the modern dance concert stage) and created new polyrhythmic compositions for tap dancing bodies and jazz music. Twenty-first century solo improvisers and ensemble choreographers alike are still working the artistic ground broken by these women, but Seibert is uninterested. The surging thrill of the Jazz Era is gone and Seibert turns mean with flaccid descriptions of women who could be considered his tap mothers and grandmothers: Brenda Bufalino (Bufalino was my teacher and I performed in her company, the American Tap Dance Orchestra), Jane Goldberg, Lynn Dally, Dianne Walker, Anita Feldman, Linda Sohl-Donnell, Heather Cornell, and others. He notes that “Bufalino struggled against preconceptions of female performers over forty,” while recreating the exact same environment of dismissive disdain. Once again, the eye does not hear Bufalino’s unique musical concept of orchestral tap. Instead, Seibert claims that she gets in the way of her art with an ungainly body style and “perfunctory” floor patterns. Another lauded woman soloist is noted only for her weight. The “brightly vanilla,” “chipper” ensemble of a younger female choreographer “came together haphazardly” and receives a patronizing compliment that her shows were best at maintaining “the spirit of the old guys.” One tap composer is known for her “gimmick” and “art school exercises,” another for works that are not “of the highest originality, invention or poetry.” Another artist “never improved as a performer,” and her leadership and choreography on the concert stage is described as a “tap subculture,” as if decades of tap innovation amount to crashers at someone else’s party.
Seibert insists that “the place of women in tap persisted as a question” throughout the 1980s and ’90s, obscuring the fact that prominent arts presenters, the older generation of tap dancers, audiences, and even many critics applauded their concert work. Instead, Seibert again positions reviews out of context and offers a fresh platform for racist aesthetic condemnations like Arlene Croce’s 1980s critique of women tap dancers and their “low-primate stuff.” While Seibert discusses the artistry of a few younger African-American women in his later chapters, particularly Ayodele Casel and Dormeshia Sumbry-Edwards, he excludes key black tap dancing women of the last 30 years — Mercedes Ellington, Germaine Goodson, Karen Callaway Williams — and marginalizes choreographers like Deborah Mitchell and Germaine Ingram, who are mentioned only as the protégés of older hoofers.
Seibert’s repeated racisms and dismissal of women’s work, interspersed with some very fine writing, kind of mimic all those B-musicals that are unwatchable now except for a few minutes of good tap dancing. But his erudite biases do the greatest disservice by misleading the reader toward his erroneous — however meticulously researched and lovingly detailed — funeral knell: tap is dead, tap is dead, tap is dead. The stench of tap’s purportedly decomposing corpse hangs over every chapter as the grim reapers of heroin, heart disease, and HUAC scythe down all the really good male talent, leaving a bunch of middle-aged tap mothers who embarrass Seibert and disappear from his story by late century, despite the fact that all of these black and white women continue to perform, choreograph, produce, teach, and lead to this day.
Seibert’s last pages whine about the younger generation of 21st-century artists — “why can’t they use their bodies with fuller and more articulate expressiveness […] why can’t they be more poetically suggestive and structurally sophisticated?” — as if hammering nails into what he sees as tap’s coffin. Every major reviewer of What the Eye Hears has jumped on the idea of tap as a dying art form, writing gleeful eulogies. True, tap dance does not have the large audiences that ballet, jazz music, or hip-hop-inspired dance styles enjoy. It also doesn’t have the financial support available to these forms. The 21st-century arts economy has dealt brutal blows of paltry arts funding and skyrocketing real estate, making it almost impossible for young professionals and established companies to afford housing, rehearsal space, or concert production. This means little to Seibert, who paints a world where tap dancers passively accept their lot, when in fact, the form has responded to artistic, social, and economic challenges with breathtaking creativity.
The question remains: For whom is Seibert writing? Not really for the casual reader, at 500 pages, nor for the tap dance fan, who can tire of the densely detailed early history before the fun stuff starts. Not for the professional tap dancer: Seibert manages to disparage or misrepresent the majority of current mentors and innovators. I cannot recommend the book for high school or college students unless instructors spend valuable class time providing context for an overuse of hate speech that serves little purpose in elucidating the history. Fellow researchers and graduate students are left in the dark with Seibert’s refusal to thoroughly cite sources or place his ideas in conversation with current scholarship. What the Eye Hears reads like the book Seibert may have wished for when he was a gawky Los Angeles kid following his sister to tap class. The smart teen who falls in love with tap, wants to find out every little detail, and doesn’t feel the need to examine his own participation in the United States’s power structures.
So, au contraire, Mr. Seibert. Tap is not dying. Far from it. If tap dancers know anything, it is how to roll with bad economic times and changing tastes. We keep putting on our shoes and making music with our feet. Tap dance certainly won’t be stopped by some bad press.
|
As contemporary tap artists can tell you: Reports of our death have been grossly exaggerated.
¤
Tap dance emerged out of minstrelsy and its entwinement with this history grows clear when you stop to think that “Jim Crow” was originally a stock minstrel character of the happy, dancing black man. Tap dance historians must contend with multiple complexities, what scholar Eric Lott in Love and Theft: Blackface Minstrelsy and the American Working Class (1993) calls “the terrible pleasures” of minstrelsy. From the 1840s, white men in blackface and black men in blackface developed a massively entertaining form based in virtuosic foot techniques and rhythmic innovations. Buck-and-wing and soft shoe — the 19th-century predecessors of 20th-century tap — evolved within a setting of ethnic humor, appropriation, and exploitation. The seemingly dimwitted Sambo with articulate feet danced on every minstrel stage, in every city, town, and country carnival, reinforcing the argument that blacks required the “civilizing influence” of enslavement and white rule.
Minstrel performers brought to the stage a 19th-century rhythmic blend that originated in two dancing cultures that met at the United States’s black-white color line. From the 1600s through the 1800s, the juba, ring shout, and other secular and religious step dancing of enslaved Africans and free blacks interacted with the jig, reel, and clog dancing of indentured servants and immigrants from the British Isles. Constance Valis Hill, in Tap Dancing America: A Cultural History (2009), offers the term “Afro-Irish fusion” to describe the hybridization of West African body movements and rhythms with Anglo-Irish footwork techniques, noting that the idea of fusion neither equalizes the contributions of Irish and African Americans, nor privileges one over the other. Seibert ignores Hill’s and Lott’s scholarship and broadly dismisses the current body of research in pre-
|
yes
|
Dance
|
Did tap dancing originate in America?
|
no_statement
|
"tap" "dancing" did not "originate" in america.. america is not where "tap" "dancing" "originated".
|
https://www.danceus.org/irish-dance/
|
Irish Dance: History, Music, Styles, Steps, Dresses, Shoes ...
|
Irish Dance
Irish dance, or Irish dancing, refers to the traditional Gaelic or Celtic dance forms that originated in Ireland. It can be performed as a solo or in groups of up to twenty or more trained dancers. In Ireland, Irish dance is part of social dancing or may be reserved for formal performances and competitions.
It is traditionally performed with intricate footwork and is best known for the dancers' stiff upper body. Unlike other dance forms, Irish dancers do not move their arms or hands, so that the footwork is accented.
CONTENT
The History of Irish Dance
The history of Ireland is also the history of Irish Dance. The actual date of its origin has never been specifically determined. However, Irish history is steeped in Druidic, Celtic and other religious traditions that affected the origins of Irish dance. For example, processionals in Druidic and Celtic religious practices required precision movement, as do Irish reels and jigs.
The Celts are a 2,000-year-old civilization that brought their own folk dances with them. Many of their dances consisted of circular formations around sacred trees or of certain patterns performed by males and females in a religious rite.
If any outside style has influenced Irish Dance, it may have been the Quadrille. Ireland has been a country of many travelers, who brought various continental dance styles with them, and the Quadrille was one of the styles that impacted Irish dance.
The Quadrille was popular across Europe in the 18th and 19th centuries, when royalty held balls and cotillions, and it spread to England and Ireland around the early 19th century.
A Quadrille is a square dance performed by four couples. It contains five choreographic figures, each of which is a complete dance sequence in itself. Thus, it is easy to see how Irish reels became a prominent part of Irish dance.
Irish Dance Costumes, Dresses and Shoes
In the early days of Irish dance, the dance costumes for females were basically ankle-length dresses or blouses and skirts. For male dancers, costumes might have consisted of a shirt with a kilt in the Irish clan plaid, or of a long coat, shirt, vest and briques (calf-length pants) with leggings.
Modern Irish dancers and dancers performing in traditional Celtic dance wear several different costume styles. For Traditional Celtic dance, female dancers wear blouses and long skirts while the male dancers perform with traditional shirt and kilt.
Modern Irish female dancers perform in beautiful short dresses in bright colors, mostly always with their arms fully covered. Modern Irish male dancers perform in trousers and a shirt with a colorful sash tied at the waist.
Shoes for male Irish dancers depend on the type of dance they are performing. For Flat Down step dancing, shoes have metal cleats on the toes and heels. For Ballet Up dance, shoes for males have soft soles.
Female Irish dancers wear two basic types of shoes. For Ballet Up steps, they wear black leather "Ghillies," which have soft soles for flexibility; the soft leather helps dancers perform steps either on the balls of the feet or on the tips of their toes.
For Flat Down step dances, the shoe is an oxford style with a thick heel and a thick frontal sole, each with a metal cleat attached. The oxford is usually black leather and has laces and a leather strap to secure the shoe to the foot.
Irish Dance Styles & Types
In total, there are six Irish dance styles. However, it is equally important to note that within the six styles there are basically only two technique families, known as Ballet Up and Flat Down. These describe how Irish dancers use their feet in the six styles.
Ballet up describes a balletic style where toes are pointed and steps are performed high on the balls of the feet or on tips of the toes. Body weight is lifted upward from the floor.
Flat Down describes a technique that relies more on the use of the heels in a flat, gliding motion. Body weight sinks downward into the floor to emphasize the sound of the metal cleats.
The six Irish dance styles include:
Traditional Irish Step Dancing - only the legs and feet move, in Flat Down technique
Modern Irish Step Dancing - full body movement, with Ballet Up technique
Irish Set Dancing - with Flat Down technique
Irish Ceili Dancing - with Ballet Up technique
Irish Sean Nos Dancing - with Flat Down technique
Irish Two Hand Dancing - with Flat Down technique
Traditional Irish Step
Traditional Irish Step dancing is performed by male and female dancers in long lines, circles, squares or as partnered reels. Traditional Irish Step Dancing consists of dances set to traditional Irish music with a fast tempo, to which dancers are required to perform sets of steps.
For example, two groups of dancers face opposite each other and shuffle, hop, jump, tap and stamp to the music as they move toward each other. Dancers then move between the dancers of the opposite line and then back to their original position. This is often referred to as a "competition" line dance.
Modern Irish Step Dancing
Modern Irish Step dancing has female dancers performing Ballet Up dance movements like leg swings, hopping and jumping, or sashaying to the music. The female dancers perform in soft ghillies while the male dancers are heard tapping in Oxford tap shoes to the music.
Irish Set Dancing
Irish Set Dancing, as its name implies, consists of dances performed in "sets." For example, a performance of Irish Set Dances may be part of a whole choreographed dance performance that is broken up into several separate parts. The set usually requires couples dancing in four sets.
The Set Dance begins with all four couples dancing to the same choreography. This is followed by each couple performing the same sets as individual couples.
Irish Ceili Dancing
Irish Ceili (pronounced "kay-lee") Dancing is a very traditional dance form. It originated in the 1500s and is always performed to traditional Irish music. The Ceili Dances consist of quadrilles, reels, jigs and long or round dances. These are the most traditional native Irish folk dances.
Irish Sean Nos Dancing
Irish Sean Nos Dancing is one of the oldest of the traditional Irish dance styles, and the only one performed as a solo. It differs from other Irish dances in that it allows free movement of the arms and is danced flat down, with the weight heavy on the accented beat of the music.
Sean Nos Dancing is the only Irish dance that also allows the solo dancer to improvise the choreography simultaneously as the dance is performed. The taps consist of shuffles and brushes as the dancer moves across the floor.
Irish Two Hand Dancing
This style of Irish Dance was a predominant part of Irish socializing. It is performed much like Irish Set Dancing, with the exception that it is danced to polkas, Irish hornpipes, waltzes, and jigs. Like Irish Set Dancing, it is performed by couples with specific choreographic dance patterns, although in Irish Two Hand Dancing the patterns are repeated.
In Irish Two Hand Dancing couples dance in a relaxed style while they tap their feet in shuffling, hopping and spinning motion. By all appearances, when Irish Two Hand Dancing is performed on a large dance floor, the couples seem to be gliding along as they dance.
Basic Irish Dance Steps
Ballet Up styles of Irish dance rely on several uniformly performed steps. The first comes from the ballet step, "chasse," which means to "chase." In this step the Irish dancer steps with the right foot while the left foot "chases" the right in three counts. This is often called the "1-2-3."
Another step borrowed from ballet is the "cabriole," in which the dancer leaps into the air while the left calf beats under the right calf, which is extended forward in the air. There are several other steps that require the dancer to perform full or half turns.
In Flat Down Irish dance steps, the dancer's foot strikes the floor in a twisting shuffle of the right foot while hopping into the air with the left foot.
There are also combinations of Irish dance steps that include the "1-2-3", shuffle, stamping the whole foot and tapping one toe behind the other foot that holds body weight.
Although traditional Irish dance limits movement of the arms, today's modern Irish dancers are seen starting a dance routine with their hands on their hips and using certain movements of the arms that coordinate with music for interpretation of choreography.
Irish dancers range from preschool age to adult. There are numerous Irish Dance schools that teach traditional and modern Irish dance styles in the U.S. and Europe. The syllabus for Irish Dance is less complex than that of ballet, although several Irish steps originate from ballet.
Irish Dance is a combination of ballet and tap dancing, although it can be said that tap dancing itself originated from the Irish flat down dance technique.
Unlike ballet, however, Ballet Up dance steps require dancers to place full weight on their toes in ghillies that are not blocked as ballet pointe shoes are.
In Flat down dance steps, the shoe is more flexible across the front of the shoe than a traditional tap dance shoe. This enables the Irish dancers to perform shuffling steps with more speed.
Irish Dance Today
Irish Dance was reintroduced to international audiences with "Riverdance," composed by Bill Whelan.
The first performance was in 1995 in Dublin, starring the now famous Irish dancer and Irish Dance choreographer Michael Flatley.
Although it predominantly features Irish step dancing, "Riverdance" has a baroque style that incorporates other dance forms, such as flamenco and Russian folk dance. The end result for dance experts is that "Riverdance" provides insight into how dances are linked in technique and style.
It has since been performed as a touring Irish Dance show in New York City and at the Vatican for Pope Francis.
In addition to "Riverdance," Michael Flatley choreographed and starred in his first full-length Irish Dance show, "Lord of the Dance." This was followed by "Feet of Flames" and "Celtic Tiger Live." These are modern Irish dance shows that include both Ballet Up and Flat Down Irish dance techniques.
In "Feet of Flames," Michael Flatley performs a lengthy series of movements that seem to defy gravity, all while maintaining balance and musical timing.
In the Flatley shows, he also included traditional Irish songs in the Gaelic language as well as actual story lines for each of his Irish dance shows. For example, in "Lord of the Dance," the story line has both a romantic and a fairy tale plot that includes a whimsical fairy piper.
As a result of the addition of a Gaelic singer and two talented Irish fiddlers in these shows, similar Irish entertainment emerged from these performances, such as "Celtic Woman" and "Irish Tenors."
All of these Irish performances feature extraordinary dance talent and show the extreme skill needed to maintain Irish dance choreography, as well as a semblance of acting talent.
There are also Irish Dance Championships that encourage students of Irish Dance to take part in competitions for awards for their dance techniques, skills and choreography.
Conclusion
There is no doubt Irish Dance captures the attention of audiences wherever it is performed. There are also many Irish societies and organizations that help promote Irish dance performances like the Ancient Order of Hibernians, Milwaukee St. Patrick's Day Parade, Friendly Sons of the Shillelagh and the World Irish Dance Organization. Today, Irish Dance is seen in the Thanksgiving Day Macy's Parade as well as the St. Patrick's Day parade in New York City and Chicago.
https://www.leeandlow.com/books/baby-flo/teachers_guide
Teacher's Guide - Baby Flo | Lee & Low Books
TEACHER'S GUIDE FOR: BABY FLO
Synopsis
Pint-sized dynamo “Baby Florence” Mills was singing and dancing just about as soon as she could talk and walk. She warbled a tune while her mama did laundry, especially the poignant songs her mother knew. Everywhere Flo went, she strutted through the streets of Washington, DC, with a high-steppin’ cakewalk.
Baby Flo’s family was very poor and lived in Goat Alley, a Washington DC, slum. Baby Flo’s mother did laundry for white residents of DC. One day, while Baby Flo was accompanying her mother on a laundry delivery to the butchers on L Street, the butchers requested that Baby Flo sing for them. Her small shop performance stunned them, and soon Baby Flo was making money to help support her family.
Flo’s mama and daddy knew they had a budding entertainer in the family, so they entered her in a talent contest. At the age of three, Baby Flo performed at the Bijou Theatre in Washington, DC, but was overcome by stage fright. Flo would eventually go on to become an international superstar during the Harlem Renaissance—but first she had to overcome shyness and discover that winning wasn’t everything.
Determined never to let stage fright stand in her way again, Baby Flo worked hard to learn new songs and dances. By age six, she began winning medals for her cakewalk dancing, and her captivating performances attracted fans, including ambassadors and dignitaries. To support Baby Flo’s musical pursuits, her mother taught her songs and her father taught her new dances, including the buck-and-wing.
When the African American show The Sons of Ham came to town in 1903, Baby Flo competed in a buck-and-wing dance act. Her performance did not earn her first place, but did catch the eye of the Empire Theatre’s manager, who hired her to perform at every intermission of the show. Flo’s name even appeared on the front of the theater in lights, much to the delight of Flo and her father.
This is the spirited story of a spunky young girl learning to chase her dreams with confidence. A sensation in her time, Baby Flo is back, dancing and singing her way into hearts and history.
BACKGROUND
Florence Mills (from Author’s Note): Baby Flo was born in 1896 in Washington, DC. Her parents soon realized Florence had a natural talent for song and dance, and she made her stage debut at age 3 in 1899. Florence went on to perform for local politicians and ambassadors, to compete in numerous dance competitions, and eventually to be invited to dance and sing for a week as part of the popular black show The Sons of Ham. Later, she and her sisters performed a vaudeville act as The Mills Sisters, and in 1917 she joined a vaudeville group, the Tennessee Ten. It was here that Florence met U. S. Thompson, whom she married in 1921. Because of her race and despite her international success, Baby Flo faced segregated trains, rundown theatres, and stingy managers.
Florence later landed a role in the Broadway musical Shuffle Along, which is credited as being one of the key events that ushered in the Harlem Renaissance. She was immensely popular in Europe and her fans included many contemporaries, including Charlie Chaplin, Duke Ellington, Paul Robeson, Bill “Bojangles” Robinson, and Irving Berlin. On November 1, 1927, at the age of thirty-one, Florence Mills died from surgery complications after battling tuberculosis. No one has discovered film footage and there are no recordings of her songs. In 1943, Duke Ellington dedicated his song “Black Beauty” to Florence Mills.
Historical Accuracy (from Author’s Note): Reliable information about Florence Mills’s early years is limited, and some details and dialogue have been imagined for storytelling purposes. In the interest of accuracy, it should be noted that Florence’s vocal debut took place not in a butcher shop, as portrayed here, but in a less savory establishment. The stage name of “Mills,” incidentally, was given to Florence when she was four years old. Her parents presumably did not think a black woman could make it in show business with the last name of Winfrey!
Cakewalk Dance: The dance style originated on Southern plantations and involves a male/female couple. Enslaved people developed the cakewalk dance to mock plantation owners and white upper classes. According to the blog Edwardian Promenade, the dance was adopted into minstrel shows and eventually lost its satire. Instead of making fun of pompous ballroom dancers, the exaggerated style of the cakewalk was perceived by audiences as African Americans attempting to emulate whites. Later, leading up to the early jazz period and Harlem Renaissance, a few African American dancers attempted to reclaim the cakewalk and elevate its stature among white and African American audiences. The cakewalk is credited with being the first crossover dance from African Americans to white culture. For more information on the history of the cakewalk dance, check out the blog Edwardian Promenade. Slate magazine also offers an in-depth explanation of how “cakewalk” became synonymous with “easy.” For an example of the dance steps, the Library of Congress has posted a cakewalk performance recorded in 1903.
Buck-and-Wing Dance: This is a solo dance style and type of tap dancing that began circa the nineteenth century. It was also popular in minstrel and vaudeville performances. Buck-and-wing dancing originated in Europe as British and Irish clogging, but evolved into a distinct dance form in North America. African American dancers fused the clogging steps with African American rhythms and footwork. For a history of tap dancing in North America, be sure to check out TheatreDance.com. PBS also has a three-part documentary on the impact of African American dance on American culture with a powerful essay, “From Slave Ships to Center Stage,” that chronicles African American dance forms in early American history. Finally, Duke University’s Professor Thomas F. DeFrantz demonstrates the “buck-and-wing” and other nineteenth century African American dances.
Segregation of African American Performers and Audiences: Outright racial segregation occurred in all parts of society in the United States through the 1960s, including housing, education, politics, food service, and entertainment. It was “a system derived from the efforts of white Americans to keep African Americans in a subordinate status by denying them equal access to public facilities and ensuring that blacks lived apart from whites” as defined by Steven F. Lawson of Rutgers University. Minstrel and vaudeville shows opened up pathways for African Americans to perform on stage, and later African Americans were able to claim the stage in new, bolder, and more empowered art forms. However, even African American entertainers who were popular among both black and white audiences in the early twentieth century faced severe segregation off stage and after shows, including being denied access to restaurants, hotels, sleeping cars on trains, and theater after-parties.
BEFORE READING
Prereading Focus Questions (Reading Standards, Craft & Structure, Strand 5 and Integration of Knowledge & Ideas, Strand 7)
Before introducing this book to students, you may wish to develop background knowledge and promote anticipation by posing questions such as the following:
1. Take a look at the front and back cover. Take a picture walk. Ask students to make a prediction. Do you think this book will be fiction or nonfiction? What makes you think so? What clues do the author and illustrator give to help you know whether this book will be fiction or nonfiction?
2. What do you know about stories that are biographies? What kinds of things happen in biographies? What are some things that will not happen in biographies? Why do authors write biographies? How do you think their reasons differ from those of authors who write fiction? What are some of the characteristics of a biography?
3. What do you know about segregation in the United States in the early 1900s? How might segregation affect our main character, Baby Flo?
4. What did people do for fun and entertainment in the early 1900s? What is it like to go to a show at a theater? How would you describe the stage and audience’s seats? What plays or performances have you seen? How is a theater where you see plays different from a movie theater?
5. Why do people like to dance? What are some popular dances today? What are some ways your body moves when you dance? Who are some famous dancers today?
6. Have you ever performed in front of an audience? How does it feel to be in front of a group of people? What advice would you give someone who is shy or scared?
7. Why do you think I chose this book for us to read today?
Exploring the Book
(Reading Standards, Craft & Structure, Strand 5, Key Ideas & Details, Strand 1, and Integration of Knowledge & Ideas, Strand 7)
Read and talk about the title of the book. (You might also want to read students the subtitle that appears on the first page of the book.)
Ask students what they think the title means. Then ask them what they think this book will most likely be about and who the book might be about. What places might be talked about in the text? What do you think might happen? What information do you think you might learn? What makes you think that?
Take students on a book walk and draw attention to the following parts of the book: front and back covers, title page, dedications and front author’s note, text, illustrations, and author’s note with photographs.
Setting a Purpose for Reading
(Reading Standards, Key Ideas & Details, Strands 1–3)
Have students read to find out who Baby Flo was, why she was an important person in United States history, and how she became a famous entertainer. Encourage students to consider why the author, Alan Schroeder, would want to share this story with young readers.
VOCABULARY
(Language Standards, Vocabulary Acquisition & Use, Strands 4–6)
The story contains several content-specific and academic words and phrases that may be unfamiliar to students. Based on students’ prior knowledge, review some or all of the vocabulary below. Encourage a variety of strategies to support students’ vocabulary acquisition: look up and record word definitions from a dictionary, write the meaning of the word or phrase in their own words, draw a picture of the meaning of the word, create a specific action for each word, list synonyms and antonyms, and write a meaningful sentence that demonstrates the definition of the word.
AFTER READING
Discussion Questions
After students have read the book, use these or similar questions to generate discussion, enhance comprehension, and develop appreciation for the content. Encourage students to refer to passages and illustrations in the book to support their responses. To build skills in close reading of a text, students should cite evidence with their answers.
Literal Comprehension
(Reading Standards, Key Ideas & Details, Strands 1 and 3)
1. What did you learn about Goat Alley in Washington, DC? What hardships did Florence face at home?
2. Talk about the first time Florence sang in public. For whom did she sing? How did people react? Why did they react that way?
3. What was so important about Florence’s performance at the butcher shop? What idea did her mother get from their visit?
4. How did her parents help Florence?
5. How did Florence help her family?
6. Describe the first time Florence sang at the Bijou Theatre. What happened? How did she feel? How did she handle singing for a crowd? How did the crowd react to her?
7. How does Florence’s father feel about her and her talents? What does he tell her that shows this?
8. What was The Sons of Ham? Why was this dance contest a big deal? What did Florence do to prepare?
9. How did Florence learn the buck-and-wing dance?
10. How did the contest end? How did Florence feel about the result? How did her father feel? How did the theater manager feel? How do you know?
11. How much money was Florence offered to be part of The Sons of Ham? Explain why this was an important accomplishment for Florence.
12. What else happened when Florence got this performance opportunity? Why was this important?
13. What were Baby Flo and her father excited to see at the end of the story? Why?
Extension/Higher Level Thinking
(Reading Standards, Key Ideas & Details, Strand 2 and 3 and Craft & Structure, Strand 6)
1. Why did Florence’s mother agree to wash other people’s clothes? Why did she have to use lots of bleach and scrub hard when she washed clothes like the butchers’ aprons? What does this tell you about Florence’s family?
2. How did Florence’s parents feel about her special talents? How do you know?
3. What did the minister mean when he said, “I believe the devil’s in your feet!”? How do you know? Why would he say that?
4. Why was the shiny bracelet important to Florence? How did she behave that let us know the bracelet was important to her?
5. How did Florence’s parents encourage her? How do you think they felt when they saw her perform? Why would they want her to become an entertainer?
6. What does it mean to have your name on the marquee of a theater? Why was that an important goal for Florence?
7. How did Florence feel about competing in The Sons of Ham contest? How do you know? What did she ask her father that showed she felt this way? How was this contest different from other contests in which she had competed?
8. What is the theme/author’s message of the story? What does the author, Alan Schroeder, want you to learn through Florence Mills’ story? Why is Florence Mills a good role model?
9. Before Florence Mills became a famous entertainer, she had to overcome several obstacles. What obstacles did she face? How did she overcome them?
10. How would the story be different if Florence’s parents weren’t supportive of her dream or weren’t available to help her? Do you think she would have been able to achieve success on her own? Why or why not?
11. Text-to-Text and Text-to-World Connections: Which other characters or real-life famous people does Florence Mills remind you of? How are their stories similar? How are their stories different?
12. Florence’s parents changed her last name from Winfrey to Mills? How did they think the new name would that make a difference? What does this suggest about the times she lived in?
13. Why did the manager pick Florence to perform at The Sons of Ham intermission even though she didn’t win first place in the contest?
14. How does this story teach about persistence and confidence?
Literature Circles
(Speaking & Listening Standards, Comprehension & Collaboration, Strands 1–3 and Presentation of Knowledge & Ideas, Strands 4–6)
If you use literature circles during reading time, students might find the following suggestions helpful in focusing on the different roles of the group members.
• The Questioner might use questions similar to the ones in the Discussion Question section of this guide.
• The Passage Locator might look for lines in the story that explain new vocabulary words.
• The Illustrator might illustrate a scene from Florence Mills’ life that is not already illustrated in the book.
• The Connector might find other books written about famous actors and actresses who were performing during the same time period as Florence Mills.
• The Summarizer might provide a brief summary of each part of the group’s reading and discussion points for each meeting.
• The Investigator might look for information about other African American women performers of the first half of the twentieth century.
*There are many resource books available with more information about organizing and implementing literature circles. Three such books you may wish to refer to are: GETTING STARTED WITH LITERATURE CIRCLES by Katherine L. Schlick Noe and Nancy J. Johnson (Christopher-Gordon, 1999), LITERATURE CIRCLES: VOICE AND CHOICE IN BOOK CLUBS AND READING GROUPS by Harvey Daniels (Stenhouse, 2002), and LITERATURE CIRCLES RESOURCE GUIDE by Bonnie Campbell Hill, Katherine L. Schlick Noe, and Nancy J. Johnson (Christopher-Gordon, 2000).
Reader’s Response
(Writing Standards, Text Types & Purposes, Strands 1–3 and Production & Distribution of Writing, Strands 4–6)
(Reading Standards, Key Ideas & Details, Strands 1–3, Craft & Structure, Strands 4–6, and Integration of Knowledge & Ideas, Strands 7–9)
Use the following questions and writing activities to help students practice active reading and personalize their responses to the book. Suggest that students respond in reader’s response journals, essays, or oral discussion. You may also wish to set aside time for students to share and discuss their written work.
1. Read the author’s note at the end of the story with students. What were some of the challenges Florence Mills faced as an adult trying to make a name for herself in show business? How did some people treat her? Was this fair? Why or why not? Why are Florence Mills’ accomplishments remarkable? Cite evidence from the text to support your answer.
2. Which parts of the story did you connect to the most? Why? Was there ever a time you doubted yourself or were shy? How did you overcome it? Who helped you practice and believe in yourself? What advice would you give to someone who is feeling shy or who doesn’t have confidence in himself or herself?
3. What are some of the challenges that come with being a performer? What are some of the advantages? Is this a career you would want? Why or why not?
4. In the book, Florence’s parents are huge influences and factors in her success as an entertainer. Write about someone in your life who helps, encourages, or practices with you. How does that person make you feel? How have they helped you overcome your fears or a tough situation? Why do you think everyone needs someone who believes in him or her?
5. Have students write a book recommendation for Baby Flo explaining why they would or would not recommend this book to other students.
Strategies for English Language Learners
1. Assign ELL students to partner-read the story with strong English readers/speakers. Students can alternate reading between pages, repeat passages after one another, or listen to the more fluent reader. Students who speak Spanish can help with the pronunciations of the Spanish words and terms in the book.
2. Have each student write three questions about the story. Then let students pair up and discuss the answers to the questions.
3. Depending on students’ level of English proficiency, after the first reading:
• Review the illustrations in order and have students summarize what is happening on each page, first orally, then in writing.
• Have students work in pairs to retell either the plot of the story or key details. Then ask students to write a short summary, synopsis, or opinion about what they have read.
4. Have students give a short talk about what they admire about a character or central figure in the story.
5. The story contains several content-specific and academic words that may be unfamiliar to students. Based on students’ prior knowledge, review some or all of the vocabulary. Expose English Language Learners to multiple vocabulary strategies. Have students make predictions about word meanings, look up and record word definitions from a dictionary, write the meaning of the word or phrase in their own words, draw a picture of the meaning of the word, list synonyms and antonyms, create an action for each word, and write a meaningful sentence that demonstrates the definition of the word.
INTERDISCIPLINARY ACTIVITIES
(Introduction to the Standards, page 7: Students who are college and career ready must be able to build strong content knowledge, value evidence, and use technology and digital media strategically and capably)
Use some of the following activities to help students integrate their reading experiences with other curriculum areas. These can also be used for extension activities, for advanced readers, and for building a home-school connection.
Social Studies
(Reading Standards, Integration of Knowledge & Ideas, Strands 7 and 9)
1. Ask students to research other famous African American performers from that early to mid 1900s. (e.g.: Duke Ellington, Paul Robeson, and Bill “Bojangles” Robinson; they are all referenced in the Author’s Note). What were their lives like? How did they break into show business? What challenges and unfair treatment did they face due to the color of their skin? How did they overcome the challenges they faced? What contributions did they make to United States history and culture?
2. Invite students to research their city, a city near them, or Washington DC (where the story takes place) in the early 1900s. What was the city like a hundred years ago? What did people do for fun? What were popular jobs and occupations? Where did people live? What was school like? How were different groups of people treated? Compare and contrast the daily life and amenities to life today. How has the city changed?
3. Ask students to research current laws governing the employment of children in the United States. At what age are children allowed to work for pay? When are they not permitted to work? How many school absences can they take for work? How much should children get paid for their work? Why are there laws governing children’s employment? Do you think the laws are fair? Why or why not? How do the laws protect children? How are employment laws in entertainment, agricultural work, and other industries different from and similar to each other? To investigate these questions, check out the Department of Labor.
Math
(Mathematics Standards, Grade 2, Measurement & Data, Strand 7–8)
(Mathematics Standards, Grade 2, Operations & Algebraic Thinking, Strand 1)
(Mathematics Standards, Grade 3, Operations & Algebraic Thinking, Strand 3 and 8)
1. This book has several money references because Baby Flo quickly earned money for her street performances and contests. Use some of the scenes in the book to review the values of currency. For example, what is a dime? How much is it worth? How many pennies do you need to equal the value of a dime? How many nickels do you need to equal the value of a dime?
2. After her performance in the butchers’ shop, Florence earned $3.85. How many different ways can you make $3.85 using combinations of quarters, dimes, nickels, and pennies?
3. Baby Flo also lends itself to word problems. For example, Florence was promised twenty-five cents a night to perform at the Empire Theatre. If she performed every night for one week, how much money did she earn? How much would she earn in two weeks? Three weeks? And so on.
4. Today there are laws that protect children who perform onstage to help make sure they aren’t underpaid for their work and aren’t working instead of going to school. If Baby Florence Mills was performing today, she may have earned at least $50.00 an hour. If she performed seven hours in one week, how much money would Florence have earned? How much more money would she earn today than in the early 1900s?
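For teachers who want an answer key, the coin-combination and earnings questions above can be checked with a short script. This is a minimal sketch in Python (the function name and structure are illustrative, not from the guide):

```python
def count_ways(cents, coins=(1, 5, 10, 25)):
    """Number of coin combinations totaling `cents` (order doesn't matter)."""
    ways = [1] + [0] * cents
    for coin in coins:
        for amount in range(coin, cents + 1):
            ways[amount] += ways[amount - coin]
    return ways[cents]

# Activity 2: combinations of quarters, dimes, nickels, and pennies making $3.85
print(count_ways(385), "ways to make $3.85")

# Activity 3: twenty-five cents a night, seven nights a week
for weeks in (1, 2, 3):
    print(f"{weeks} week(s) at the Empire Theatre: ${0.25 * 7 * weeks:.2f}")

# Activity 4: $50.00 an hour for seven hours today, versus $1.75 for the same week then
today = 50.00 * 7
print(f"today: ${today:.2f}, difference: ${today - 1.75:.2f}")
```

The `count_ways` helper uses the standard coin-change counting recurrence, so it also works for the simpler dime-and-penny questions in Activity 1 by passing a smaller amount.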
Physical Education/Music
1. Ask students to research the famous dances Florence Mills performed, as mentioned in the book: the cakewalk and the buck-and-wing. Ask students to research music that was popular at the time when Florence Mills was performing, as well as the two songs Florence sang for the butchers. Have students work together in groups to perform these dances to music from the era and to share the songs with their classmates. See the Background section of this teachers’ guide for more information on the cakewalk and buck-and-wing dances. For a sample of the dance steps, the Library of Congress has posted a cakewalk performance recorded in 1903. Duke University’s Professor Thomas F. DeFrantz demonstrates the “buck-and-wing” and other nineteenth century African American dances.
2. Have students compare and contrast the cakewalk or buck-and-wing steps to a modern-day dance. Where did the dances originate? What makes them special? How are the dances similar? How are they different?
3. Invite small groups of students to create their own dances, make up new moves, name their dances, and teach them to their classmates. During recess or after school, allow students to host a small talent show for their classmates or other students in the school.
Home-School Connection
(Reading Standards, Integration of Knowledge & Ideas, Strands 7 and 9)
(Speaking & Listening Standards, Comprehension & Collaboration, Strands 1–3)
(Writing Standards, Text Types & Purposes, Strand 2 and Research to Build & Present Knowledge, Strand 7)
1. Encourage students to interview their parents, grandparents, or guardians. When was a time they felt shy or experienced stage fright? What caused them to feel that way? How did they overcome these feelings? What advice do they have for someone who is experiencing stage fright?
2. Encourage students to interview their parents, grandparents, or guardians. Ask them to describe a time when they performed in front of a large crowd. What activity were they doing (sports, art, music, theater, comedy, debate, dance, etc.)? How did they prepare and practice? Who helped them practice? How did their families feel when they performed? How did they themselves feel about the whole experience?
3. Encourage students to interview their parents, grandparents, or guardians. When was there a time they experienced prejudice or witnessed prejudice toward someone else? How did it make them feel? How did they overcome that obstacle? What changes have they seen since they were younger or hope to see in the future to make the world a fairer, more just place?
4. Invite students to work with their parents, grandparents, or guardians to research other famous child entertainers of today or from years past. What was that child star famous for? What challenges did she or he face? What happened to her or him when she or he grew up? What legacy did the entertainer leave behind?
5. Have students document a song or dance on video that their parents, grandparents, or guardians teach them. With the adults’ permission, allow students to share the song or dance in class and teach the steps or words to classmates.
6. In the story, Florence’s parents are major influences on her success as an entertainer. Have students write about someone in their lives who helps, encourages, or practices with them. How does that person make them feel? How has that person helped them overcome their fears or a tough situation? Ask students why they think everyone needs someone who believes in them.
|
Buck-and-Wing Dance: This is a solo dance style and a type of tap dancing that dates to the nineteenth century. It was also popular in minstrel and vaudeville performances. Buck-and-wing dancing originated in Europe as British and Irish clogging, but evolved into a distinct dance form in North America. African American dancers fused the clogging steps with African American rhythms and footwork. For a history of tap dancing in North America, be sure to check out TheatreDance.com. PBS also has a three-part documentary on the impact of African American dance on American culture with a powerful essay, “From Slave Ships to Center Stage,” that chronicles African American dance forms in early American history. Finally, Duke University’s Professor Thomas F. DeFrantz demonstrates the “buck-and-wing” and other nineteenth-century African American dances.
Segregation of African American Performers and Audiences: Outright racial segregation occurred in all parts of society in the United States through the 1960s, including housing, education, politics, food service, and entertainment. It was “a system derived from the efforts of white Americans to keep African Americans in a subordinate status by denying them equal access to public facilities and ensuring that blacks lived apart from whites” as defined by Steven F. Lawson of Rutgers University. Minstrel and vaudeville shows opened up pathways for African Americans to perform on stage, and later African Americans were able to claim the stage in new, bolder, and more empowered art forms. However, even African American entertainers who were popular among both black and white audiences in the early twentieth century faced severe segregation off stage and after shows, including being denied access to restaurants, hotels, sleeping cars on trains, and theater after-parties.
BEFORE READING
Prereading Focus Questions (Reading Standards, Craft & Structure, Strand 5 and Integration of Knowledge & Ideas, Strand 7)
Before introducing this book to students, you may wish to develop background knowledge and promote anticipation by posing questions such as the following:
1. Take a look at the front and back cover.
|
no
|
Radio
|
Did the War of the Worlds radio broadcast cause mass panic?
|
yes_statement
|
the "war" of the worlds "radio" "broadcast" "caused" "mass" "panic".. "mass" "panic" was "caused" by the "war" of the worlds "radio" "broadcast".
|
https://en.wikipedia.org/wiki/The_War_of_the_Worlds_(1938_radio_drama)
|
The War of the Worlds (1938 radio drama) - Wikipedia
|
The episode begins with an introductory monologue based closely on the opening of the original novel, after which the program takes on the format of an evening of typical radio programming being periodically interrupted by news bulletins. The first few bulletins interrupt a program of live music and are relatively calm reports of unusual explosions on Mars followed by a seemingly unrelated report of an unknown object falling on a farm in Grovers Mill, New Jersey. The crisis escalates dramatically when a correspondent reporting live from Grovers Mill describes creatures emerging from what is evidently an alien spacecraft. When local officials approach the aliens waving a flag of truce, the "monsters" respond by incinerating them and others nearby with a heat ray which the on-scene reporter describes in a panic until the audio feed abruptly goes dead. This is followed by a rapid series of news updates detailing the beginning of a devastating alien invasion and the military's futile efforts to stop it. The first portion of the episode climaxes with a live report from a rooftop in Manhattan, from where a correspondent describes citizens fleeing in panic from giant Martian "war machines" releasing clouds of poison smoke until he coughs and falls silent. Only then does the program take its first break, about thirty minutes after Welles's introduction.
The second portion of the show shifts to a conventional radio drama format that follows a survivor (played by Welles) dealing with the aftermath of the invasion and the ongoing Martian occupation of Earth. The final segment lasts for about sixteen minutes, and like the original novel, concludes with the revelation that the Martians have been defeated by microbes rather than by humans. The broadcast ends with a brief "out of character" announcement by Welles in which he compares the show to "dressing up in a sheet and jumping out of a bush and saying 'boo!'"
Welles's "War of the Worlds" broadcast has become famous for convincing some of its listeners that a Martian invasion was actually taking place due to the "breaking news" style of storytelling employed in the first half of the show. The illusion of realism was supported by the Mercury Theatre on the Air's lack of commercial interruptions, which meant that the first break in the drama came after all of the alarming "news" reports had taken place. Popular legend holds that some of the radio audience may have been listening to The Chase and Sanborn Hour with Edgar Bergen and tuned in to "The War of the Worlds" during a musical interlude, thereby missing the clear introduction indicating that the show was a work of science fiction. Contemporary research suggests that this happened only in rare instances.[2]: 67–69
In the days after the adaptation, widespread outrage was expressed in the media. The program's news-bulletin format was described as deceptive by some newspapers and public figures, leading to an outcry against the broadcasters and calls for regulation by the FCC. Welles apologized at a hastily-called news conference the next morning, and no punitive action was taken. The broadcast and subsequent publicity brought the 23-year-old Welles to the attention of the general public and gave him the reputation of an innovative storyteller and "trickster".[1][3]
The program's format is a simulated live newscast of developing events. The first two-thirds of the hour-long play is a contemporary retelling of events of the novel, presented as news bulletins interrupting programs of dance music. "I had conceived the idea of doing a radio broadcast in such a manner that a crisis would actually seem to be happening," said Welles, "and would be broadcast in such a dramatized form as to appear to be a real event taking place at that time, rather than a mere radio play."[5] This approach was similar to Ronald Knox's radio hoax Broadcasting the Barricades that was broadcast by the BBC in 1926,[6] which Welles later said gave him the idea for "The War of the Worlds".[a] A 1927 drama aired by Adelaide station 5CL depicted an invasion of Australia using the same techniques and inspired reactions similar to those of the Welles broadcast.[8]
Welles was also influenced by the Columbia Workshop presentations "The Fall of the City", a 1937 radio play in which Welles played the role of an omniscient announcer, and "Air Raid", an as-it-happens drama starring Ray Collins that aired October 27, 1938.[9]: 159, 165–166 Welles had previously used a newscast format for "Julius Caesar" (September 11, 1938), with H. V. Kaltenborn providing historical commentary throughout the story.[10]: 93
"The War of the Worlds" broadcast used techniques similar to those of The March of Time, the CBS news documentary and dramatization radio series.[11] Welles was a member of the program's regular cast, having first performed on it in March 1935.[12]: 74, 333 The Mercury Theatre on the Air and The March of Time shared many cast members and sound effects chief Ora D. Nichols.[2]: 41, 61, 63
Koch worked on adapting novels and wrote the first drafts for the Mercury Theatre broadcasts "Hell on Ice" (October 9), "Seventeen" (October 16),[9]: 164 and "Around the World in 80 Days" (October 23).[10]: 92 On October 24, he was assigned to adapt The War of the Worlds for broadcast the following Sunday night.[9]: 164
On the night of October 25, 36 hours before rehearsals were to begin, Koch telephoned Houseman in what the producer characterized as "deep distress": Koch said he could not make The War of the Worlds interesting or credible as a radio play, a conviction echoed by his secretary Anne Froelick, a typist and aspiring writer whom Houseman had hired to assist him. With only his own abandoned script for Lorna Doone to fall back on, Houseman told Koch to continue adapting the Wells fantasy. He joined Koch and Froelick to work on the script through the night. On the night of October 26, the first draft was finished on schedule.[4]: 392–393
On October 27, Stewart held a cast reading of the script, with Koch and Houseman making necessary changes. That afternoon, Stewart made an acetate recording without music or sound effects. Welles, immersed in rehearsing the Mercury stage production of Danton's Death scheduled to open the following week, played the record at an editorial meeting that night in his suite at the St. Regis Hotel. After hearing "Air Raid" on the Columbia Workshop earlier that same evening, Welles thought the "War of the Worlds" script was dull, and he advised the writers to add more news flashes and eyewitness accounts to create a stronger sense of urgency and excitement.[9]: 166
Houseman, Koch, and Stewart reworked the script that night,[4]: 393 increasing the number of news bulletins and using the names of real places and people whenever possible. On October 28, the script was sent to Davidson Taylor, executive producer for CBS, and the network legal department. Their response was that the script was too credible and its realism had to be toned down. As using the names of actual institutions could be actionable, CBS insisted on about 28 changes in phrasing.[9]: 167 "Under protest and with a deep sense of grievance we changed the Hotel Biltmore to a nonexistent Park Plaza, Transamerica Radio News[16] to Inter-Continental Radio News, the Columbia Broadcasting Building to Broadcasting Building," Houseman wrote.[4]: 393 "The United States Weather Bureau in Washington, D.C." was changed to "The Government Weather Bureau," "Princeton University Observatory" to "Princeton Observatory," "McGill University" in Montreal to "Macmillan University" in Toronto, "New Jersey National Guard" to "State Militia," "United States Signal Corps" to "Signal Corps," "Langley Field" to "Langham Field," and "St. Patrick's Cathedral" to "the cathedral."[9]: 167
On October 29, Stewart rehearsed the show with the sound effects team and gave special attention to crowd scenes, the echo of cannon fire, and the sound of boat horns in New York Harbor.[4]: 393–394
In the early afternoon of October 30, Bernard Herrmann and his orchestra arrived in the studio, where Welles had taken over production of that evening's program.[4]: 391, 398
To create the role of reporter Carl Phillips, Frank Readick went to the record library and repeatedly played the recording of Herbert Morrison's dramatic radio report of the Hindenburg disaster.[4]: 398 Stewart worked with Herrmann and the orchestra to sound like a dance band,[17] and became the person Welles later credited as being largely responsible for the quality of "The War of the Worlds" broadcast.[18]: 195
Welles wanted the music to play for unbearably long stretches of time.[19]: 159 The studio's emergency fill-in, a solo piano playing Debussy and Chopin, was heard several times. "As it played on and on," Houseman wrote, "its effect became increasingly sinister—a thin band of suspense stretched almost beyond endurance. That piano was the neatest trick of the show."[4]: 400 The dress rehearsal was scheduled for 6 pm.[4]: 391
"Our actual broadcasting time, from the first mention of the meteorites to the fall of New York City, was less than forty minutes," wrote Houseman. "During that time, men travelled long distances, large bodies of troops were mobilized, cabinet meetings were held, savage battles fought on land and in the air. And millions of people accepted it—emotionally if not logically."[4]: 401
"The War of the Worlds" begins with a paraphrase of the beginning of the novel, updated to contemporary times. The announcer introduces Orson Welles:
We know now that in the early years of the 20th century, this world was being watched closely by intelligences greater than man's and yet as mortal as his own. We know now that as human beings busied themselves about their various concerns, they were scrutinized and studied, perhaps almost as narrowly as a man with a microscope might scrutinize the transient creatures that swarm and multiply in a drop of water. With infinite complacence, people went to and fro over the earth about their little affairs, serene in the assurance of their dominion over this small spinning fragment of solar driftwood which by chance or design man has inherited out of the dark mystery of Time and Space. Yet across an immense ethereal gulf, minds that are to our minds as ours are to the beasts in the jungle, intellects vast, cool and unsympathetic, regarded this earth with envious eyes and slowly and surely drew their plans against us. In the 39th year of the 20th century came the great disillusionment. It was near the end of October. Business was better. The war scare was over. More men were back at work. Sales were picking up. On this particular evening, October 30th, the Crossley service estimated that 32 million people were listening in on radios...[4]: 394–395 [21]
The radio program begins as a simulation of a normal evening radio broadcast featuring a weather report and music by "Ramon Raquello and His Orchestra" live from a local hotel ballroom. After a few minutes, the music is interrupted by several news flashes about strange gas explosions on Mars. An interview is arranged with reporter Carl Phillips and Princeton-based astronomy professor Richard Pierson, who dismisses speculation about life on Mars. The musical program returns temporarily but is interrupted again by news of a strange meteorite landing in Grovers Mill, New Jersey. Phillips and Pierson are dispatched to the site, where a large crowd has gathered. Phillips describes the chaotic atmosphere around the strange cylindrical object, and Pierson admits that he does not know exactly what it is, but that it seems to be made of an extraterrestrial metal. The cylinder unscrews, and Phillips describes the tentacled, horrific "monster" that emerges from inside. Police officers approach the Martian waving a flag of truce, but it and its companions respond by firing a heat ray, which incinerates the delegation and ignites the nearby woods and cars as the crowd screams. Phillips's shouts about incoming flames are cut off mid-sentence, and after a moment of dead air, an announcer explains that the remote broadcast was interrupted due to "some difficulty with [their] field transmission".
After a brief "piano interlude", regular programming breaks down as the studio struggles with casualty and fire-fighting updates. A shaken Pierson speculates about Martian technology. The New Jersey state militia declares martial law and attacks the cylinder; a captain from their field headquarters lectures about the overwhelming force of properly-equipped infantry and the helplessness of the Martians until a tripod rises from the pit, which obliterates the militia. The studio returns and describes the Martians as an invading army. Emergency response bulletins give way to damage and evacuation reports as thousands of refugees clog the highways. Three Martian tripods from the cylinder destroy power stations and uproot bridges and railroads, reinforced by three others from a second cylinder that landed in the Great Swamp near Morristown. The Secretary of the Interior reads a brief statement trying to reassure a panicked nation, after which it is reported that more explosions have been observed on Mars, indicating that more war machines are on the way.
A live connection is established to a field artillery battery in the Watchung Mountains. Its gun crew damages a machine, resulting in a release of poisonous black smoke, before fading into the sound of coughing. The lead plane of a wing of bombers from Langham Field broadcasts its approach and remains on the air as their engines are burned by the heat ray and the plane dives on the invaders in a last-ditch suicide attack. Radio operators go active and fall silent: although the bombers manage to destroy one machine, the remaining five spread black smoke across the Jersey Marshes into Newark.
Eventually, a news reporter broadcasting from atop the Broadcasting Building describes the Martian invasion of New York City – "five great machines" wading the Hudson "like [men] wading through a brook", black smoke drifting over the city, people diving into the East River "like rats", others in Times Square "falling like flies". He reads a final bulletin stating that Martian cylinders have fallen all over the country, then describes the smoke approaching his location until he coughs and apparently collapses, leaving only the sounds of the panicked city in the background. A ham radio operator is heard calling, "2X2L calling CQ, New York. Isn't there anyone on the air? Isn't there anyone on the air? Isn't there... anyone?"
After a few seconds of silence, announcer Dan Seymour broke in with a standard programming statement:
You are listening to a CBS presentation of Orson Welles and the Mercury Theatre on the Air, in an original dramatization of The War of the Worlds by H. G. Wells. The performance will continue after a brief intermission. This is the Columbia Broadcasting System.
After the break, the remainder of the program is performed in a much more conventional radio drama format of dialogue and monologue. It focuses on Professor Pierson, who survives the attack on Grovers Mill and is attempting to make contact with other humans. In Newark, he encounters an opportunistic militiaman who holds fascist ideals and declares his intent to use Martian weaponry to take control of both species; saying that he wants no part of "his world", Pierson leaves the stranger with his delusions. His journey ends in the ruins of New York City, where he discovers that the Martians have died – as with the novel, they fell victim to earthly pathogenic germs, to which they had no immunity. Life returns to normal, and Pierson finishes writing his recollections of the invasion and its aftermath.
After the conclusion of the play, Welles reassumed his role as host and told listeners that the broadcast was intended to be merely a "holiday offering", the equivalent of the Mercury Theater "dressing up in a sheet, jumping out of a bush and saying, 'Boo!'" and stated that while they had "annihilated the world and utterly destroyed CBS before your very ears... you will be relieved I hope to hear that both institutions are still open for business." He ended the program by assuring listeners that, "If your doorbell rings and there's nobody there, that was no Martian; it's Halloween."[23] Popular mythology holds that the disclaimer was hastily added to the broadcast at the insistence of CBS executives to quell the supposed panic inspired by the program, but it was actually added by Welles at the last minute, and he delivered it over Taylor's objections, who feared that reading it on the air would expose the network to legal liability.[2]: 95–96
Radio programming charts in Sunday newspapers listed "The War of the Worlds". On October 30, 1938, The New York Times included the show in its "Leading Events of the Week" ("Tonight – Play: H. G. Wells' 'War of the Worlds'") and published a photograph of Welles with some of the Mercury players, captioned, "Tonight's show is H. G. Wells' 'War of the Worlds'".[9]: 169
Announcements that "The War of the Worlds" is a dramatization of a work of fiction were made on the full CBS network at four points during the broadcast: at the beginning, before the middle break, after the middle break, and at the end.[24]: 43 The middle break was delayed 10 minutes to accommodate the dramatic content.[10]: 94
Another announcement was repeated on the full CBS network that same evening at 10:30 pm, 11:30 pm, and midnight: "For those listeners who tuned in to Orson Welles's Mercury Theatre on the Air broadcast from 8 to 9 pm Eastern Standard Time tonight and did not realize that the program was merely a modernized adaptation of H. G. Wells' famous novel War of the Worlds, we are repeating the fact which was made clear four times on the program, that, while the names of some American cities were used, as in all novels and dramatizations, the entire story and all of its incidents were fictitious."[24]: 43–44 [25]
The show went on the air shortly after 8:00 pm ET. At 8:32, Houseman noticed Taylor step out of the studio to take a telephone call in the control room; Taylor returned four minutes later looking "pale as death", having been ordered to immediately interrupt "The War of the Worlds" broadcast with an announcement of the program's fictional content. By the time the order was given, the fictional news reporter played by Ray Collins was choking on poison gas as the Martians overwhelmed New York and the program was less than a minute away from its first scheduled break, which proceeded as previously planned.[4]: 404
Actor Stefan Schnabel recalled sitting in the anteroom after finishing his on-air performance. "A few policemen trickled in, then a few more. Soon, the room was full of policemen and a massive struggle was going on between the police, page boys, and CBS executives, who were trying to prevent the cops from busting in and stopping the show. It was a show to witness."[26]
During the sign-off theme, the phone began ringing. Houseman picked it up and the furious caller announced he was mayor of a Midwestern town, where mobs were in the streets. Houseman hung up quickly, "[f]or we were off the air now and the studio door had burst open."[4]: 404
The following hours were a nightmare. The building was suddenly full of people and dark-blue uniforms. Hustled out of the studio, we were locked into a small back office on another floor. Here we sat incommunicado while network employees were busily collecting, destroying, or locking up all scripts and records of the broadcast. Finally, the Press was let loose upon us, ravening for horror. How many deaths had we heard of? (Implying they knew of thousands.) What did we know of the fatal stampede in a Jersey hall? (Implying it was one of many.) What traffic deaths? (The ditches must be choked with corpses.) The suicides? (Haven't you heard about the one on Riverside Drive?) It is all quite vague in my memory and quite terrible.[4]: 404
Paul White, head of CBS News, was quickly summoned to the office, "and there bedlam reigned", he wrote:
The telephone switchboard, a vast sea of light, could handle only a fraction of incoming calls. The haggard Welles sat alone and despondent. "I'm through," he lamented, "washed up." I didn't bother to reply to this highly inaccurate self-appraisal. I was too busy writing explanations to put on the air, reassuring the audience that it was safe. I also answered my share of incessant telephone calls, many of them from as far away as the Pacific Coast.[27]: 47–48
After "The War of the Worlds" broadcast, photographers lay in wait for Welles at the all-night rehearsal for Danton's Death at the Mercury Theatre (October 31, 1938)
Because of the crowd of newspaper reporters, photographers, and police, the cast left the CBS building by the rear entrance. Aware of the sensation the broadcast had made, but not its extent, Welles went to the Mercury Theatre where an all-night rehearsal of Danton's Death was in progress. Shortly after midnight, one of the cast, a late arrival, told Welles that news about "The War of the Worlds" was being flashed in Times Square. They immediately left the theatre, and standing on the corner of Broadway and 42nd Street, they read the lighted bulletin that circled the New York Times building: ORSON WELLES CAUSES PANIC.[9]: 172–173
Some listeners heard only a portion of the broadcast and, in the tension and anxiety prior to World War II, mistook it for a genuine news broadcast.[28] Thousands of them shared the false reports with others or called CBS, newspapers, or the police to ask if the broadcast was real. Many newspapers assumed that the large number of phone calls and the scattered reports of listeners rushing about or fleeing their homes proved the existence of a mass panic, but such behavior was never widespread.[2]: 82–90, 98–103 [29][30][31]
Future Tonight Show host Jack Paar had announcing duties that night for Cleveland CBS affiliate WGAR. As panicked listeners called the studio, he attempted to calm them on the phone and on air by saying: "The world is not coming to an end. Trust me. When have I ever lied to you?" When the listeners started to accuse Paar of "covering up the truth", he called WGAR's station manager for help. Oblivious to the situation, the manager advised Paar to calm down and said that it was "all a tempest in a teapot".[32]
In a 1975 interview with radio historian Chuck Schaden, radio actor Alan Reed recalled being one of several actors recruited to answer phone calls at CBS's New York headquarters.[33]
In Concrete, Washington, a short circuit at the Superior Portland Cement Company's substation knocked out phone lines and electricity during the broadcast. Residents were unable to call neighbors, family, or friends to calm their fears. Reporters who heard of the coincidental blackout sent the story over the newswire, and Concrete became known worldwide.[34]
Welles takes questions from reporters at a press conference the day after the broadcast, on October 31, 1938
Welles continued with the rehearsal of Danton's Death, leaving shortly after the dawn of October 31. He was operating on three hours of sleep when CBS called him to a press conference. He read a statement that was later printed in newspapers nationwide and took questions from reporters:[9]: 173, 176
Question: Were you aware of the terror such a broadcast would stir up? Welles: Definitely not. The technique I used was not original with me. It was not even new. I anticipated nothing unusual.
Question: Should you have toned down the language of the drama? Welles: No, you don't play murder in soft words.
Question: Why was the story changed to put in names of American cities and government officers? Welles: H. G. Wells used real cities in Europe, and to make the play more acceptable to American listeners we used real cities in America. Of course, I'm terribly sorry now.[9]: 174 [35]
In its October 31, 1938, edition, the Tucson Citizen reported that three Arizona affiliates of CBS (KOY in Phoenix, KTUC in Tucson and KSUN in Bisbee) had originally scheduled a delayed broadcast of "The War of the Worlds" that night; CBS had shifted The Mercury Theater on the Air from Monday nights to Sunday nights on September 11, but the three affiliates preferred to keep the series in its original Monday slot so that it would not compete with NBC's top-rated Chase and Sanborn Hour. However, late that night, CBS contacted KOY and KTUC owner Burridge Butler and instructed him not to air the program the following night.[36]
Within three weeks, newspapers had published at least 12,500 articles about the broadcast and its impact,[24]: 61 [37] but the story dropped from the front pages after a few days.[1] Adolf Hitler referenced the broadcast in a speech in Munich on November 8, 1938.[2]: 161 Welles later remarked that Hitler cited the effect of the broadcast on the American public as evidence of "the corrupt condition and decadent state of affairs in democracy".[38][39]
Bob Sanders recalled looking outside the window and seeing a traffic jam in the normally quiet Grovers Mill, New Jersey, at the intersection of Cranbury and Clarksville Roads.[40][41][42]
Radio Digest reprinted the script of "The War of the Worlds" "as a commentary on the nervous state of our nation after the Pact of Munich" – prefaced by an editorial cartoon by Les Callan of The Toronto Star (February 1939)
Later studies indicate that many people missed the repeated notices about the broadcast being fictional, partly because The Mercury Theatre on the Air, an unsponsored CBS cultural program with a relatively small audience, ran at the same time as the NBC Red Network's popular Chase and Sanborn Hour featuring ventriloquist Edgar Bergen. At the time, many Americans assumed that a significant number of Chase and Sanborn listeners changed stations when the first comic sketch ended and a musical number by Nelson Eddy began, tuning in to "The War of the Worlds" after the opening announcements. Historian A. Brad Schwartz, after studying hundreds of letters from people who heard "The War of the Worlds" as well as contemporary audience surveys, concluded that very few people frightened by Welles's broadcast had tuned out Bergen's program. "All the hard evidence suggests that The Chase & Sanborn Hour was only a minor contributing factor to the Martian hysteria," he wrote. "...in truth, there was no mass exodus from Charlie McCarthy to Orson Welles that night."[2]: 67–69 Because the broadcast was unsponsored, Welles and company could arbitrarily schedule breaks instead of arranging them around advertisements; as a result, the only notices that the broadcast was fictional came at the start of the broadcast and about 40 and 55 minutes into it.
A study by the Radio Project discovered that less than one third of frightened listeners understood the invaders to be aliens; most thought that they were listening to reports of a German invasion or of a natural catastrophe.[2]: 180, 191 [31] "People were on edge", wrote Welles biographer Frank Brady. "For the entire month prior to 'The War of the Worlds', radio had kept the American public alert to the ominous happenings throughout the world. The Munich crisis was at its height.... For the first time in history, the public could tune into their radios every night and hear, boot by boot, accusation by accusation, threat by threat, the rumblings that seemed inevitably leading to a world war."[9]: 164–165
CBS News chief Paul White wrote that he was convinced that the panic induced by the broadcast was a result of the public suspense generated before the Munich Pact. "Radio listeners had had their emotions played upon for days.... Thus they believed the Welles production even though it was specifically stated that the whole thing was fiction".[27]: 47
"The supposed panic was so tiny as to be practically immeasurable on the night of the broadcast. ... Radio had siphoned off advertising revenue from print during the Depression, badly damaging the newspaper industry. So the papers seized the opportunity presented by Welles’ program to discredit radio as a source of news. The newspaper industry sensationalized the panic to prove to advertisers, and regulators, that radio management was irresponsible and not to be trusted."[1]
Historical research suggests the panic was significantly less widespread than newspapers had indicated at the time.[43] "[T]he panic and mass hysteria so readily associated with 'The War of the Worlds' did not occur on anything approaching a nationwide dimension", American University media historian W. Joseph Campbell wrote in 2003. He quoted Robert E. Bartholomew, an authority on mass panic outbreaks, as having said that "there is a growing consensus among sociologists that the extent of the panic... was greatly exaggerated".[31]
Letter of complaint about the broadcast from the city manager of Trenton, New Jersey, to the Federal Communications Commission (October 31, 1938)
That position is supported by contemporary accounts. "In the first place, most people didn't hear [the show]," said Frank Stanton, later president of CBS.[1] Of the nearly 2,000 letters mailed to Welles and the Federal Communications Commission after "The War of the Worlds", currently held by the University of Michigan and the National Archives and Records Administration, roughly 27% came from frightened listeners or people who witnessed any panic. After analyzing those letters, Schwartz concluded that although the broadcast briefly misled a significant portion of its audience, not many of them fled their homes or otherwise panicked. The total number of protest letters sent to Welles and the FCC was also low in comparison with other controversial radio broadcasts of the period, suggesting that the audience was small and the fright severely limited.[2]: 82–93 [29]
Five thousand households were telephoned that night in a survey conducted by the C. E. Hooper company, the main radio ratings service at the time. Two percent of the respondents said they were listening to the radio play, and no one stated they were listening to a news broadcast. About 98% of respondents said they were listening to other radio programming (The Chase and Sanborn Hour was by far the most popular program in that timeslot) or not listening to the radio at all. Further shrinking the potential audience, some CBS network affiliates, including some in large markets such as Boston's WEEI, had pre-empted The Mercury Theatre on the Air in favor of local commercial programming.[1]
Ben Gross, radio editor for the New York Daily News, wrote in his 1954 memoir that the streets were nearly deserted as he made his way to the studio for the end of the program.[1] Houseman reported that the Mercury Theatre staff was surprised when they were finally released from the CBS studios to find life going on as usual in the streets of New York.[4]: 404 The writer of a letter that The Washington Post published later likewise recalled no panicked mobs in the capital's downtown streets at the time. "The supposed panic was so tiny as to be practically immeasurable on the night of the broadcast", media historians Jefferson Pooley and Michael J. Socolow wrote in Slate on its 75th anniversary in 2013; "Almost nobody was fooled".[1]
According to Campbell, the most common response said to indicate a panic was calling the local newspaper or police to confirm the story or seek additional information. That, he writes, is an indicator that people were not generally panicking or hysterical. "The call volume perhaps is best understood as an altogether rational response..."[31] Some New Jersey media and law enforcement agencies received up to 40% more telephone calls than normal during the broadcast.[44] AT&T Corporation telephone operators in New York City recalled in 1988 that "every light" on the "half block long" switchboard lit up after the broadcast stated that the Martians were crossing the George Washington Bridge, while operators in Princeton and Missoula, Montana, were asked what the invaders looked like. They described callers "crying and screaming", asking whether dead bodies were near the operators, "begging us to get connections to their families ... before the world came to an end". "The people believed it. They really believed it that night", one concluded.[45]
What a night. After the broadcast, as I tried to get back to the St. Regis where we were living, I was blocked by an impassioned crowd of news people looking for blood, and the disappointment when they found I wasn't hemorrhaging. It wasn't long after the initial shock that whatever public panic and outrage there was vanished. But, the newspapers for days continued to feign fury.
As it was late on a Sunday night in the Eastern Time Zone, where the broadcast originated, few reporters and other staff were present in newsrooms. Most newspaper coverage thus took the form of Associated Press stories, which were largely anecdotal aggregates of reporting from its various bureaus, giving the impression that panic had indeed been widespread. Many newspapers led with the Associated Press's story the next day.[31]
The Twin City Sentinel of Winston-Salem, North Carolina, pointed out that the situation could have been even worse if most people had not been listening to Bergen's show: "Charlie McCarthy last night saved the United States from a sudden and panicky death by hysteria."[47]
On November 2, 1938, the Australian newspaper The Age characterized the incident as "mass hysteria" and stated that "never in the history of the United States had such a wave of terror and panic swept the continent". Unnamed observers quoted by The Age commented that "the panic could have only happened in America."[48]
Editorialists chastised the radio industry for allowing that to happen. The response may have reflected newspaper publishers' fears that radio, to which they had lost some of the advertising revenue that was scarce enough during the Great Depression, would render them obsolete. In "The War of the Worlds", they saw an opportunity to cast aspersions on the newer medium: "The nation as a whole continues to face the danger of incomplete, misunderstood news over a medium which has yet to prove that it is competent to perform the news job," wrote Editor & Publisher, the newspaper industry's trade journal.[1][49]
William Randolph Hearst's papers called on broadcasters to police themselves, lest the government step in, as Iowa senator Clyde L. Herring proposed a bill that would have required all programming to be reviewed by the FCC prior to broadcast – it was never introduced. Others blamed the radio audience for its gullibility. Noting that any intelligent listener would have realized the broadcast was fictional, the Chicago Tribune opined, "it would be more tactful to say that some members of the radio audience are a trifle retarded mentally, and that many a program is prepared for their consumption." Other newspapers noted that anxious listeners had called their offices to learn if Martians were really attacking.[31]
Few contemporary accounts exist outside newspaper coverage of the mass panic and hysteria supposedly induced by the broadcast. Justin Levine, a producer at KFI in Los Angeles, wrote that "the anecdotal nature of such reporting makes it difficult to objectively assess the true extent and intensity of the panic".[50] Bartholomew saw it as more evidence that the panic was predominantly a creation of the newspaper industry.[51]
In a study published as The Invasion from Mars (1940), Princeton professor Hadley Cantril calculated that around six million people heard "The War of the Worlds" broadcast.[24]: 56 He estimated that 1.7 million listeners believed the broadcast was an actual news bulletin and, of those, 1.2 million people were frightened or disturbed.[24]: 58 However, Pooley and Socolow have concluded that Cantril's study had serious flaws. Its estimate of the program's audience is more than twice as high as any other at the time. Cantril himself conceded that, but argued that unlike Hooper, his estimate had attempted to capture the significant portion of the audience that did not have home telephones at that time. Since those respondents were contacted only after the media frenzy, Cantril admitted that their recollections could have been influenced by what they read in the newspapers. Claims that the show's audience and the ensuing panic were inflated by Chase and Sanborn listeners who turned to CBS during a commercial break or musical performance, missed the disclaimer at the beginning, and thus mistook "The War of the Worlds" for a real news broadcast are impossible to substantiate.[1]
Apart from his imperfect methods of estimating the audience and assessing the authenticity of their response, Pooley and Socolow found that Cantril made another error in categorizing audience reactions. Respondents had indicated a variety of reactions to the program, among them "excited", "disturbed", and "frightened". However, he grouped all of them with "panicked", failing to account for the possibility that, despite their reaction, listeners were still aware the broadcast was staged. "[T]hose who did hear it, looked at it as a prank and accepted it that way", recalled researcher Frank Stanton.[1]
Bartholomew admitted that hundreds of thousands were frightened, but called evidence of people taking action based on their fear "scant" and "anecdotal".[52] Contemporary news articles indicated that police received hundreds of calls in numerous locations, but stories of people doing anything more than calling authorities involved mostly only small groups; such stories were often reported by people who were panicking themselves.[31]
Later investigations found many of the panicked responses to have been exaggerated or mistaken. Cantril's researchers found that contrary to what had been claimed, no admissions for shock were made at a Newark hospital during the broadcast; hospitals in New York City similarly reported no spike in admissions that night. A few suicide attempts seem to have been prevented when friends or family intervened, but no record of a successful one exists. A Washington Post claim that a man died of a heart attack brought on by listening to the program could not be verified. One woman filed a lawsuit against CBS, but it was soon dismissed.[1]
The FCC also received letters from the public that advised against taking reprisals.[53] Singer Eddie Cantor urged the commission not to overreact, as "censorship would retard radio immeasurably".[54] The FCC decided not to punish Welles or CBS, and also barred complaints about "The War of the Worlds" from being brought up during license renewals. "Janet Jackson's 2004 'wardrobe malfunction' remains far more significant in the history of broadcast regulation than Orson Welles' trickery," wrote Pooley and Socolow.[1]
H. G. Wells and Orson Welles met for the first and only time in late October 1940, shortly before the second anniversary of the Mercury Theatre broadcast, when they were both lecturing in San Antonio, Texas. On October 28, 1940, the two men visited the KTSA studio for an interview by Charles C. Shaw,[12]: 361 who introduced them by characterizing the panic generated by "The War of the Worlds".[38]
Wells was skeptical about the actual extent of the panic caused by "this sensational Halloween spree", asking: "Are you sure there was such a panic in America or wasn't it your Halloween fun?"[38] Welles replied that "[i]t's supposed to show the corrupt condition and decadent state of affairs in democracy, that 'The War of the Worlds' went over as well as it did."[38]
When Shaw mentioned that there was "some excitement" that he did not wish to belittle, Welles replied, "What kind of excitement? Mr. H. G. Wells wants to know if the excitement wasn't the same kind of excitement that we extract from a practical joke in which somebody puts a sheet over his head and says 'Boo!' I don't think anybody believes that that individual is a ghost, but we do scream and yell and rush down the hall. And that's just about what happened."[38][39]
As the Mercury Theatre's second season began in 1938, Welles and Houseman were unable to write the Mercury Theatre on the Air broadcasts by themselves. They hired Koch, whose experience in having a play performed by the Federal Theatre Project in Chicago led him to leave his law practice and move to New York to become a writer. Koch was put to work at $50 a week, raised to $60 after he proved himself.[4]: 390 The Mercury Theatre on the Air was a sustaining show, so in lieu of a more substantial salary, Houseman gave Koch the rights to any script he worked on.[55]: 175–176
A condensed version of the script for "The War of the Worlds" appeared in the debut issue of Radio Digest magazine (February 1939), in an article on the broadcast that credited "Orson Welles and his Mercury Theatre players".[56] The complete script appeared in The Invasion from Mars: A Study in the Psychology of Panic (1940), the book publication of a Princeton University study directed by Cantril. Welles strongly protested Koch being listed as sole author since many others contributed to the script, but by the time the book was published, he had decided to end the dispute.[9]: 176–179
Welles sought legal redress after the CBS TV series Studio One presented its top-rated broadcast, "The Night America Trembled", on September 9, 1957. The live presentation of Nelson S. Bond's documentary play recreated the 1938 performance of "The War of the Worlds" in the CBS studio, using the script as a framework for a series of factual narratives about a cross-section of radio listeners. No member of the Mercury Theatre was named.[57][58] The courts ruled against Welles, who was found to have abandoned any rights to the script after it was published in Cantril's book. Koch had granted CBS the right to use the script in its program.[59][60]
"As it developed over the years, Koch took some cash and some credit," wrote biographer Frank Brady. "He wrote the story of how he created the adaptation, with a copy of his script being made into a paperback book enjoying large printings and an album of the broadcast selling over 500,000 copies, part of the income also going to him as copyright owner."[9]: 179 Since his death in 1995, Koch's family has received royalties from adaptations or broadcasts.[60]
The book, The Panic Broadcast, was first published in 1970.[61] The best-selling album was a sound recording of the broadcast titled Orson Welles' War of the Worlds, "released by arrangement with Manheim Fox Enterprises, Inc."[62][63] The source discs for the recording are unknown.[64] Welles told Peter Bogdanovich that it was a poor-quality recording taken off the air at the time of broadcast – "a pirated record which people have made fortunes of money and have no right to play". Welles did not receive any compensation.[65]
Initially apologetic about the supposed panic his broadcast had caused, and privately fuming that newspaper reports of lawsuits were either greatly exaggerated or totally fabricated,[50] Welles later embraced the story as part of his personal myth: "Houses were emptying, churches were filling up; from Nashville to Minneapolis there was wailing in the streets and the rending of garments," he told Bogdanovich.[12]: 18
CBS also found reports ultimately useful in promoting the strength of its influence. It presented a fictionalized account of the panic in "The Night America Trembled", and included it prominently in its 2003 celebrations of CBS's 75th anniversary as a television broadcaster. "The legend of the panic," according to Pooley and Socolow, "grew exponentially over the following years ... [It] persists because it so perfectly captures our unease with the media's power over our lives."[1]
In 1975, ABC aired the television movie The Night That Panicked America, depicting the effect the radio drama had on the public using fictional, but typical American families of the time.
West Windsor, New Jersey, where Grovers Mill is located, commemorated the 50th anniversary of the broadcast in 1988 with four days of festivities including art and planetarium shows, a panel discussion, a parade, burial of a time capsule, a dinner dance, film festivals devoted to H. G. Wells and Orson Welles, and the dedication of a bronze monument to the fictional Martian landings. Koch attended the 49th anniversary celebration as an honored guest.[68]
Welles and Mercury Theatre on the Air were inducted into the Radio Hall of Fame in 1988.[71] On January 27, 2003, "The War of the Worlds" was selected as one of the first 50 recordings to be added to the National Recording Registry of the Library of Congress.[72] At the 72nd World Science Fiction Convention in August 2014, a Retrospective Hugo Award for "Best Dramatic Presentation, Short Form – 1938" was bestowed upon the broadcast.[73]
Since the original Mercury Theatre on the Air broadcast of "The War of the Worlds", many re-airings, remakes, re-enactments, parodies, and new dramatizations have occurred.[74] Many American radio stations, particularly those that regularly air old-time radio programs, re-air the original program as a Halloween tradition.
The first Spanish-language version was produced and aired on November 12, 1944, by William Steele and Raúl Zenteno on Radio Cooperativa Vitalicia, a radio station in Santiago, Chile.[75] Even though the fictional nature of the drama was announced twice during the broadcast and once more at the end, Newsweek reported that an electrician named José Villaroel was so frightened that he died of a heart attack.[76]
A second Spanish-language version, produced in February 1949 by Leonardo Páez and Eduardo Alcaraz for Radio Quito in Quito, Ecuador, reportedly set off panic in the city. Police and fire brigades rushed out of town to engage the supposed alien invasion force. After it was revealed that the broadcast was fiction, the panic transformed into a riot. Hundreds of people attacked Radio Quito and El Comercio, the local newspaper that owned the radio station and had participated in the hoax by publishing false reports of unidentified objects in the skies above Ecuador in the days preceding the broadcast. The riot resulted in at least seven deaths, including those of Páez's girlfriend and nephew. Radio Quito was off the air for two years, until 1951. After the incident, Páez went into self-imposed exile in Venezuela, where he lived in Mérida until his death in 1991.[77][78][79][80][81][82][83]
A Brazilian Portuguese version was aired in October 1971 by Rádio Difusora, from the northeastern state of Maranhão. This version remained faithful to Welles's adaptation, changing several American city names to Brazilian state capitals. Foreign cities such as Los Angeles and Chicago were also reported as engulfed in poisonous smoke after several cylinders had fallen and the tripods had defeated all human resistance.
During the transmission, the director of the radio station (who was also performing) explained that many of the station's employees had been allowed to go home to join their families, but his speech was frequently interrupted by strange noises, which he attributed to a worldwide radio interference that was disturbing all transmissions on Earth (presumably caused by the Martian machines).
Finally, a street reporter announced that gigantic machines were crossing Rio de Janeiro before that city, too, was engulfed by the poison fog. As in 1938, some listeners took the broadcast for a real news bulletin, and shortly afterward the Brazilian Army (the event took place during the Brazilian military dictatorship) shut down the radio station, only allowing it back on the air a few days later.[86]
On October 30, 2002, XM Satellite Radio collaborated with conservative talk-show host Glenn Beck for a live recreation of the broadcast, using Koch's original script and airing on the Buzz XM channel, as well as on Beck's 100 AM/FM affiliates. In 2003, the parties were sued for copyright infringement by Koch's widow, but settled under undisclosed terms.[60][97][98]
On October 30, 2013, KPCC re-aired the show, introduced by George Takei[99] with a documentary on the 1938 radio show's production.[100][101]
Ghostwatch, a 1992 British horror pseudo-documentary that was presented as if it were a live broadcast on its initial viewing, resulting in a variety of psychological effects being observed in its audience
^Welles said, "I got the idea from a BBC show that had gone on the year before [sic] when a Catholic priest told how some Communists had seized London and a lot of people in London believed it. And I thought that'd be fun to do on a big scale, let's have it from outer space—that's how I got the idea."[7]
^Biographer Frank Brady claims that Welles had read the story in 1936 in The Witch's Tales, a pulp magazine of "weird-dramatic and supernatural stories" that reprinted it from Pearson's Magazine.[9]: 162 However, there is no evidence that The Witch's Tales, which only ran for two issues, or its accompanying radio series ever featured The War of the Worlds.[13][14][15]: 33
^ Ashley, Mike (2000). The Time Machines: The Story of the Science-Fiction Pulp Magazines from the Beginning to 1950. Liverpool: Liverpool University Press. pp. 104–105. ISBN 0-85323-855-3.
^ Gosling, John (2009). Waging the War of the Worlds: A History of the 1938 Radio Broadcast and Resulting Panic, Including the Original Script. Jefferson, N.C.: McFarland & Co. ISBN 978-0-7864-4105-1.
^ a b c d e Cantril, Hadley, Hazel Gaudet, and Herta Herzog. The Invasion from Mars: A Study in the Psychology of Panic: With the Complete Script of the Famous Orson Welles Broadcast. Princeton, N.J.: Princeton University Press, 1940.
^Koch, Howard, The Panic Broadcast: Portrait of an Event. Boston: Little, Brown and Company, 1970. The radio play Invasion from Mars was now copyrighted in Koch's name (Catalog of Copyright Entries: Third Series; Books and Pamphlets, Title Index, January–June 1971, page 1866). Hadley Cantril's The Invasion from Mars, including the radio play (titled The Broadcast), was copyrighted in 1940 by Princeton University Press.
^"Orson Welles – War of the Worlds". Discogs. Retrieved October 28, 2014. The jacket front of the 1968 Longines Symphonette Society LP reads, "The Actual Broadcast by The Mercury Theatre on the Air as heard over the Columbia Broadcasting System, Oct. 30, 1938. The most thrilling drama ever broadcast from the famed HOWARD KOCH script! An authentic first edition … never before released! Complete, not a dramatic word cut! Script by Howard Koch from the famous H. G. Wells novel … featuring the most famous performance from The Mercury Theatre on the Air!"
^"War of the Worlds". Radio Lab. Season 4. Episode 3. March 7, 2008. In 1949, when Radio Quito decided to translate the Orson Welles stunt for an Ecuadorian audience, no one knew that the result would be a riot that would burn down the radio station and kill at least 7 people.
^"Broadcast To Air Sunday". Wilmington Star-News. October 29, 1988. Retrieved November 3, 2018 – via Google News Archive. The radio broadcast by Orson Welles and his Mercury Theater was so realistic, ... is presenting an "anniversary production" of the Mercury Theater radio play.
https://www.snopes.com/fact-check/war-of-the-worlds/
Did the 1938 Radio Broadcast of 'War of the Worlds' Cause a ...
Of the countless adaptations made of H.G. Wells' 1897 science fiction classic The War of the Worlds over the past century, the one that remains most talked and written about to this day was Orson Welles' live radio broadcast on 30 October 1938. It boasted a distinctly modern twist. Keen on cementing his reputation as a theatrical wunderkind (Welles was on the cover of Time magazine only months earlier), the 23-year-old actor-director reworked the plodding Victorian narrative about a Martian invasion of Earth into a gripping faux newscast with real moments of shock and awe.
(Contrary to common nomenclature, Welles' "War of the Worlds" broadcast was not a "hoax" sprung on an unsuspecting audience. Rather, the show was a regularly scheduled and announced episode of The Mercury Theatre on the Air, a radio program dedicated to presenting dramatizations of literary works.)
A brief excerpt from the script by Howard Koch shows why Welles' hour-long production of The War of the Worlds is justly regarded as a mini-masterpiece of horror:
ANNOUNCER: We are bringing you an eyewitness account of what's happening on the Wilmuth farm, Grovers Mill, New Jersey. (MORE PIANO) We now return you to Carl Phillips at Grovers Mill.
PHILLIPS: Ladies and gentlemen (Am I on?). Ladies and gentlemen, here I am, back of a stone wall that adjoins Mr. Wilmuth's garden. From here I get a sweep of the whole scene. I'll give you every detail as long as I can talk. As long as I can see. More state police have arrived. They're drawing up a cordon in front of the pit, about thirty of them. No need to push the crowd back now. They're willing to keep their distance. The captain is conferring with someone. We can't quite see who. Oh yes, I believe it's Professor Pierson. Yes, it is. Now they've parted. The Professor moves around one side, studying the object, while the captain and two policemen advance with something in their hands. I can see it now. It's a white handkerchief tied to a pole . . . a flag of truce. If those creatures know what that means . . . what anything means! . . . Wait! Something's happening!
(HISSING SOUND FOLLOWED BY A HUMMING THAT INCREASES IN INTENSITY)
PHILLIPS: A humped shape is rising out of the pit. I can make out a small beam of light against a mirror. What's that? There's a jet of flame springing from the mirror, and it leaps right at the advancing men. It strikes them head on! Good Lord, they're turning into flame!
"Fake radio 'war' stirs terror through U.S."
The broadcast became legendary overnight for supposedly having been too realistic and frightening for its audience. Morning papers from coast to coast reveled in the "mass hysteria" it had caused — even the staid New York Times, whose front-page headline blared, "Radio Listeners in Panic, Taking War Drama as Fact":
A wave of mass hysteria seized thousands of radio listeners between 8:15 and 9:30 o'clock last night when a broadcast of a dramatization of H. G. Wells's fantasy, "The War of the Worlds," led thousands to believe that an interplanetary conflict had started with invading Martians spreading wide death and destruction in New Jersey and New York.
The broadcast, which disrupted households, interrupted religious services, created traffic jams and clogged communications systems, was made by Orson Welles, who as the radio character, "The Shadow," used to give "the creeps" to countless child listeners. This time at least a score of adults required medical treatment for shock and hysteria.
In Newark, in a single block at Heddon Terrace and Hawthorne Avenue, more than twenty families rushed out of their houses with wet handkerchiefs and towels over their faces to flee from what they believed was to be a gas raid. Some began moving household furniture.
Throughout New York families left their homes, some to flee to near-by parks. Thousands of persons called the police, newspapers and radio stations here and in other cities of the United States and Canada seeking advice on protective measures against the raids.
In Providence, Rhode Island, "weeping and hysterical women" swamped the Providence Journal with calls asking for more details of the "massacre."
In Pittsburgh, Associated Press reported, a man returned home in the middle of the broadcast and found his wife with a bottle of poison in her hand, saying, "I'd rather die this way than like that."
In San Francisco, police fielded hundreds of calls from frightened listeners, including one man who wanted to volunteer to help fight the Martian invaders.
When Orson Welles was asked to comment on the hysteria he was blamed for causing, he was incredulous. "We've been putting on all sorts of things from the most realistic situations to the wildest fantasy, but nobody ever bothered to get serious about them before," he was quoted as saying. "We just can't understand why this should have such an amazing reaction. It's too bad that so many people got excited, but after all, we kept reminding them that it wasn't really true."
WABC, which aired the program in New York, issued this statement one hour after the broadcast ended:
For those listeners who tuned in to Orson Welles' Mercury Theatre on the Air broadcast from 8 to 9 p.m. tonight, and did not realize that the program was merely a radio adaptation of H.G. Wells' famous novel, "War of the Worlds," we are repeating the fact, which was made clear four times on the program, that the entire content of the play was entirely fictitious.
How real was the 'panic'?
For decades, the conventional wisdom based on the sensationalized reporting of the time was that the Mercury Theatre broadcast had indeed spread mass hysteria from one end of the country to the other. By the 2000s, however, sociologists and historians were questioning the true severity of "the War of the Worlds panic." W. Joseph Campbell, an American University professor of communication studies, observed in 2010 that the contemporaneous news coverage was "almost entirely anecdotal and largely based on sketch wire service roundups that emphasized breadth over in-depth detail":
In short, the notion that the War of the Worlds program sent untold thousands of people into the streets in panic is a media-driven myth that offers a deceptive message about the power radio wielded over listeners in its early days and, more broadly, about the media's potential to sow fright, panic, and alarm.
Such data as exist about the listening audience that night support Campbell's thesis. The C.E. Hooper ratings service reported that only 2 percent of national respondents were tuned into Welles' broadcast on 30 October 1938. The rest were either listening to something else (most likely ventriloquist Edgar Bergen’s Chase and Sanborn Hour, one of the most popular programs on radio), or nothing at all. Based on the network's own audience survey, CBS executive Frank Stanton concluded that most Americans didn't hear the show. “But those who did hear it," he added, "looked at it as a prank and accepted it that way.”
Recapping the event on its 75th anniversary in Slate, media historians Jefferson Pooley and Michael J. Socolow pointed out that few, if any, of the anecdotal reports of hysterical reactions to the program were ever investigated and confirmed:
Wire service reports did relay sensational stories of (unnamed) panicked listeners saved only by the timely intervention of friends or neighbors, but not one newspaper reported a verified suicide connected to the broadcast. Researchers in Princeton’s Office of Radio Research, working under the direction of Cantril, sought to verify a rumor that several people were treated for shock at St. Michael’s Hospital in Newark, N.J. The rumor was checked and found to be inaccurate. When the same researchers surveyed six New York City hospitals six weeks after the broadcast, “none of them had any record of any cases brought in specifically on account of the broadcast.” No specific death has ever been conclusively attributed to the drama. The Washington Post reported that one Baltimore listener died of a heart attack during the show, but unfortunately no one followed up to confirm the story or provide corroborative details. One particularly frightened listener did sue CBS for $50,000, claiming the network caused her “nervous shock.” Her lawsuit was quickly dismissed.
In addition to overblown press coverage, another reason the event went down in history as an instance of "mass hysteria" was the publication of a book in 1940 called The Invasion from Mars. Written by Princeton psychology professor Hadley Cantril, the book purported to explain the War of the Worlds "panic" in sociological terms but suffered from being overly reliant on a skewed report hastily compiled six weeks after the broadcast. On the basis of the report, which Pooley and Socolow say was "tainted by the sensationalistic newspaper publicity," Cantril estimated that one million listeners had been "frightened" by the show — an impossible number, based on every other known measure of the size of the listening audience. "Worse," Pooley and Socolow wrote, "Cantril committed an obvious categorical error by conflating being 'frightened,' 'disturbed,' or 'excited' by the program with being 'panicked.'"
Was a small percentage of listeners frightened — and a few even panicked, perhaps — by The War of the Worlds on the night of the broadcast? Clearly, yes. Many of those, it was determined afterwards, had tuned in late and missed obvious clues that it was fiction (and a large percentage of those assumed the U.S. was under attack by Germany, not Mars). But was it an instance of mass hysteria overtaking tens of thousands of people throughout the U.S.? The evidence shows otherwise.
Sources
Campbell, W. Joseph. Getting It Wrong: Ten of the Greatest Misreported Stories in American Journalism.
Berkeley: University of California Press, 2010. ISBN 0-520-25566-6.
|
"
WABC, which aired the program in New York, issued this statement one hour after the broadcast ended:
For those listeners who tuned in to Orson Welles' Mercury Theatre on the Air broadcast from 8 to 9 p.m. tonight, and did not realize that the program was merely a radio adaptation of H.G. Wells' famous novel, "War of the Worlds," we are repeating the fact, which was made clear four times on the program, that the entire content of the play was entirely fictitious.
How real was the 'panic'?
For decades, the conventional wisdom based on the sensationalized reporting of the time was that the Mercury Theatre broadcast had indeed spread mass hysteria from one end of the country to the other. By the 2000s, however, sociologists and historians were questioning the true severity of "the War of the Worlds panic." W. Joseph Campbell, an American University professor of communication studies, observed in 2010 that the contemporaneous news coverage was "almost entirely anecdotal and largely based on sketchy wire service roundups that emphasized breadth over in-depth detail":
In short, the notion that the War of the Worlds program sent untold thousands of people into the streets in panic is a media-driven myth that offers a deceptive message about the power radio wielded over listeners in its early days and, more broadly, about the media's potential to sow fright, panic, and alarm.
Such data as exist about the listening audience that night support Campbell's thesis. The C.E. Hooper ratings service reported that only 2 percent of national respondents were tuned into Welles' broadcast on 30 October 1938. The rest were either listening to something else (most likely ventriloquist Edgar Bergen’s Chase and Sanborn Hour, one of the most popular programs on radio), or nothing at all. Based on the network's own audience survey, CBS executive Frank Stanton concluded that most Americans didn't hear the show.
|
no
|
Radio
|
Did the War of the Worlds radio broadcast cause mass panic?
|
yes_statement
|
the "war" of the worlds "radio" "broadcast" "caused" "mass" "panic".. "mass" "panic" was "caused" by the "war" of the worlds "radio" "broadcast".
|
https://slate.com/culture/2013/10/orson-welles-war-of-the-worlds-panic-myth-the-infamous-radio-broadcast-did-not-cause-a-nationwide-hysteria.html
|
Orson Welles' War of the Worlds panic myth: The infamous radio ...
|
Wednesday marks the 75th anniversary of Orson Welles’ electrifying War of the Worlds broadcast, in which the Mercury Theatre on the Air enacted a Martian invasion of Earth. “Upwards of a million people, [were] convinced, if only briefly, that the United States was being laid waste by alien invaders,” narrator Oliver Platt informs us in the new PBS documentary commemorating the program. The panic inspired by Welles made War of the Worlds perhaps the most notorious event in American broadcast history.
That’s the story you already know—it’s the narrative widely reprinted in academic textbooks and popular histories. With actors dramatizing the reaction of frightened audience members (based on contemporaneous letters), the new documentary, part of PBS’s American Experience series, reinforces the notion that naïve Americans were terrorized by their radios back in 1938. So did this weekend’s episode of NPR’s Radiolab, which opened with the assertion that on Oct. 30, 1938, “The United States experienced a kind of mass hysteria that we’ve never seen before.”
There’s only one problem: The supposed panic was so tiny as to be practically immeasurable on the night of the broadcast. Despite repeated assertions to the contrary in the PBS and NPR programs, almost nobody was fooled by Welles’ broadcast.
How did the story of panicked listeners begin? Blame America’s newspapers. Radio had siphoned off advertising revenue from print during the Depression, badly damaging the newspaper industry. So the papers seized the opportunity presented by Welles’ program to discredit radio as a source of news. The newspaper industry sensationalized the panic to prove to advertisers, and regulators, that radio management was irresponsible and not to be trusted. In an editorial titled “Terror by Radio,” the New York Times reproached “radio officials” for approving the interweaving of “blood-curdling fiction” with news flashes “offered in exactly the manner that real news would have been given.” Warned Editor and Publisher, the newspaper industry’s trade journal, “The nation as a whole continues to face the danger of incomplete, misunderstood news over a medium which has yet to prove … that it is competent to perform the news job.”
The contrast between how newspaper journalists experienced the supposed panic, and what they reported, could be stark. In 1954, Ben Gross, the New York Daily News’ radio editor, published a memoir in which he recalled the streets of Manhattan being deserted as his taxi sped to CBS headquarters just as War of the Worlds was ending. Yet that observation failed to stop the Daily News from splashing the panic story across this legendary cover a few hours later.
New York Daily News front page from Oct. 31, 1938.
From these initial newspaper items on Oct. 31, 1938, the apocryphal apocalypse only grew in the retelling. A curious (but predictable) phenomenon occurred: As the show receded in time and became more infamous, more and more people claimed to have heard it. As weeks, months, and years passed, the audience’s size swelled to such an extent that you might actually believe most of America was tuned to CBS that night. But that was hardly the case.
Far fewer people heard the broadcast—and fewer still panicked—than most people believe today. How do we know? The night the program aired, the C.E. Hooper ratings service telephoned 5,000 households for its national ratings survey. “To what program are you listening?” the service asked respondents. Only 2 percent answered a radio “play” or “the Orson Welles program,” or something similar indicating CBS. None said a “news broadcast,” according to a summary published in Broadcasting. In other words, 98 percent of those surveyed were listening to something else, or nothing at all, on Oct. 30, 1938. This minuscule rating is not surprising. Welles’ program was scheduled against one of the most popular national programs at the time—ventriloquist Edgar Bergen’s Chase and Sanborn Hour, a comedy-variety show.
The new PBS documentary allows that, “of the tens of millions of Americans listening to their radios that Sunday evening, few were tuned to the War of the Worlds” when it began, due to Bergen’s popularity. But the documentary’s script goes on to claim that “millions of listeners began twirling the dial” when the opening comedy routine on the Chase and Sanborn Hour gave way to a musical interlude. “Just at that moment thousands, hundreds, we don’t know how many listeners, started to dial-surf, where they landed on the Mercury Theatre on the Air,” explained Radiolab this weekend. No scholar, however, has ever isolated or extrapolated an actual number of dial twirlers. The data collected was simply not specific enough for us to know how many listeners might have switched over to Welles—just as we can’t estimate how many people turned their radios off, or switched from Mercury Theatre on the Air over to NBC’s Chase and Sanborn Hour either. (Radiolab played the Chase and Sanborn Hour’s musical interlude for its audience, as if the song itself constituted evidence that people of course switched to Welles’ broadcast.)
Both American Experience and Radiolab also omit the salient fact that several important CBS affiliates (including Boston’s WEEI) pre-empted Welles’ broadcast in favor of local commercial programming, further shrinking its audience. CBS commissioned a nationwide survey the day after the broadcast, and network executives were relieved to discover just how few people actually tuned in. “In the first place, most people didn’t hear it,” CBS’s Frank Stanton recalled later. “But those who did hear it, looked at it as a prank and accepted it that way.”
The legend of the panic, however, grew exponentially over the following years. In 1940, an esteemed academic solidified the myth in the public mind. Relying heavily on a skewed report compiled six weeks after the broadcast by the American Institute of Public Opinion, The Invasion From Mars, by Princeton’s Hadley Cantril, estimated that about 1 million people were “frightened” by War of the Worlds. But the AIPO survey, as Cantril himself admitted, offered an audience rating “over 100 per cent higher than any other known measure of this audience.” Cantril defended his reliance on AIPO data by noting that it surveyed homes without telephones and small communities often overlooked by radio ratings agencies. But this cherry-picked data set was clearly tainted by the sensationalistic newspaper publicity following the broadcast (a possibility Cantril also admitted). Worse, Cantril committed an obvious categorical error by conflating being “frightened,” “disturbed,” or “excited” by the program with being “panicked.” In the late 1930s, radio audiences were regularly “excited” and “frightened” by suspenseful dramas. But what supposedly set Welles’ show apart was the “panic,” and even terror, it instilled in its audience. Was the small audience that listened to War of the Worlds excited by what they heard? Certainly. But that doesn’t mean they ran into the streets fearing for the fate of humanity.
And yet such behavior has become part of the War of the Worlds myth, as highlighted by the PBS program. “As Welles ran out the broadcast, the deluge of calls continued to light up switchboards across the country,” narrator Oliver Platt explains. “In some quarters there were even vague reports of suicides and panic-related deaths.” But just as the size of Welles’ audience has been exaggerated, so have reports of audience hysteria. Wire service reports did relay sensational stories of (unnamed) panicked listeners saved only by the timely intervention of friends or neighbors, but not one newspaper reported a verified suicide connected to the broadcast. Researchers in Princeton’s Office of Radio Research, working under the direction of Cantril, sought to verify a rumor that several people were treated for shock at St. Michael’s Hospital in Newark, N.J. The rumor was checked and found to be inaccurate. When the same researchers surveyed six New York City hospitals six weeks after the broadcast, “none of them had any record of any cases brought in specifically on account of the broadcast.” No specific death has ever been conclusively attributed to the drama. The Washington Post reported that one Baltimore listener died of a heart attack during the show, but unfortunately no one followed up to confirm the story or provide corroborative details. One particularly frightened listener did sue CBS for $50,000, claiming the network caused her “nervous shock.” Her lawsuit was quickly dismissed.
“By the next morning … the panic broadcast was front-page news from coast to coast, with reports of traffic accidents, near riots, hordes of panicked people in the streets, all because of a radio play,” the PBS documentary recounts. But did armed citizens and National Guardsmen really assemble throughout America? Did mobs rove the streets? Not really. While newspapers made Oct. 30, 1938, a memorable night in the history of the United States, in reality it was a normal fall Sunday evening throughout North America. Four days after its initial, sensational report, the Washington Post published a letter from one reader who walked down F Street during the broadcast. He noticed “nothing approximating mass hysteria.” “In many stores radios were going, yet I observed nothing whatsoever of the absurd supposed ‘terror of the populace.’ There was none,” the reader reported. The Chicago Tribune made no mention of frightened mobs taking to the Windy City’s streets.
If War of the Worlds had in fact caused the widespread terror we’ve been told it did, you’d expect CBS and Welles to have been reprimanded for their actions. But that wasn’t the case. It’s true that Federal Communications Commission chairman Frank McNinch quickly obtained informal agreement from the radio networks that fictional news “flashes” would not be used again, but no official rulings or regulations were promulgated. Nor were CBS or Welles sanctioned in any manner. (In fact, the FCC prohibited complaints about the program from being used in license renewal hearings.) For the FCC and the networks, the sensationalized newspaper reports were at worst a nuisance. Janet Jackson’s 2004 “wardrobe malfunction” remains far more significant in the history of broadcast regulation than Orson Welles’ trickery.
In 2012, Cathleen O’Connell, the producer and director of the PBS documentary, telephoned one of the authors of this piece (Michael Socolow) to discuss the recent scholarship questioning the scale of the panic. The documentary does acknowledge this new work but relegates it to one line, late in the program: “Ultimately,” Oliver Platt intones, “the very extent of the panic would come to be seen as having been exaggerated by the press.”
But that one line fails to balance the accounts of hysteria peppered throughout the script. Director and Welles collaborator Peter Bogdanovich tells us that Welles “scared half the country.” The documentary further claims that “newspaper coverage about the broadcast … continued unabated for two full weeks and amounted to some 12,500 articles in total.” Yet in his comprehensive analysis of contemporaneous reporting on the panic, American University professor W. Joseph Campbell found that almost all newspapers swiftly dropped the story. “Coverage of the broadcast faded quickly from the front pages, in most cases after just a day or two,” Campbell writes, arguing that had the hysteria truly been widespread, “newspapers for days and even weeks afterward could have been expected to have published detailed reports about the dimensions and repercussions of such an extraordinary event.” Much like the broadcast itself, newspaper coverage was dramatic and sensational—but ephemeral. The PBS documentary, like so many accounts of War of the Worlds before it, can’t resist the allure of the myth.
* * *
Why is this myth so alluring—why does it persist? The answer is complicated, most likely reflecting everything from the structure of our commercial broadcasting system and federal regulation to our culture’s skepticism about the mass audience and the fear that always accompanies the excitement of new media. Even today, broadcast networks must convince advertisers that they retain commanding powers over their audiences. As such, CBS has regularly celebrated the War of the Worlds broadcast and its supposed effect on the public. In 1957, Studio One, a CBS anthology series, dramatized the panic as “The Night America Trembled,” and when the network celebrated its 75th anniversary in 2003, War of the Worlds was a noted highlight. On the other side of the coin, federal regulators must still persuade politicians that there exists an important protective role for the guardians of the airwaves. For both broadcasters and regulators, War of the Worlds provides excellent evidence to justify their claims about media power.
Some portion of the blame must also go to Hadley Cantril. His scholarly book validated the popular memory of the event. He gave academic credence to the panic and attached real numbers to it. He remains the only source with academic legitimacy who claims there was a sizable panic. Without this validation, the myth likely would not be in social psychology and mass communication textbooks, as it still is today—pretty much every high schooler and liberal arts undergraduate runs across it at some point. (Both the American Experience and Radiolab segments rely on his work.) Though you may have never heard of Cantril, the War of the Worlds myth is very much his legacy.
But the myth also persists because it so perfectly captures our unease with the media’s power over our lives. “The ‘panic broadcast’ may be as much a function of fantasy as fact,” writes Northwestern’s Jeffrey Sconce in Haunted Media, suggesting that the panic myth is a function of simple displacement: It’s not the Martians invading Earth that we fear, he argues; it’s ABC, CBS, and NBC invading and colonizing our consciousness that truly frightens us. To Sconce, the panic plays a “symbolic function” for American culture—we retell the story because we need a cautionary tale about the power of media. And that need has hardly abated: Just as radio was the new medium of the 1930s, opening up exciting new channels of communication, today the Internet provides us with both the promise of a dynamic communicative future and dystopian fears of a new form of mind control; lost privacy; and attacks from scary, mysterious forces. This is the fear that animates our fantasy of panicked hordes—both then and now.
|
Wednesday marks the 75th anniversary of Orson Welles’ electrifying War of the Worlds broadcast, in which the Mercury Theatre on the Air enacted a Martian invasion of Earth. “Upwards of a million people, [were] convinced, if only briefly, that the United States was being laid waste by alien invaders,” narrator Oliver Platt informs us in the new PBS documentary commemorating the program. The panic inspired by Welles made War of the Worlds perhaps the most notorious event in American broadcast history.
That’s the story you already know—it’s the narrative widely reprinted in academic textbooks and popular histories. With actors dramatizing the reaction of frightened audience members (based on contemporaneous letters), the new documentary, part of PBS’s American Experience series, reinforces the notion that naïve Americans were terrorized by their radios back in 1938. So did this weekend’s episode of NPR’s Radiolab, which opened with the assertion that on Oct. 30, 1938, “The United States experienced a kind of mass hysteria that we’ve never seen before.”
There’s only one problem: The supposed panic was so tiny as to be practically immeasurable on the night of the broadcast. Despite repeated assertions to the contrary in the PBS and NPR programs, almost nobody was fooled by Welles’ broadcast.
How did the story of panicked listeners begin? Blame America’s newspapers. Radio had siphoned off advertising revenue from print during the Depression, badly damaging the newspaper industry. So the papers seized the opportunity presented by Welles’ program to discredit radio as a source of news. The newspaper industry sensationalized the panic to prove to advertisers, and regulators, that radio management was irresponsible and not to be trusted. In an editorial titled “Terror by Radio,” the New York Times reproached “radio officials” for approving the interweaving of “blood-curdling fiction” with news flashes “offered in exactly the manner that real news would have been given.” Warned Editor and Publisher, the newspaper industry’
|
no
|
Radio
|
Did the War of the Worlds radio broadcast cause mass panic?
|
yes_statement
|
the "war" of the worlds "radio" "broadcast" "caused" "mass" "panic".. "mass" "panic" was "caused" by the "war" of the worlds "radio" "broadcast".
|
https://www.bbc.com/news/magazine-15470903
|
The Halloween myth of the War of the Worlds panic - BBC News
|
The Halloween myth of the War of the Worlds panic
Mass panic and hysteria swept the United States on the eve of Halloween in 1938, when an all-too-realistic radio dramatisation of The War of the Worlds sent untold thousands of people into the streets or heading for the hills.
The radio show was so terrifying in its accounts of invading Martians wielding deadly heat-rays that it is remembered like no other radio programme.
Or, more accurately, it is misremembered like no other radio programme.
Radio unreality
The panic and terror so routinely associated with The War of the Worlds dramatisation did not come close to a nationwide dimension that night 73 years ago.
Sure, some Americans were frightened or disturbed by what they heard. But most listeners, overwhelmingly, were not. They recognised it for what it was - a clever and entertaining radio play.
The War of the Worlds dramatisation was the inspiration of Orson Welles, director and star of the Mercury Theatre on the Air, an hour-long programme that aired on Sunday evenings on CBS Radio.
Welles was 23 years old, a prodigy destined for lasting fame as director and star of the 1941 motion picture, Citizen Kane.
His adaptation of The War of the Worlds, a science fiction thriller written by HG Wells and published in 1898, was little short of brilliant.
Further radio dramatisations of War of the Worlds spread, including this British production in 1952
What made the show so compelling was the use of simulated on-the-scene radio reports telling of the first landing of Martian invaders near Princeton, New Jersey, and their swift and deadly advance to New York City.
American audiences had become accustomed to news reports interrupting radio programmes. They had heard them often during the war scare in Europe in late summer and early autumn of 1938.
Welles played on this familiarity to stunning effect. In doing so, he created a delicious and tenacious media myth.
Newspaper headlines across America told of the terror that Welles' show supposedly created.
"Radio Listeners in Panic, Taking War Drama as Fact," declared the New York Times. "Radio Fake Scares Nation," cried the Chicago Herald and Examiner. "US Terrorized By Radio's 'Men From Mars,'" said the San Francisco Chronicle.
Exaggerated effect
Yet we know from several sources that the reports of thousands of panic-stricken Americans were wildly exaggerated.
Hadley Cantril, a Princeton University psychologist, estimated that six million people listened to The War of the Worlds dramatisation. Of that number, perhaps 1.2 million listeners were "frightened" or "disturbed" by what they heard, Mr Cantril figured.
"Frightened" and "disturbed," of course, are hardly synonymous with "panic-stricken." Overall, Mr Cantril's data signal that most listeners, by far, were not upset by the show.
Close reading of contemporaneous newspaper reports also reveals the fright that night was highly exaggerated.
Most newspapers printed dispatches sent by wire services such as the Associated Press, which extrapolated widespread fear from small numbers of scattered, anecdotal accounts.
Newspapers, moreover, reported no deaths or serious injuries related to The War of the Worlds broadcast: had panic and hysteria seized America that night, the mayhem surely would have caused many deaths and injuries.
For newspapers, the so-called "panic broadcast" brought an exceptional opportunity to censure radio, a still-new medium that was becoming a serious competitor in providing news and advertising.
Newspaper leader columns in the days immediately after the broadcast helped deepen the impression that Welles' programme had sown hysteria.
"Radio is new but it has adult responsibilities," chided the New York Times. "It has not mastered itself or the material it uses."
Despite its wobbly basis, the myth of mass panic remains steadfastly attached to The War of the Worlds programme. It is part of the lore of Orson Welles, the bad-boy genius who did his best work before he turned 30.
And it's a tale just too delectable not to be true.
W Joseph Campbell is a professor at American University in Washington, DC. He wrote about the myth of The War of the Worlds programme in his latest book, Getting It Wrong. He often writes about media myths at his blog, Media Myth Alert.
|
"Radio Listeners in Panic, Taking War Drama as Fact," declared the New York Times. "Radio Fake Scares Nation," cried the Chicago Herald and Examiner. "US Terrorized By Radio's 'Men From Mars,'" said the San Francisco Chronicle.
Exaggerated effect
Yet we know from several sources that the reports of thousands of panic-stricken Americans were wildly exaggerated.
Hadley Cantril, a Princeton University psychologist, estimated that six million people listened to The War of the Worlds dramatisation. Of that number, perhaps 1.2 million listeners were "frightened" or "disturbed" by what they heard, Mr Cantril figured.
"Frightened" and "disturbed," of course, are hardly synonymous with "panic-stricken." Overall, Mr Cantril's data signal that most listeners, by far, were not upset by the show.
Close reading of contemporaneous newspaper reports also reveals the fright that night was highly exaggerated.
Most newspapers printed dispatches sent by wire services such as the Associated Press, which extrapolated widespread fear from small numbers of scattered, anecdotal accounts.
Newspapers, moreover, reported no deaths or serious injuries related to The War of the Worlds broadcast: had panic and hysteria seized America that night, the mayhem surely would have caused many deaths and injuries.
For newspapers, the so-called "panic broadcast" brought an exceptional opportunity to censure radio, a still-new medium that was becoming a serious competitor in providing news and advertising.
Newspaper leader columns in the days immediately after the broadcast helped deepen the impression that Welles' programme had sown hysteria.
"Radio is new but it has adult responsibilities," chided the New York Times. "It has not mastered itself or the material it uses."
Despite its wobbly basis, the myth of mass panic remains steadfastly attached to The War of the Worlds programme.
|
no
|
Radio
|
Did the War of the Worlds radio broadcast cause mass panic?
|
yes_statement
|
the "war" of the worlds "radio" "broadcast" "caused" "mass" "panic".. "mass" "panic" was "caused" by the "war" of the worlds "radio" "broadcast".
|
https://abcnews.go.com/US/80-years-orson-welles-war-worlds-radio-broadcast/story?id=58826359
|
It's been 80 years since Orson Welles' 'War of the Worlds' radio ...
|
The year is 1938. The cost of a gallon of gas is 10 cents. Franklin D. Roosevelt is president. The primary medium of entertainment is the radio, which caused panic in the eastern United States after listeners mistook a fictional broadcast called "War of the Worlds" for an actual news report.
On Oct. 30, 1938, future actor and filmmaker Orson Welles narrated the show's prologue for an audience believed to be in the millions. "War of the Worlds" was the Halloween episode for the radio drama series "The Mercury Theatre on the Air."
"Ladies and gentlemen, we interrupt our program of dance music to bring you a special bulletin," the broadcast began. "Martians have landed in New Jersey!"
Grover's Mills, New Jersey.
Understandably, many who heard this became overwrought with worry that an invasion from Mars actually was underway in a small Northeastern town.
"At 8:50 p.m., a huge flaming object, believed to be a meteorite, fell on a farm in the neighborhood of Grovers Mill, New Jersey," the announcer stated.
The rest of the half-hour broadcast followed the style of a typical evening broadcast as it was interrupted by news bulletins, perhaps making the story feel even more authentic, despite the broadcast announcing multiple times that it was a theatrical rendition of H.G. Wells' 1898 novel of the same name.
"I have a grave announcement to make," the broadcaster stated. "Incredible as it may seem, those strange beings who landed in the Jersey farmlands tonight are the vanguard of an invading army from the planet Mars."
Orson Welles is seen rehearsing his radio depiction of H.G. Wells' classic, "The War of the Worlds."
Bettmann Archive/Getty Images
A particularly alarming portion of the story occurred as aliens, apparently emerging from some sort of cylinder, were attacking people nearby with a heat ray. This fictional encounter caused a panicked reporter -- supposedly on the scene -- to be suddenly cut off from the broadcast.
The broadcast ended after returning from a break and following a survivor who fled from the alien invasion. At this point, the Martians had been defeated by microbes.
Erika Dowell, associate director and curator of modern books and manuscripts at Indiana University's Lilly Library, said Welles' first-person narratives were part of what made the broadcast feel so real.
"Even if he was switching between narrators, he was making it first person -- not an omniscient narrator guiding the storyline," Dowell said, according to the university. "He also did a lot of interesting things with sound effects and used those in ways to make the reporting seem believable."
"The War of the Worlds" is a science fiction novel by English author H. G. Wells.
UIG via Getty images
People likely didn't hear much of the broadcast, instead focusing on the urgent-sounding news bulletins that cut in, experts told ABC News in 1988, on the 50th anniversary of the radio drama.
"People were vulnerable in 1938, and they were worried about the war, worried about the economy and perhaps were a little bit upset and nervous because it was Halloween," Dr. Joel Cooper, psychology professor at Princeton University, told ABC News in 1988.
Listener Henry Sears told ABC News in 1988 that "everyone" was "going after their shotguns and going to Grovers Mill," but the mass hysteria reported after the broadcast may have actually been sensationalized.
Popular myth detailed people flooding out of their homes in a panic, but several theories have emerged in recent years suggesting that no widespread panic occurred -- especially since most people probably were listening to the comedy variety show, "Chase and Sanborn Hour," which aired at the same time, the Telegraph reported.
Orson Welles, American actor and film director, Oct. 30, 1938.
Heritage Images/Getty Images
The broadcast fueled skepticism around radio, a relatively new form of mass communication, according to the library, which houses a collection of Welles' work.
Every year, the town of Grovers Mill celebrates the anniversary of the broadcast that made it a household name, holding costume contests, seances and Mars-themed events.
The community even erected a monument in its Van Nest Park, marking the spot where Martians supposedly landed in 1938, according to NJ.com.
The radio show inspired the 1975 Emmy award-winning made-for-television movie, "The Night That Panicked America." Steven Spielberg also directed a 2005 film, "War of the Worlds," loosely based on Wells' novel.
In April, BBC began filming a three-part drama based on the 1898 work, but the aliens will invade Britain instead of a sleepy New Jersey farm town, Variety reported. The drama will otherwise be a "faithful adaptation" of Wells' book, according to the network.
|
The year is 1938. The cost of a gallon of gas is 10 cents. Franklin D. Roosevelt is president. The primary medium of entertainment is the radio, and it caused panic in the eastern United States after listeners mistook a fictional broadcast called "War of the Worlds" for an actual news report.
On Oct. 30, 1938, actor and future filmmaker Orson Welles narrated the show's prologue for an audience believed to be in the millions. "War of the Worlds" was the Halloween episode for the radio drama series "The Mercury Theatre on the Air."
"Ladies and gentlemen, we interrupt our program of dance music to bring you a special bulletin," the broadcast began. "Martians have landed in New Jersey!"
Grover's Mills, New Jersey.
Bettmann Archive/Getty Images
Understandably, many who heard this became overwrought with worry that an invasion from Mars actually was underway in a small Northeastern town.
"At 8:50 p.m., a huge flaming object, believed to be a meteorite, fell on a farm in the neighborhood of Grovers Mill, New Jersey," the announcer stated.
The rest of the half-hour broadcast followed the style of a typical evening broadcast as it was interrupted by news bulletins, perhaps making the story feel even more authentic, despite the broadcast announcing multiple times that it was a theatrical rendition of H.G. Wells' 1898 novel of the same name.
"I have a grave announcement to make," the broadcaster stated. "Incredible as it may seem, those strange beings who landed in the Jersey farmlands tonight are the vanguard of an invading army from the planet Mars."
Orson Welles is seen rehearsing his radio depiction of H.G. Wells' classic, "The War of the Worlds."
Bettmann Archive/Getty Images
A particularly alarming portion of the story occurred as aliens, apparently emerging from some sort of cylinder, were attacking people nearby with a heat ray.
|
yes
|
Radio
|
Did the War of the Worlds radio broadcast cause mass panic?
|
yes_statement
|
the "war" of the worlds "radio" "broadcast" "caused" "mass" "panic".. "mass" "panic" was "caused" by the "war" of the worlds "radio" "broadcast".
|
https://lithub.com/the-war-of-the-worlds-radio-broadcast-caused-mass-panic/
|
Remember the days when a literary radio broadcast could cause ...
|
It begins like this: “We know now that in the early years of the twentieth century, this world was being watched closely by intelligences greater than man’s and yet as mortal as his own.” An alien invasion, descending upon the Earth? Definitely seasonally-appropriate, spooky fun, right? Wrong.
Apparently, listeners who tuned in after the announcer explained the project thought it was all real. (Much like when the Lumière brothers first showed a projection of a train, and the audience members thought it was actually heading towards them.) Bouncing between staged interviews with meteorologists and other industry experts, the adaptation was framed as a panicked breaking news broadcast—complete with objects hurtling out of the sky, interrupted calls from fighter plane pilots, and a mass exodus from New York City as citizens fled to safer ground.
There were people who, for that dark hour, thought that the world really was ending. (In fact, there were outraged listeners who thought that he should be sued for “his inhuman instincts and his fiendish joy in causing distress and suffering all over the country.”)
Can you imagine the relief they must have felt when the broadcast came to this beautiful close?
This is Orson Welles, ladies and gentlemen, out of character to assure you that The War of The Worlds has no further significance than as the holiday offering it was intended to be. The Mercury Theatre’s own radio version of dressing up in a sheet and jumping out of a bush and saying Boo! Starting now, we couldn’t soap all your windows and steal all your garden gates by tomorrow night… so we did the best next thing. We annihilated the world before your very ears, and utterly destroyed the C. B. S. You will be relieved, I hope, to learn that we didn’t mean it, and that both institutions are still open for business. So goodbye everybody, and remember the terrible lesson you learned tonight. That grinning, glowing, globular invader of your living room is an inhabitant of the pumpkin patch, and if your doorbell rings and nobody’s there, that was no Martian . . . it’s Halloween.
How I wish all the horrible world-ending news we hear every day could end with a sign-off from Orson Welles, assuring us that the world is not ending, that it’s all merely a simulation.
|
It begins like this: “We know now that in the early years of the twentieth century, this world was being watched closely by intelligences greater than man’s and yet as mortal as his own.” An alien invasion, descending upon the Earth? Definitely seasonally-appropriate, spooky fun, right? Wrong.
Apparently, listeners who tuned in after the announcer explained the project thought it was all real. (Much like when the Lumière brothers first showed a projection of a train, and the audience members thought it was actually heading towards them.) Bouncing between staged interviews with meteorologists and other industry experts, the adaptation was framed as a panicked breaking news broadcast—complete with objects hurtling out of the sky, interrupted calls from fighter plane pilots, and a mass exodus from New York City as citizens fled to safer ground.
There were people who, for that dark hour, thought that the world really was ending. (In fact, there were outraged listeners who thought that he should be sued for “his inhuman instincts and his fiendish joy in causing distress and suffering all over the country.”)
Can you imagine the relief they must have felt when the broadcast came to this beautiful close?
This is Orson Welles, ladies and gentlemen, out of character to assure you that The War of The Worlds has no further significance than as the holiday offering it was intended to be. The Mercury Theatre’s own radio version of dressing up in a sheet and jumping out of a bush and saying Boo! Starting now, we couldn’t soap all your windows and steal all your garden gates by tomorrow night… so we did the best next thing. We annihilated the world before your very ears, and utterly destroyed the C. B. S. You will be relieved, I hope, to learn that we didn’t mean it, and that both institutions are still open for business. So goodbye everybody, and remember the terrible lesson you learned tonight. That grinning, glowing, globular invader of your living room is an inhabitant of the pumpkin patch, and if your doorbell rings and nobody’s there, that was no Martian . . . it’s Halloween.
|
yes
|
Radio
|
Did the War of the Worlds radio broadcast cause mass panic?
|
yes_statement
|
the "war" of the worlds "radio" "broadcast" "caused" "mass" "panic".. "mass" "panic" was "caused" by the "war" of the worlds "radio" "broadcast".
|
https://www.telegraph.co.uk/radio/what-to-listen-to/the-war-of-the-worlds-panic-was-a-myth/
|
The War of the Worlds panic was a myth
|
The War of the Worlds panic was a myth
Orson Welles broadcasts his radio show of HG Wells' science fiction novel The War of the Worlds in New York in October 1938
Credit: AP
The story that mass panic broke out because of an Orson Welles radio show became part of modern folklore. The idea that hysteria swept America on October 30, 1938, when a 62-minute radio dramatisation of The War of the Worlds aired, remained unchallenged for nearly eight decades. Even those who had never heard Welles reading the HG Wells story about invading Martians wielding deadly heat-rays later claimed to have been terrified. Welles, who was born on May 6, 1915, used simulated on-the-scene radio reports about aliens advancing on New York City to pep up the story by Wells, who died on August 13 1946. But what is the truth about that historic Halloween eve CBS Radio show from the Mercury Theatre in New York?
DON'T PANIC . . . According to popular myth, thousands of New Yorkers fled their homes in panic, with swarms of terrified citizens crowding the streets in different American cities to catch a glimpse of a “real space battle”. In 1954, Ben Gross, radio editor for the New York Daily News, wrote in his memoir that New York's streets were "nearly deserted" that October night in 1938. In the Orson Welles broadcast, part of the hoax involved the town of Grover’s Mill, near Princeton in New Jersey, being taken over by aliens. Welles and scriptwriter Howard E Koch (who went on to co-write the film Casablanca) skillfully ratcheted up the tension with fake radio reports from the US infantry and air force. The true extent of the panic seems to have been that a small band of Grover's Mill locals, believing the town's water tower on Grover's Mill Road had been turned into a “giant Martian war machine”, fired guns filled with buckshot in an attack on the water tower. In 1998, residents held a tongue-in-cheek "Martian Ball" to commemorate the 60th anniversary of the incident.
WHAT ABOUT PEOPLE JUMPING OFF BUILDINGS AND HAVING NERVOUS BREAKDOWNS? In the immediate aftermath of the broadcast, analysts in Princeton’s Office of Radio Research, working under the direction of Professor Hadley Cantril, sought to verify a rumour that several people had been treated for shock at St Michael’s Hospital in Newark, NJ after the programme. The rumour was found to be false. In addition, when they surveyed six New York City hospitals in December 1938, they found that “none of them had any record of any cases brought in specifically on account of the broadcast”. A Washington Post claim that a man died of a heart attack brought on by listening to the programme was never verified. Police records for New Jersey did show an increase in calls on the night of the show. However, in the preface to his textbook Introduction to Collective Behaviour, academic David Miller points out that: "Some people called to find out where they could go to donate blood. Some callers were simply angry that such a realistic show was allowed on the air, while others called CBS to congratulate Mercury Theatre for the exciting Halloween programme".
How the newspapers reported the broadcast
AND IN FACT NOT MANY PEOPLE HEARD THE SHOW . . . On the evening of October 30, 1938, most people tuning into radio were in fact listening to the highly popular Chase and Sanborn Hour, a comedy variety show hosted by the ventriloquist Edgar Bergen, which was airing at the same time as War of the Worlds on the competing NBC network. The radio ratings survey firm C.E. Hooper Company was, coincidentally, conducting a telephone poll that night of approximately five thousand households. They asked: "To what programme are you listening?” Only two per cent of people said they were listening to The War of the Worlds. In addition, several key CBS affiliate radio stations (including Boston’s WEEI) decided to carry local commercial shows rather than Welles's programme, further shrinking its audience. Frank Stanton, later president of CBS, said that CBS were never censored for The War of the Worlds, admitting: "In the first place, most people didn't hear the show."
AND THE SHOW HAD CARRIED A WARNING THAT IT WAS MADE UP . . . Welles, who went on to have such a glittering career as a film director (Citizen Kane, The Magnificent Ambersons, Othello) and actor (The Third Man, Compulsion) knew what he was doing with such artful radio mischief-making. He played recordings of Herbert Morrison's radio reports of the Hindenburg disaster for actor Frank Readick and the rest of the cast, to demonstrate the mood he wanted. He said: "We wanted people to understand that they shouldn’t take any opinion predigested, and they shouldn’t swallow everything that came through the tap whether it was radio or not. But as I say it was only a partial experiment, we had no idea the extent of the thing." To mitigate any possible fallout from the hoax, CBS made him carry warnings that it was a fictional show at the start of the show and again at 40 and 55 minutes into the broadcast.
AND YET THE MYTH OF MASS PANIC TOOK HOLD? Research published six weeks after the broadcast by the American Institute of Public Opinion was skewed. They later admitted that figures of one million people listening to the programme were wildly inaccurate. In addition, where people surveyed had said they were “frightened”, “disturbed”, or “excited” by the show, these terms were conflated into the description that they had felt “panicked” by The War of the Worlds. Such was the initial publicity that Adolf Hitler even got in on the act, citing the supposed panic as "evidence of the decadence and corrupt condition of democracy".
MAINLY BECAUSE IT WAS FUELLED BY NEWSPAPER COVERAGE Newspaper headlines about the event were lurid. 'Radio Listeners in Panic, Taking War Drama as Fact' was the front page headline on The New York Times. 'Radio Fake Scares Nation', said the Chicago Herald and Examiner. 'US Terrorised By Radio's Men From Mars' said the San Francisco Chronicle. There were also front page stories in the The Boston Daily Globe and The Detroit News. One repeated claim was that within a month, 12,500 articles had been published throughout the world on the alien mass panic. Yet in his comprehensive analysis of contemporaneous reporting in a book called Getting it Wrong, American University professor W Joseph Campbell found that almost all newspapers swiftly dropped the story. “Coverage of the broadcast faded quickly from the front pages, in most cases after just a day or two," he wrote.
WHICH GAVE NEWSPAPERS THE CHANCE TO ATTACK RADIO The newspapers had a clear agenda. An editorial in The New York Times, headlined In the Terror by Radio, was used to censure the relatively new medium of radio, which was becoming a serious competitor in providing news and advertising. "Radio is new but it has adult responsibilities. It has not mastered itself or the material it uses,” said the editorial leader comment on November 1 1938. In an excellent piece in Slate magazine in 2013, Jefferson Pooley (associate professor of media and communication at Muhlenberg College) and Michael J Socolow (associate professor of communication and journalism at the University of Maine) looked at the continuing popularity of the myth of mass panic and they took to task NPR's Radiolab programme about the incident and the Radiolab assertion that “The United States experienced a kind of mass hysteria that we’ve never seen before.” Pooley and Socolow wrote: "How did the story of panicked listeners begin? Blame America’s newspapers. Radio had siphoned off advertising revenue from print during the Depression, badly damaging the newspaper industry. So the papers seized the opportunity presented by Welles’s programme, perhaps to discredit radio as a source of news. The newspaper industry sensationalised the panic to prove to advertisers, and regulators, that radio management was irresponsible and not to be trusted."
Orson Welles was 23 when he recorded The War of the Worlds and 26 when he made Citizen Kane (left)
Credit: Rex Features
BUT IT DIDN'T DO WELLES'S SHOW COMMERCIAL HARM In fact, the notoriety of the broadcast led the Campbell Soup Company to sponsor The Mercury Theatre on the Air, and the show was renamed The Campbell Playhouse.
AND THERE WERE NO LEGAL REPERCUSSIONS One frightened listener tried to sue CBS for $50,000, claiming the network caused her “nervous shock” with the broadcast. Her lawsuit was quickly dismissed. Only one claim was ever successful, for a pair of black men's shoes (size 9B) by a Massachusetts man who said he had spent the money he had saved to buy shoes on a train ticket to escape the Martians. Welles reportedly paid for the man's shoes.
EVEN HG WELLS MOCKED THE PRETENCE The War of the Worlds was originally published as a novel in 1898 (in the story it is Leatherhead, Woking and Weybridge in Surrey that are attacked by aliens). When HG Wells was asked about the supposed mass panic in America 40 years after his book came out, he was ironic about the whole incident. HG Wells was questioned during a joint radio interview with Welles on KTSA in San Antonio in 1940 and replied: “In England we had articles about it, and people said, ‘Have you never heard of Halloween in America, when everybody pretends to see ghosts?’”
THOUGH THE PUBLICITY SUITED ORSON WELLES In Getting it Wrong, Professor Campbell said that Welles was happy in later decades to encourage the myth of the panic because it was a "tale just too delectable not to be true". Campbell added that: "It is part of the lore of Orson Welles, the bad-boy genius who did his best work before he turned 30."
IT REMAINS A POTENT FANTASY
There have been lots of dramatisations of the events of that night, including a 1975 made-for-television movie broadcast on the ABC network called The Night That Panicked America. Some film and TV makers treated the incident with more humour. In a The Simpsons parody, Homer is tricked into believing a Martian has eaten the President of the United States. In Woody Allen's 1987 film Radio Days, the broadcast prompts a character to abandon his date Aunt Bea (Dianne Wiest) in the car and run away in panic, leaving Bea to walk six miles home. The next day, he calls her for another date. She turns it down claiming she has "married a Martian".
WHAT HAPPENED TO THE SCRIPT
Welles's directorial copy of the broadcast was auctioned in 1994, at Christie's in New York, and bought for £24,000 by filmmaker Steven Spielberg. He went on to make a version of The War of the Worlds in 2005, starring Tom Cruise.
THE WAR OF THE WORLDS WAS NOT THE FIRST RADIO HOAX England actually beat America to that trick, because the first radio hoax was broadcast on 16 January 1926, on the BBC. A talk on 18th-century British literature was interrupted by a 12-minute series of fictitious news bulletins about a riot in London, in which Big Ben was blown up by mortars, the Savoy Hotel burnt down and a politician lynched on a tramway post. The show, curiously, was written by Father Ronald Knox, a Catholic priest.
Listen to the War of the Worlds broadcast
AND IT'S NOT A GOOD IDEA TO COPY ORSON WELLES . . . In February 1949, Leonardo Paez and Eduardo Alcaraz produced a Spanish-language version of Welles's 1938 script for Radio Quito in Ecuador. The broadcast set off panic. Quito police and fire brigades rushed out of town to fight the supposed alien invasion force. After it was revealed that the broadcast was fiction, the panic transformed into a riot. The riot resulted in at least seven deaths, including those of Paez's girlfriend and nephew. The offices of Radio Quito and of El Comercio, a local newspaper that had participated in the hoax by publishing false reports of unidentified flying objects in the days preceding the broadcast, were both burned to the ground.
|
The War of the Worlds panic was a myth
Orson Welles broadcasts his radio show of HG Wells' science fiction novel The War of the Worlds in New York in October 1938
Credit: AP
The story that mass panic broke out because of an Orson Welles radio show became part of modern folklore. The idea that hysteria swept America on October 30, 1938, when a 62-minute radio dramatisation of The War of the Worlds aired, remained unchallenged for nearly eight decades. Even those who had never heard Welles reading the HG Wells story about invading Martians wielding deadly heat-rays later claimed to have been terrified. Welles, who was born on May 6, 1915, used simulated on-the-scene radio reports about aliens advancing on New York City to pep up the story by Wells, who died on August 13 1946. But what is the truth about that historic Halloween eve CBS Radio show from the Mercury Theatre in New York?
DON'T PANIC . . . According to popular myth, thousands of New Yorkers fled their homes in panic, with swarms of terrified citizens crowding the streets in different American cities to catch a glimpse of a “real space battle”. In 1954, Ben Gross, radio editor for the New York Daily News, wrote in his memoir that New York's streets were "nearly deserted" that October night in 1938. In the Orson Welles broadcast, part of the hoax involved the town of Grover’s Mill, near Princeton in New Jersey, being taken over by aliens. Welles and scriptwriter Howard E Koch (who went on to co-write the film Casablanca) skillfully ratcheted up the tension with fake radio reports from the US infantry and air force. The true extent of the panic seems to have been that a small band of Grover's Mill locals, believing the town's water tower on Grover's Mill Road had been turned into a “giant Martian war machine”, fired guns filled with buckshot in an attack on the water tower.
|
no
|
Radio
|
Did the War of the Worlds radio broadcast cause mass panic?
|
yes_statement
|
the "war" of the worlds "radio" "broadcast" "caused" "mass" "panic".. "mass" "panic" was "caused" by the "war" of the worlds "radio" "broadcast".
|
https://theworld.org/stories/2013-10-29/war-worlds-turns-75-could-it-happen-again
|
'War of the Worlds' turns 75. Could it happen again? | The World ...
|
'War of the Worlds' turns 75. Could it happen again?
75 years ago, a Martian invasion in a small New Jersey town caused mass panic throughout America. We all know now that it wasn't real, of course.
It was Orson Welles’ legendary radio broadcast of The War of the Worlds. You may not know, though, that its impact continued to sow fear and havoc around the world for years.
The War of the Worlds made international news the day after it aired on CBS Radio in 1938. But those headlines didn’t prevent people from being fooled by copycat productions many years later.
In retrospect, it was an unbelievable story. Yet people across the US believed that monsters from Mars had crushed and burned nearly 7,000 people within minutes.
Writer John Gosling is an expert in all things War of the Worlds. He says during the Golden Age of Radio, Orson Welles captured a moment of vulnerability in the country, with the Great Depression still underway and the fear of Nazis growing.
“The basic story is scary enough,” Gosling said. “Martians are coming and they are going to destroy the earth and suck our blood and poison us — you know, it’s a scary concept, but you can localize it so well.”
He says Welles created a formula that could be easily replicated, which is exactly what happened, again and again — in Chile, Brazil, Portugal.... Gosling called it lightning in a bottle. In his book, Waging the War of the Worlds, he examines the history of the original broadcast and its many takeoffs, the deadliest of which happened in Quito, Ecuador, on February 12, 1949.
The popular music of Gonzalo Benitez started the Quito broadcast. And then, just as in the original, the music was interrupted by someone rushing to the microphone with the news that Martians were invading, smothering neighboring cities with gas. Soon, other radio stations began repeating the story.
Priests were leading people in prayer on the streets. When the radio station realized what was happening, they issued on-air disclaimers, but that only made some people angry. About 300 people marched on the station, which was housed on the upper floor of the local newspaper.
“They blocked all the exits. They hurled flaming torches into the basement, setting the presses on fire and all of the chemicals down below on fire. They prevented the police from intervening. So the building’s on fire, the exits are blocked. One of the actors sort-of broke character and pleaded on air for help from the authorities. But, by all accounts, the authorities weren’t there to help because the authorities — the police and the military — had left town to go and fight the Martians!” Gosling said.
At least 6 people died in the fire. And the broadcast tape was lost.
There were other remakes of The War of the Worlds that caused minor panic, like the 1958 remake that aired in Portugal.
The Lisbon War of the Worlds aired on a Catholic radio station. When people started getting worried, the police called the station and ordered the broadcast stopped. But the team of actors thought the police were joking. Who would be so gullible as to fall for this again?
“What ended up happening was two military guys entered the station with guns ordering them to stop. They were literally stopped at gunpoint,” Gosling remembered.
Portugal is a deeply religious country, and since the broadcast was coming from a Catholic radio station, Gosling said a tale of impending apocalypse wasn’t so farfetched.
But could it happen again? Gosling thinks if we aren’t careful, it could.
In this new golden age of social media, he said, people often don’t know what to believe.
Sign up for our daily newsletter
Sign up for The Top of the World, delivered to your inbox every weekday morning.
|
'War of the Worlds' turns 75. Could it happen again?
75 years ago, a Martian invasion in a small New Jersey town caused mass panic throughout America. We all know now that it wasn't real, of course.
It was Orson Welles’ legendary radio broadcast of The War of the Worlds. You may not know, though, that its impact continued to sow fear and havoc around the world for years.
The War of the Worlds made international news the day after it aired on CBS Radio in 1938. But those headlines didn’t prevent people from being fooled by copycat productions many years later.
In retrospect, it was an unbelievable story. Yet people across the US believed that monsters from Mars had crushed and burned nearly 7,000 people within minutes.
Writer John Gosling is an expert in all things War of the Worlds. He says during the Golden Age of Radio, Orson Welles captured a moment of vulnerability in the country, with the Great Depression still underway and the fear of Nazis growing.
“The basic story is scary enough,” Gosling said. “Martians are coming and they are going to destroy the earth and suck our blood and poison us — you know, it’s a scary concept, but you can localize it so well.”
He says Welles created a formula that could be easily replicated, which is exactly what happened, again and again — in Chile, Brazil, Portugal.... Gosling called it lightning in a bottle. In his book, Waging the War of the Worlds, he examines the history of the original broadcast and its many takeoffs, the deadliest of which happened in Quito, Ecuador, on February 12, 1949.
The popular music of Gonzalo Benitez started the Quito broadcast. And then, just as in the original, the music was interrupted by someone rushing to the microphone with the news that Martians were invading, smothering neighboring cities with gas. Soon, other radio stations began repeating the story.
Priests were leading people in prayer on the streets. When the radio station realized what was happening, they issued on-air disclaimers, but that only made some people angry.
|
yes
|
Radio
|
Did the War of the Worlds radio broadcast cause mass panic?
|
yes_statement
|
the "war" of the worlds "radio" "broadcast" "caused" "mass" "panic".. "mass" "panic" was "caused" by the "war" of the worlds "radio" "broadcast".
|
http://natedsanders.com/original___war_of_the_worlds___radio_broadcast_scr-lot42941.aspx
|
Lot Detail - Original ''War of The Worlds'' Radio Broadcast Script ...
|
Original ''War of The Worlds'' Radio Broadcast Script Draft, as Read by Orson Welles in 1938 -- This Broadcast Famously Caused Mass Panic of an Alien Landing
Original typewritten draft of ''The War of the Worlds'', as famously read by Orson Welles on his radio series, Mercury Theater. Airing on CBS on 30 October 1938, the episode (titled ''An Attack by the Men of Mars'' on the script) is known for its realistic depiction which many duped listeners took as fact after tuning in past the introduction. Welles was then forced to give a press conference in which he apologized for the panic he caused, stating it was not intentional, even though the story was read as a news bulletin. 17pp. script is typewritten on cream paper with numerous misspellings, corrections and incomplete sentences with one staple at top left and extra page inserted as 12-A. Comes with provenance from previous owner who purchased script from the estate of the radio pioneer James Jewell. Measures 8.5'' x 11''. Last page is detached, minor holing at top left of first page, otherwise near fine condition.
Original ''War of The Worlds'' Radio Broadcast Script Draft, as Read by Orson Welles in 1938 -- This Broadcast Famously Caused Mass Panic of an Alien Landing
|
Original ''War of The Worlds'' Radio Broadcast Script Draft, as Read by Orson Welles in 1938 -- This Broadcast Famously Caused Mass Panic of an Alien Landing
Original typewritten draft of ''The War of the Worlds'', as famously read by Orson Welles on his radio series, Mercury Theater. Airing on CBS on 30 October 1938, the episode (titled ''An Attack by the Men of Mars'' on the script) is known for its realistic depiction which many duped listeners took as fact after tuning in past the introduction. Welles was then forced to give a press conference in which he apologized for the panic he caused, stating it was not intentional, even though the story was read as a news bulletin. 17pp. script is typewritten on cream paper with numerous misspellings, corrections and incomplete sentences with one staple at top left and extra page inserted as 12-A. Comes with provenance from previous owner who purchased script from the estate of the radio pioneer James Jewell. Measures 8.5'' x 11''. Last page is detached, minor holing at top left of first page, otherwise near fine condition.
Original ''War of The Worlds'' Radio Broadcast Script Draft, as Read by Orson Welles in 1938 --
|
yes
|
Radio
|
Did the War of the Worlds radio broadcast cause mass panic?
|
no_statement
|
the "war" of the worlds "radio" "broadcast" did not "cause" "mass" "panic".. "mass" "panic" was not "caused" by the "war" of the worlds "radio" "broadcast".
|
https://www.cnet.com/culture/could-the-war-of-the-worlds-scare-happen-today/
|
Could the 'War of the Worlds' scare happen today? - CNET
|
Could the 'War of the Worlds' scare happen today?
In 1938, a Halloween-themed radio broadcast sent Americans into panic over a fictional martian invasion. A lot's changed since then. Photos: Hoaxes, mess-ups and pranks--oh my
Caroline McCarthyFormer Staff writer, CNET News
Caroline McCarthy, a CNET News staff writer, is a downtown Manhattanite happily addicted to social-media tools and restaurant blogs. Her pre-CNET resume includes interning at an IT security firm and brewing cappuccinos.
In 1938, a convincing actor with a good script and access to a national media outlet could cause mass panic in a matter of minutes.
For a Halloween special aired the night before the holiday, the CBS radio series Mercury Theatre on the Air broadcast a documentary-style adaptation of H.G. Wells' science fiction novel The War of the Worlds directed and narrated by actor Orson Welles. The now-legendary premise was that martians had landed in rural New Jersey, midway between the metropolises of New York and Philadelphia, and were wreaking havoc with poison gas and heat rays.
Like any fictional radio broadcast of the time, the Mercury Theatre Halloween special had opening and closing credits. Unfortunately, a sizable number of listeners seemed to miss that part. Reports detailed stories of people fleeing their homes, flooding their local police stations with telephone calls, and the rumor mill running wild. The front page of The New York Times the next day featured a story with the headline "Radio listeners in panic, taking war drama as fact."
Could The War of the Worlds panic happen again? While it's hard to imagine a scripted performance causing people to arm themselves against an alien invasion today (and a radio show certainly wouldn't do it), misunderstanding and misinformation can still lead to mass hysteria, as the city of Boston learned nearly seven decades later.
A passenger on the city subway alerted authorities to a "suspicious device" near the Interstate 93 highway on January 31, 2007. Soon, other people started spotting more of them around the city. After subway station closings, transportation delays, a halt to bridge and river traffic, and anxious mayoral press conferences, officials started to realize the threat was actually a marketing campaign for the cartoon show Aqua Teen Hunger Force, and the "suspicious devices" in question were light-up images of the program's "Mooninite" characters.
But this time around, the paranoia that ensued wasn't a national crisis, but a national joke with Boston as the punchline. The New York Times (in the form of a blog post) filed it under "the major 'oops' department."
The War of the Worlds era is long over. We're no less gullible than we were seven decades ago, but it's more difficult to fool a huge number of people in a short span of time. "There is (now) a suspicion and a cynicism toward information that was not the case in 1938," said Robert Thompson, a professor of media and culture at Syracuse University.
We're too cynical
Concern over the plausibility of mainstream media reports, from allegedly rampant shark attacks to that incident involving the letters "W," "M," and "D," has made news consumers understandably skeptical about what's on TV or the Web. Entire Web sites, like Snopes.com, exist to debunk the content of those "warning" e-mails that have been forwarded around since the dawn of Hotmail. Eagle-eyed Wikipedia loyalists keep tabs on exactly what alterations are made to the "open" encyclopedia.
And the cutthroat competition of media outlets, from blogs to cable news channels, has made it even more appealing for one network or publication to catch another in an embarrassing faux pas. "For most of the country, by the time we heard the news about it, it was already being debunked," Thompson said of the Mooninite incident. "That's the big difference. The first news I heard about the whole Aqua Teen Hunger Force scare was about the big hand-wringing after the fact."
Most misinformation these days can be quickly debunked, like the hoax Apple memo that ever-so-briefly caused the company stock to dive before it was exposed as a fake minutes later.
"There is (now) a suspicion and a cynicism toward information that was not the case in 1938."
--Robert Thompson, professor of media and culture, Syracuse University
"I don't think something quite like the Orson Welles thing could happen again in this day and age, probably because of the Internet," said Charlie Todd, founder of the New York-based Improv Everywhere, a troupe of "undercover agents" that "causes scenes of chaos and joy in public places." Todd says he still hears about people on the Internet who see YouTube footage of Improv Everywhere pranks--like a staged protest of several dozen redheads picketing a Wendy's fast-food restaurant over its allegedly "racist" mascot--and think they're real. But he dismisses such viewers. "If anyone was intelligent enough to read the description of the video right next to the video, they'd see it was a fake protest."
Likewise, a fake news story that accidentally gets circulated as a real one tends to spawn less hysteria from gullible believers, and more ridicule directed at the erring news outlet. Gregory Galant, founder of fake news outlet News Groper, was surprised when an MSNBC story quoted his site's over-the-top "fake Al Sharpton" blog as a blog that was actually written by the civil rights advocate. "We were sitting around the office one afternoon, and all of a sudden I noticed we were getting all this traffic from MSNBC, and we were kind of baffled," Galant said. "We were just speechless. It's just like, one of these moments where you never think that's going to happen." Because, in the age of rapid-fire fact-checking thanks to a quick Google search, something like that just isn't supposed to occur.
Within a few hours, of course, MSNBC had corrected its error. Just as with the Mooninite incident, the most extensive coverage came from snarky bloggers taking shots at gullible mainstream media.
But on the other hand, even if the rapid spread of information on the Web has meant that legitimate mass hysteria and major misconceptions are restricted to niche interest groups or metro areas (say, Boston) rather than entire countries, the sociological reverberations remain the same. Both The War of the Worlds radio announcement and the Mooninite scare, for example, have deep roots in paranoia over national security.
"In the late '30s, at the very time when that War of the Worlds thing played, we were constantly listening to our radios and hearing 'We interrupt this program with breaking news,'" Thompson said. "And it would be bad news, as the prelude to the Second World War was going on. Orson Welles used that idiom of radio talk, that interrupting the message or dangerous talk that we were not only completely used to, but used to taking very seriously."
And then there's the possibility of (figurative) planetary alignment: if governments and major media outlets promote something as truth, the populace both online and offline will likely follow suit.
Thompson raised the well-documented Y2K paranoia as an example of such. Despite extensive corporate and government measures put into place to specifically make sure the world's technological backbone didn't collapse on January 1, there was still anxiety over an impending technological meltdown in the last few days of 1999. "That was probably one of the great War of the Worlds types of moments because it stretched over a long period of time and then, of course, nothing happened," Thompson said. "If you went as far as to get some cash from the cash machine a couple of days before New Year's, that's a sign that this story did enough to alter behavior."
|
Could the 'War of the Worlds' scare happen today?
In 1938, a Halloween-themed radio broadcast sent Americans into panic over a fictional martian invasion. A lot's changed since then. Photos: Hoaxes, mess-ups and pranks--oh my
Caroline McCarthyFormer Staff writer, CNET News
Caroline McCarthy, a CNET News staff writer, is a downtown Manhattanite happily addicted to social-media tools and restaurant blogs. Her pre-CNET resume includes interning at an IT security firm and brewing cappuccinos.
In 1938, a convincing actor with a good script and access to a national media outlet could cause mass panic in a matter of minutes.
For a Halloween special aired the night before the holiday, the CBS radio series Mercury Theatre on the Air broadcast a documentary-style adaptation of H.G. Wells' science fiction novel The War of the Worlds directed and narrated by actor Orson Welles. The now-legendary premise was that martians had landed in rural New Jersey, midway between the metropolises of New York and Philadelphia, and were wreaking havoc with poison gas and heat rays.
Like any fictional radio broadcast of the time, the Mercury Theatre Halloween special had opening and closing credits. Unfortunately, a sizable number of listeners seemed to miss that part. Reports detailed stories of people fleeing their homes, flooding their local police stations with telephone calls, and the rumor mill running wild. The front page of The New York Times the next day featured a story with the headline "Radio listeners in panic, taking war drama as fact."
Could The War of the Worlds panic happen again? While it's hard to imagine a scripted performance causing people to arm themselves against an alien invasion today (and a radio show certainly wouldn't do it), misunderstanding and misinformation can still lead to mass hysteria, as the city of Boston learned nearly seven decades later.
A passenger on the city subway alerted authorities to a "suspicious device" near the Interstate 93 highway on January 31, 2007.
|
yes
|
Radio
|
Did the War of the Worlds radio broadcast cause mass panic?
|
no_statement
|
the "war" of the worlds "radio" "broadcast" did not "cause" "mass" "panic".. "mass" "panic" was not "caused" by the "war" of the worlds "radio" "broadcast".
|
https://www.otrcat.com/war-of-the-worlds-those-pesky-aliens
|
War of the Worlds: Those Pesky Aliens | Old Time Radio
|
War of the Worlds: Those Pesky Aliens
There is arguably no more important book in the history of science fiction than The War of the Worlds. It has influenced and informed generations of writers, artists and filmmakers, but when H.G. Wells set out to chide the hypocrisy and complacency of the British Empire with his pointedly observed tale of alien invasion, surely not even he in his wildest dreams could have predicted how insidiously the story would worm its way into the public psyche, nor could he have possibly foreseen how a marvelous new invention was to become intimately entwined with his creation. In 1898 when The War of the Worlds was first published, Marconi had only just opened his first factory in Britain, yet just 40 years later The War of the Worlds would reach out via a radio drama to intrude directly into the lives of a huge number of frightened Americans.
That now infamous 1938 broadcast by Orson Welles’ Mercury Theatre on the Air was of course a landmark in the history of American radio, but it was not the last time a Martian invasion of the airwaves would trigger pandemonium. Surprisingly, almost the exact same scenario has been played out time and time again across the world, with the most recent event occurring a scant 10 years ago. From the backwoods of New Jersey to the streets of Lisbon, the Martians have had little trouble in recruiting willing collaborators to their cause, radio mavericks like Welles who sought to recapture lightning in a bottle, but who more often than not, found themselves badly burnt.
In 1974 Orson Welles alluded to his brush with infamy in his tongue-in-cheek documentary film F for Fake, observing dryly that someone in South America had ended up in jail for imitating his War of the Worlds broadcast. It’s not at all clear how much Welles really knew of those events in South America, but the continent has certainly suffered more than its fair share of attention from the Martians, and some people did see the inside of a prison cell for their troubles. There are two well known instances of a War of the Worlds broadcast getting out of hand in the area, the first in 1944 in Chile and the second in 1949 in Ecuador. The Chilean broadcast of November 12th was certainly not lacking in thrills; with Martian war machines descending on the country suspended from giant parachutes and considerable destruction of familiar landmarks reported. Chaos was said to have enveloped the capital Santiago, with one local newspaper reporting that people had flooded onto the streets, but on the scale of things, this was as nothing compared to the tragedy that was to befall Ecuador a few years later.
The disaster that engulfed the capital city of Quito on February 12th 1949, in which at least 6 people lost their lives, has long been held to be the fault of one man, a writer and producer with the El Comercio radio station called Leonardo Páez, but evidence strongly contradicting this has recently come to light. Páez has routinely been painted in almost Machiavellian terms, with claims made that he withheld his intentions from the station owners, planted false stories about UFOs in local papers to drum up public unease prior to the broadcast, and most damningly of all, locked the doors to the studio to prevent his own actors from realising the enormity of the situation that was rapidly unfolding in the city. And by all accounts, this was by every conceivable measure, a full-blown mass panic, with priests offering absolution to the faithful, crowds of terrified residents taking to the streets and army trucks setting out to confront the invaders.
If Páez had really engineered this, then his is surely one of the most successful and deadliest confidence tricks in history, but the truth is that he was simply another victim of the incredible plasticity of Wells story, which time and time again has proven its ability to adapt to prevailing conditions and exploit local undercurrents of tension and anxiety. As testified by his daughter, Páez was horrified when protestations over the air that it was only a play brought forth a mob of some 300 people to the doors of the radio station. Enraged that they had been so thoroughly fooled, and armed with rocks and blazing copies of the El Comercio newspaper (the owner of the radio station with which they shared premises), the shutters to the basement printing presses were forced open, and the building callously torched.
Cut off by the fire, Páez and his colleagues were forced to risk a perilous escape across the rooftops of nearby buildings, but tragically many of his friends did not reach safety. It has been suggested that Páez then vanished into the night, never to be seen again, but though many involved in the broadcast were briefly arrested in the coming days, Páez had retained documents proving his innocence - primarily a copy of his contract that proved the station management was well aware of the broadcast. Entirely exonerated of any blame for the tragedy, Páez even received the keys to the city of Quito in later life for his contribution to Ecuadorian culture, hardly the sort of honour likely to be bestowed on a mass murderer.
Yet this was not the end of the Martian interest in South America. Far less well known is the fact that Brazil has fallen prey on several occasions to rapacious invaders from space. On one occasion in 1954, a series of messages alleging an alien invasion were transmitted by a bored telegraph operator and enthusiastically taken up by local radio stations, prompting Air Force planes to take to the air. Then in 1971, a radio station seeking to boost ratings engineered a version of the 1938 broadcast that culminated in the occupation of the radio station by a battalion of troops from a nearby military base. Only quick thinking by the station staff got them off the hook, hastily doctoring a recording of the broadcast before the troops arrived to make it seem as if numerous careful disclaimers had been broadcast during the course of the production.
Brazil at the time was under a military dictatorship, so the staff of the station had taken quite a risk, but a radio producer in Portugal called Matos Maia had also found himself in serious trouble with the authorities some years earlier, when his production of The War of the Worlds (itself closely modelled on the Howard Koch script of 1938) fell foul of a dictatorial government. Maia had set his 1958 production in and around Lisbon and, halfway through his broadcast, suffered the indignity of having his studio invaded by armed police and the production unceremoniously yanked off the air as panic engulfed the city. An invitation to the sinister secret police headquarters in Lisbon followed, with dire threats made to his safety if he ever dared repeat his offence. Later Maia was to discover that the order for his arrest had come directly from the head of state himself.
But just in case you are thinking that post war America would be far too sophisticated to fall for this sort of thing anymore, let me quickly relieve you of that rash preconception. America has in fact faced several more Martian attacks, with significant scares occurring in the vicinities of Buffalo and Rhode Island, once in 1968 and again in 1974. Both cleverly re-imagined the 1938 broadcast with modern music and production values, convincing listeners that they were hearing a contemporary account of a Martian invasion.
It is easy in hindsight to laugh at the antics of people who should surely have known better, but there is a sinister side to the phenomenon, pointing to an all too obvious readiness on the part of listeners to be caught up in the moment and cast aside all common sense. In 1938, war jitters in America played a large part in the panic, as of course did the skill of Orson Welles and his Mercury players, but careful analysis of panics in other countries reveals a wide range of triggers. In Brazil in 1954, the populace was already stressed by a rash of UFO sightings, while in Portugal, deeply held religious beliefs confused Martians with apocalyptic visions. But no matter the underlying cause, the one extraordinary constant is the Martians, a malignant force brought to life by radio that seems able to awaken in us an almost primordial fear of the unknown. The last recorded panic caused by a War of the Worlds radio broadcast was in Portugal in 1998. The very pertinent question arises: have we yet seen the last of the Martians?
About the Author: John Gosling has written for a number of British science fiction magazines and is an avid researcher of War of the World radio broadcasts. His first book, "Waging the War of the Worlds", tells the in-depth story of the 1938 broadcast and uncovers nine other panic broadcasts that have terrified listeners all over the globe.
|
An invitation to the sinister secret police headquarters in Lisbon followed, with dire threats made to his safety if he ever dared repeat his offence. Later Maia was to discover that the order for his arrest had come directly from the head of state himself.
But just in case you are thinking that post war America would be far too sophisticated to fall for this sort of thing anymore, let me quickly relieve you of that rash preconception. America has in fact faced several more Martian attacks, with significant scares occurring in the vicinities of Buffalo and Rhode Island, once in 1968 and again in 1974. Both cleverly re-imagined the 1938 broadcast with modern music and production values, convincing listeners that they were hearing a contemporary account of a Martian invasion.
It is easy in hindsight to laugh at the antics of people who should surely have known better, but there is a sinister side to the phenomenon, pointing to an all too obvious readiness on the part of listeners to be caught up in the moment and cast aside all common sense. In 1938, war jitters in America played a large part in the panic, as of course did the skill of Orson Welles and his Mercury players, but careful analysis of panics in other countries reveals a wide range of triggers. In Brazil in 1954, the populace was already stressed by a rash of UFO sightings, while in Portugal, deeply held religious beliefs confused Martians with apocalyptic visions. But no matter the underlying cause, the one extraordinary constant is the Martians, a malignant force brought to life by radio that seems able to awaken in us an almost primordial fear of the unknown. The last recorded panic caused by a War of the Worlds radio broadcast was in Portugal in 1998. The very pertinent question arises: have we yet seen the last of the Martians?
About the Author: John Gosling has written for a number of British science fiction magazines and is an avid researcher of War of the World radio broadcasts.
|
yes
|
Radio
|
Did the War of the Worlds radio broadcast cause mass panic?
|
no_statement
|
the "war" of the worlds "radio" "broadcast" did not "cause" "mass" "panic".. "mass" "panic" was not "caused" by the "war" of the worlds "radio" "broadcast".
|
https://www.theguardian.com/tv-and-radio/2022/dec/16/pokemon-explosion-tv-japan-children-hospital
|
'There was an explosion, and I had to close my eyes': how TV left ...
|
Twenty-five years ago, at precisely 6.51pm on 16 December 1997, hundreds of children across Japan experienced seizures. In total, 685 – 310 boys and 375 girls – were taken by ambulance to hospital. Within two days, 12,000 children had reported symptoms of illness. The common factor in this sudden mass outbreak was an unlikely culprit: an episode of the Pokémon cartoon series.
The instalment in question, Dennō Senshi Porygon (Electric Soldier Porygon), was the 38th in the Pokémon anime’s first season – and initially, at least, it sparked a medical mystery. Twenty minutes into the cartoon, an explosion took place, illustrated by an animation technique known as paka paka, which broadcast alternating red and blue flashing lights at a rate of 12Hz for six seconds. Instantly, hundreds of children experienced photosensitive epileptic seizures – accounting for some, but far from all, of the hospitalisations.
Ten-year-old Takuya Sato said: “Towards the end of the programme there was an explosion, and I had to close my eyes because of an enormous yellow light like a camera flash.” A 15-year-old girl from Nagoya reported: “As I was watching blue and red lights flashing on the screen, I felt my body becoming tense. I do not remember what happened afterwards.”
The condition is perhaps best understood as the placebo effect in reverse. People can make themselves ill from an idea
The phenomenon, headlined “Pokémon Shock” by Japanese media, became big news – it was reported around the world. The cartoon’s producers were questioned by police, while the ministry of health, labour and welfare held an emergency meeting. The share price of Nintendo, the company behind the Pokémon games, dropped by 3.2%.
To medical experts, the figure of 12,000 children requiring medical treatment made no sense. The programme had been watched by 4.6 million households. About one in 5,000 people has photosensitive epilepsy: 0.02%. The number of children reporting symptoms seemed out of all proportion.
The mystery persisted for four years, until it piqued the attention of Benjamin Radford, a research fellow at the Committee for Skeptical Inquiry in the US, and co-host of the podcast Squaring the Strange. “The investigation had just stalled, the mystery sort of faded away without an explanation,” he says. “I wanted to see if I could solve the case.”
Along with Robert Bartholomew, a medical sociologist, he set about examining the timeline of events, and unearthed a key detail. “What people missed was that it wasn’t just a one-night event but instead unfolded over several days, and the contagion occurred in schools and over the news media.”
Orson Welles rehearsing The War of the Worlds in 1938. Photograph: World History Archive/Alamy
What Radford and Bartholomew discovered was that the vast majority of affected children had become ill after hearing about the programme’s effects. Although the cartoon’s transmission on 16 December did indeed cause hundreds of children to experience symptoms resulting from photosensitive epilepsy, something else was at play in the subsequent cases. The next day, in playgrounds and classrooms, in news bulletins and at breakfast tables, all the talk was of Pokémon Shock. At which point, more children began to feel unwell. This was exacerbated when, astonishingly, some news shows actually screened the offending clip. But this time, the symptoms (headaches, dizziness, vomiting) were, says Radford, “much more characteristic of mass sociogenic illness [MSI] than photosensitive epilepsy”.
MSI, also known as mass psychogenic illness (MPI), and more colloquially as mass hysteria, is a well-documented phenomenon with cases spread throughout history, from meowing nuns and dancing epidemics in the middle ages to an outbreak of uncontrollable laughter in Tanzania in 1962. According to Radford: “MSI is complex and often misunderstood, but basically it’s when anxiety manifests itself in physical symptoms that can be spread through social contact. It is often found in closed social units such as factories and schools, where there is a strong social hierarchy. The symptoms are real – the victims are not faking or making them up – but the cause is misattributed.” The condition is perhaps best understood as the placebo effect in reverse. People can make themselves ill from nothing more than an idea.
The Pokémon Shock event wasn’t the only case of a broadcast programme triggering an outbreak of MSI. In May 2006, the Padre António Vieira secondary school in Lisbon reported 22 cases of an unknown virus spreading rapidly in its halls. Students complained of difficulty breathing, rashes, dizziness and fainting. The school shut down as news of the virus spread. Before long, it had affected more than 300 students in 15 Portuguese schools, many of which closed.
Students complained of difficulty breathing, rashes, dizziness and fainting. The school shut down
Doctors were baffled, and could find no evidence of the virus, beyond the students’ symptoms. One medic, Dr Mario Almeida, said at the time: “I know of no disease which is so selective that it only attacks schoolchildren.”
Then the strange truth began to emerge. Just before the outbreak, the popular teen soap Morangos com Açúcar (Strawberries with Sugar) had aired a storyline in which a terrible disease had struck a school. While working on an experiment with a virus (not a noted part of the high-school syllabus, one imagines) a character inadvertently released it and students were immediately struck down, the sickness spreading mercilessly through the fictional school of Colegio Da Barra.
Back in the real world, with the end of the academic year approaching, and many students stressed about exams, the story simply had a more dramatic effect on its young audience than had been intended.
The spoof Ghostwatch documentary from 1992. Photograph: BBC
It is not only schoolchildren who are susceptible, however. On 31 October 1992, a Halloween broadcast caused mass panic across the UK. Ghostwatch adopted many of the tropes of Orson Welles’ The War of the Worlds – a radio drama that caused mass panic in the US when it aped a news report of a Martian invasion. Ghostwatch involved a supposedly live factual broadcast (actually recorded and scripted) of events as they took a terrifyingly sinister turn. Featuring familiar faces including Michael Parkinson, Sarah Greene, Mike Smith and Craig Charles, the programme purported to be an investigation into paranormal activity in a house in Northolt, west London.
The show began slowly, before ramping up the tension with a series of ever more chilling incidents, culminating in Greene, reporting live from the house, being dragged through a cellar door. The in-studio paranormal expert reported that the poltergeist, nicknamed Mr Pipes, was using the broadcast to create a nationwide séance circle, invading the public’s homes. The show concluded with Pipes taking over the studio, and the crew all fleeing, leaving Parkinson wandering around, seemingly possessed by the spirit.
In the immediate aftermath, more than 30,000 terrified or angry callers – including Parkinson’s elderly mother – bombarded the BBC’s switchboard. The following day’s newspapers featured heavy criticism of the show. Six cases of children aged 10-14 exhibiting symptoms of PTSD were recorded, and the BBC was later criticised by the Broadcasting Standards Commission for involving children’s presenters Greene and Smith, whose presence “took some parents off-guard in deciding whether their children could continue to view”.
The cases of Ghostwatch and The War of the Worlds may not exactly meet the textbook definition of mass sociogenic illness, as they do not involve people developing symptoms. But, says Radford, they are in the same ballpark. “The panics were not, strictly speaking, MSI, but they are related. That is, there was an element of social contagion, where fears were legitimised and compounded in the context of uncertainty. Many people, quite sincerely, reported seeing and experiencing all sorts of strange phenomena that simply were not happening. Like mass hysteria, these are classic examples of when mundane events were reinterpreted as extraordinary within a certain context.”
Most people assume they would react differently in such circumstances. For them, Radford has this salutary message: “It’s important to recognise that the people affected weren’t stupid, very gullible, or crazy – any of us might react in the same way.” In other words, we are all capable of succumbing to MSI. Bear that in mind next time you are deciding what to watch. Countryfile all round, then?
This article was amended on 19 December 2022. A figure of 920 audience members potentially at risk of photosensitive epileptic reactions was removed because it was mistakenly based on 4.6 million viewers of the Pokémon episode. The audience was, as stated, 4.6 million households.
|
Orson Welles’ The War of the Worlds – a radio drama that caused mass panic in the US when it aped a news report of a Martian invasion. Ghostwatch involved a supposedly live factual broadcast (actually recorded and scripted) of events as they took a terrifyingly sinister turn. Featuring familiar faces including Michael Parkinson, Sarah Greene, Mike Smith and Craig Charles, the programme purported to be an investigation into paranormal activity in a house in Northolt, west London.
The show began slowly, before ramping up the tension with a series of ever more chilling incidents, culminating in Greene, reporting live from the house, being dragged through a cellar door. The in-studio paranormal expert reported that the poltergeist, nicknamed Mr Pipes, was using the broadcast to create a nationwide séance circle, invading the public’s homes. The show concluded with Pipes taking over the studio, and the crew all fleeing, leaving Parkinson wandering around, seemingly possessed by the spirit.
In the immediate aftermath, more than 30,000 terrified or angry callers – including Parkinson’s elderly mother – bombarded the BBC’s switchboard. The following day’s newspapers featured heavy criticism of the show. Six cases of children aged 10-14 exhibiting symptoms of PTSD were recorded, and the BBC was later criticised by the Broadcasting Standards Commission for involving children’s presenters Greene and Smith, whose presence “took some parents off-guard in deciding whether their children could continue to view”.
The cases of Ghostwatch and The War of the Worlds may not exactly meet the textbook definition of mass sociogenic illness, as they do not involve people developing symptoms. But, says Radford, they are in the same ballpark. “The panics were not, strictly speaking, MSI, but they are related. That is, there was an element of social contagion, where fears were legitimised and compounded in the context of uncertainty. Many people, quite sincerely, reported seeing and experiencing all sorts of strange phenomena that simply were not happening.
|
yes
|
Radio
|
Did the War of the Worlds radio broadcast cause mass panic?
|
no_statement
|
the "war" of the worlds "radio" "broadcast" did not "cause" "mass" "panic".. "mass" "panic" was not "caused" by the "war" of the worlds "radio" "broadcast".
|
https://www.zocalopublicsquare.org/2018/02/20/push-big-nuclear-button-consider-source/ideas/essay/
|
Before You Push That Big Nuclear Button, Consider the Source ...
|
Before You Push That Big Nuclear Button, Consider the Source
From Intentional Hoaxes to Accidental Alerts, Our Interconnectedness Makes Us Vulnerable to Fear
The command center at the Hawaii Emergency Management Agency, shown here, was the source of a mistakenly sent alert warning of an incoming ballistic missile on January 13, 2018. Photo courtesy of Caleb Jones/Associated Press.
By Robert Bartholomew |February 20, 2018
Shortly after 8 a.m. on January 13, 2018, the Hawaii Emergency Management Agency sent out a chilling alert to residents across the state of Hawaii: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.”
Thousands of frightened people flocked to shelters; some even climbed down manholes to save themselves. Hawaii State Representative Matthew LoPresti told CNN: “I was sitting in the bathtub with my children, saying our prayers.” It was not until 38 minutes later that a second message made it clear that the first had been a false alarm.
Such episodes are not new. Since the advent of mass communications, similar scares have taken place. From intentional hoaxes to accidental alerts, we have become susceptible to reports of terrifying events that never come to pass.
Canadian philosopher and media scholar Marshall McLuhan famously observed that we all live in a global village. Where it once took months to relay news from the other side of the world, it now takes less than a second. The problem is, with so many people now reliant on the internet and mobile phones, the potential for technology-driven hoaxes, panics, and scares has never been greater. And such events, while usually short-lived, have tremendous power to wreak widespread fear and chaos.
The most famous example of mass panic in response to an announcement is the 1938 “War of the Worlds” radio drama produced by Orson Welles at WABC’s studios in New York City. Broadcast across the United States and in parts of Canada, the play narrated a fictitious Martian invasion of the New York-New Jersey metropolitan area. In his bestselling book The Invasion From Mars, American psychologist Hadley Cantril wrote that of the estimated 6 million listeners, about 1.5 million were frightened or panicked.
While contemporary sociologists who have re-examined the episode believe that the number of those who panicked was much smaller, no doubt many were frightened. Some people living near the epicenter of the play—tiny Grover’s Mill, New Jersey—tried to flee the Martian “gas raids” and “heat rays.” During the hour-long broadcast, The New York Times fielded 875 phone calls about the broadcast. Curiously, Cantril found that about one-fifth of listeners thought that America was under attack by a foreign power—most likely Germany, using advanced weapons. Historical context often plays a major role in shaping the mindset of listeners.
While perhaps the best-known incident, the 1938 broadcast did not have the most serious consequences. That distinction goes to another radio play, broadcast in 1949, that caused pandemonium in Quito, Ecuador. That highly realistic production mentioned real people and places, and included impersonations of government leaders. A correspondent for The New York Times in Quito at the time described a tumultuous scene as the drama “drove most of the population of Quito into the streets” to escape the Martian “gas raids.”
After realizing that the broadcast was a play, an enraged mob marched on the radio station, surrounded the building, and burned it to the ground. At least 20 died in the rioting and chaos. Authorities had trouble restoring order, as most of the police and military had been sent to the nearby town of Cotocallao to repel the Martians.
These “War of the Worlds” broadcasts were not the first to provoke false alarms. One of the earliest recorded scares occurred in the United Kingdom in 1926, when the BBC reported that the government was under siege and could fall amid a bloody uprising by disgruntled workers. In reality, the streets of London were calm; people had only been listening to a radio play airing on the regularly scheduled Saturday evening radio program.
The BBC broadcast announcements throughout the evening, apologizing and reassuring listeners. But the story was plausible, given deep tensions with labor unions at the time. And, four months after the broadcast, a historic General Workers Strike rocked the government, as 1.5 million workers took to the streets over 10 days to call for higher wages and better working conditions.
Decades later, on March 20, 1983, hundreds of Americans were frightened by a broadcast of the NBC Sunday Night Movie Special Bulletin, about news coverage of a group of terrorists who were threatening to detonate a nuclear bomb in South Carolina. The program began like any other Sunday night movie but was quickly “interrupted” by breaking news bulletins from well-known news outlets such as Reuters and the Associated Press, showing scenes of devastation. The film even recreated a White House press briefing. Special bulletins throughout the show were interrupted by “live feeds” from the terrorists. Conveniently, a reporter and a cameraman happened to be among the “hostages.”
While many false alarms have apparently been unintentional, there are egregious examples of deliberate attempts to cause widespread fear.
The realism of the special broadcast was instrumental in creating the panic. The fictional NBC affiliate broadcasting the siege in Charleston, WPIV, was similar to the real affiliate, WCIV. The real-world WCIV-TV received 250 calls; others rang the police. WDAF in Kansas City fielded 37 calls. At San Francisco’s KRON-TV, 50 calls were logged in the first 30 minutes. One woman was so convinced that she called to complain that too much air time was being given to the terrorists. The program began with an advisory that what viewers were about to see was a “realistic depiction of fictional events” but was not actually happening. Unfortunately, the next advisory did not run until 15 minutes later. The movie won an Emmy, but across the country viewers mistook it for live news coverage and thought a nuclear catastrophe was imminent.
It wouldn’t be the last nuke scare. On November 9, 1982, WSSR-FM in Springfield, Illinois, reported that there had been an accident at a nearby nuclear power plant. The program began by claiming “that a nuclear cloud was headed for Springfield.” Concerned residents immediately deluged police with phone calls, prompting the station, which was operated by Sangamon State University, to pull the plug on the half-hour drama after just two and a half minutes. The Illinois Emergency Services and Disaster Agency was not amused. Director Chuck Jones said: “I’m still shocked that someone out at that station let that get on the air.” The nuclear power plant depicted in the program was located 25 miles northeast of the city, and while it was not operating at the time, people didn’t have any way of knowing this.
While many false alarms have apparently been unintentional, there are egregious examples of deliberate attempts to cause widespread fear. On January 29, 1991, DJ John Ulett of radio station KSHE-FM in Crestwood, Missouri, decided to protest America’s involvement in the Persian Gulf War by airing the following announcement: “Attention, attention. This is an official civil defense warning. This is not a test. The United States is under nuclear attack.” Worried listeners flooded the station with phone calls. While Ulett’s statement did not trigger a mass panic, the Federal Communications Commission fined the station $25,000. Ulett was suspended but managed to save his job.
Poorly timed jokes have also triggered dramatic results. On August 11, 1984, just before his weekly radio address, President Ronald Reagan tested his microphone by saying: “My fellow Americans, I am pleased to tell you today that I’ve signed legislation that will outlaw Russia forever. We begin bombing in five minutes.” Americans didn’t panic, because the broadcasters knew he was joking, but the Russians, nervous at a time of considerable distrust between the two countries, placed their armed forces on standby.
Even fleeting scares can have long-term consequences. The 1938 Martian scare resulted in jammed phone lines. In Trenton, emergency services were knocked out for six hours. In Quito, damage from the rioting was estimated at $350,000—an enormous sum at the time.
In 1597, English philosopher Francis Bacon famously observed, “Knowledge is power.” But in today’s world, knowledge comes with risk. The most educated and technologically adept generation in the history of the world is also the most vulnerable.
|
It was not until 38 minutes later that a second message made it clear that the first had been a false alarm.
Such episodes are not new. Since the advent of mass communications, similar scares have taken place. From intentional hoaxes to accidental alerts, we have become susceptible to reports of terrifying events that never come to pass.
Canadian philosopher and media scholar Marshall McLuhan famously observed that we all live in a global village. Where it once took months to relay news from the other side of the world, it now takes less than a second. The problem is, with so many people now reliant on the internet and mobile phones, the potential for technology-driven hoaxes, panics, and scares has never been greater. And such events, while usually short-lived, have tremendous power to wreak widespread fear and chaos.
The most famous example of mass panic in response to an announcement is the 1938 “War of the Worlds” radio drama produced by Orson Welles at WABC’s studios in New York City. Broadcast across the United States and in parts of Canada, the play narrated a fictitious Martian invasion of the New York-New Jersey metropolitan area. In his bestselling book The Invasion From Mars, American psychologist Hadley Cantril wrote that of the estimated 6 million listeners, about 1.5 million were frightened or panicked.
While contemporary sociologists who have re-examined the episode believe that the number of those who panicked was much smaller, no doubt many were frightened. Some people living near the epicenter of the play—tiny Grover’s Mill, New Jersey—tried to flee the Martian “gas raids” and “heat rays.” During the hour-long broadcast, The New York Times fielded 875 phone calls about the broadcast. Curiously, Cantril found that about one-fifth of listeners thought that America was under attack by a foreign power—most likely Germany, using advanced weapons. Historical context often plays a major role in shaping the mindset of listeners.
|
yes
|
Ancient Civilizations
|
Did the ancient Egyptians have electricity?
|
yes_statement
|
the "ancient" egyptians had "electricity".. "electricity" was present in "ancient" egypt.
|
https://www.usatoday.com/story/news/factcheck/2022/11/30/fact-check-false-claim-ancient-egyptians-had-electricity/10730634002/
|
Fact check: False claim that Ancient Egyptians had electricity
|
Fact check: False claim that ancient Egyptians had electricity
The claim: Ancient Egyptians had modern-day electricity
Ancient Egyptians are credited with many technological advancements, but a widely-circulating Facebook post says they developed something particularly impressive: Electricity.
"Past civilizations had electricity including the Egyptians," the Nov. 16 Facebook post claims. "Archeologists have always wondered why there weren't smoke marks from torches in deep Egyptian tunnels and catacombs."
The post includes three images. The image on the top left shows a modern-day electrical transformer, and the photo on the top right shows the almshouse outside the Hagia Sophia in Istanbul, Turkey. Beneath those images is one that shows carvings on the walls of the Temple of Hathor in Dendera, Egypt. Red circles have been drawn around aspects of each image to highlight elements of the ancient photos that are similar in appearance to the modern transformer.
But the post's claim is false. Ancient Egyptians did not have modern electricity. Egyptologists say the objects circled in these photos are religious symbols, not evidence of modern-day electric technology.
Modern-day electrical technology was developed in the late 19th century, thousands of years after the ancient Egyptian civilization had ended.
Variations of this claim have been circulating online since 2013, when the History Channel series ''Ancient Aliens'' aired an episode about the same carvings in the Temple of Hathor.
USA TODAY reached out to the social media user who shared the post for comment.
Objects circled in photos are religious symbols, not modern technology
An episode of the History Channel's ''Ancient Aliens'' popularized the claim that the carvings of the Temple of Hathor prove the ancient Egyptians had electricity.
But Egyptologists say that these carvings simply show religious symbols, not proof of electrical technology.
The relief featured in the post actually depicts the ancient Egyptians' creation myth, according to Rita Lucarelli, an associate professor of Egyptology at the University of California, Berkeley.
“The idea that the scene under discussion represents a light bulb and that ancient Egyptians had electricity does not correspond to what the texts accompanying that and other mythological scenes of creation from Egyptian temples say," Lucarelli told USA TODAY in an email. “The ‘bulb’ is ... the blossom of the lotus flower, and the snake represented in it is the god Hor-sema-tawy – Horus the uniter of the Two Lands – born from the lotus.”
The lotus bulb and snake motifs are repeated throughout the temple.
The poster also circled another object depicted in the relief, comparing its shape to that of an electrical transformer.
But that object is actually a djed pillar, according to Carol Redmount, an archaeologist and professor of Near Eastern Studies at the University of California, Berkeley. Djed pillars can be understood to represent the backbone of Osiris, the ancient Egyptian god of the deceased, according to the Brooklyn Museum.
Craig Cantello, manager of the Edison Tech Center, said the similarities between the shapes of objects in the relief and electrical transformers are just that – similarities in shape, not evidence of modern technology.
"The post is wrong, misleading and illogical," Cantello told USA TODAY in an email. "Hieroglyphs might look similar to modern devices, but that doesn't mean that modern devices existed in ancient times."
The top right image in the post depicts the almshouse outside the Hagia Sophia in Istanbul. The mosque, once a Greek Orthodox church and an icon of the Ottoman Empire, is nearly 1,500 years old. However, it is not associated with the ancient Egyptians.
The power grid as we know it began with isolated power generation systems across the world starting in the 1870s, according to the Edison Tech Center.
Our rating: False
Based on our research, we rate FALSE the claim that ancient Egyptians had modern-day electricity. Archaeologists say the objects circled in the social media posts are ancient religious symbols, not proof of electricity in ancient Egypt. Electricity was developed thousands of years after the ancient Egyptian civilization.
|
Fact check: False claim that ancient Egyptians had electricity
The claim: Ancient Egyptians had modern-day electricity
Ancient Egyptians are credited with many technological advancements, but a widely-circulating Facebook post says they developed something particularly impressive: Electricity.
"Past civilizations had electricity including the Egyptians," the Nov. 16 Facebook post claims. "Archeologists have always wondered why there weren't smoke marks from torches in deep Egyptian tunnels and catacombs."
The post includes three images. The image on the top left shows a modern-day electrical transformer, and the photo on the top right shows the almshouse outside the Hagia Sophia in Istanbul, Turkey. Beneath those images is one that shows carvings on the walls of the Temple of Hathor in Dendera, Egypt. Red circles have been drawn around aspects of each image to highlight elements of the ancient photos that are similar in appearance to the modern transformer.
But the post's claim is false. Ancient Egyptians did not have modern electricity. Egyptologists say the objects circled in these photos are religious symbols, not evidence of modern-day electric technology.
Modern-day electrical technology was developed in the late 19th century, thousands of years after the ancient Egyptian civilization had ended.
Variations of this claim have been circulating online since 2013, when the History Channel series ''Ancient Aliens'' aired an episode about the same carvings in the Temple of Hathor.
USA TODAY reached out to the social media user who shared the post for comment.
Objects circled in photos are religious symbols, not modern technology
An episode of the History Channel's ''Ancient Aliens'' popularized the claim that the carvings of the Temple of Hathor prove the ancient Egyptians had electricity.
But Egyptologists say that these carvings simply show religious symbols, not proof of electrical technology.
The relief featured in the post actually depicts the ancient Egyptians' creation myth, according to Rita Lucarelli, an associate professor of Egyptology at the University of California, Berkeley.
|
no
|
Ancient Civilizations
|
Did the ancient Egyptians have electricity?
|
yes_statement
|
the "ancient" egyptians had "electricity".. "electricity" was present in "ancient" egypt.
|
https://en.wikipedia.org/wiki/Ancient_Egyptian_technology
|
Ancient Egyptian technology - Wikipedia
|
The city of Alexandria retained preeminence for its records and scrolls with its library. This ancient library was damaged by fire when it fell under Roman rule,[5] and was destroyed completely by 642 CE.[6][7] With it, a vast supply of antique literature, history, and knowledge was lost.
Some of the older materials used in the construction of Egyptian housing included reeds and clay. According to Lucas and Harris, "reeds were plastered with clay in order to keep out of heat and cold more effectually".[8] Tools that were used included "limestone, chiseled stones, wooden mallets, and stone hammers", but also some more sophisticated hand tools.[9]
For instance, the ancient Egyptians were apparently using core drills in stonework at least as long ago as the Fourth Dynasty, probably made of copper and used in conjunction with a harder abrasive substance. There has been some dispute among archaeologists over whether the abrasive was quartz sand or a harder mineral such as corundum, emery or diamond, and whether it was loose or embedded in the metal.[10][11][12]
Many Egyptian temples are not standing today. Some are in ruin from wear and tear, while others have been lost entirely. The Egyptian structures are among the largest constructions ever conceived and built by humans. They constitute one of the most potent and enduring symbols of ancient Egyptian civilization. Temples and tombs built by a pharaoh famous for her projects, Hatshepsut, were massive and included many colossal statues of her. Pharaoh Tutankhamun's rock-cut tomb in the Valley of the Kings was full of jewelry and antiques. In some late myths, Ptah was identified as the primordial mound and had called creation into being; he was considered the deity of craftsmen, and in particular, of stone-based crafts. Imhotep, who was included in the Egyptian pantheon, was the first documented engineer.[13]
In Hellenistic Egypt, lighthouse technology was developed, the most famous example being the Lighthouse of Alexandria. Alexandria was a port for the ships that traded the goods manufactured in Egypt or imported into Egypt. A giant cantilevered hoist lifted cargo to and from ships. The lighthouse itself was designed by Sostratus of Cnidus and built in the 3rd century BC (between 285 and 247 BC) on the island of Pharos in Alexandria, Egypt, which has since become a peninsula. This lighthouse was renowned in its time and knowledge of it was never lost.
The most famous pyramids are the Egyptian pyramids—huge structures built of brick or stone, some of which are among the largest constructions by humans. Pyramids functioned as tombs for pharaohs. In Ancient Egypt, a pyramid was referred to as mer, literally "place of ascendance." The Great Pyramid of Giza is the largest in Egypt and one of the largest in the world. The base is over 13 acres (53,000 m2) in area. It is one of the Seven Wonders of the Ancient World and the only one of the seven to survive into modern times. The ancient Egyptians capped the peaks of their pyramids with gold plated pyramidions and covered their faces with polished white limestone, although many of the stones used for the finishing purpose have fallen or been removed for use on other structures over the millennia.
The ancient Egyptians had some of the first monumental stone buildings (such as in Saqqara). How the Egyptians worked the solid granite is still a matter of debate. Archaeologist Patrick Hunt[15] has postulated that the Egyptians used emery, shown to have higher hardness on the Mohs scale. Regarding construction, of the various methods possibly used by builders, levers were used to move and lift obelisks weighing more than 100 tons.
Obelisks were a prominent part of the Ancient Egyptian architecture, placed in pairs at the entrances of various monuments and important buildings such as temples. In 1902, Encyclopædia Britannica wrote: "The earliest temple obelisk still in position is that of Senusret I of the XIIth Dynasty at Heliopolis (68 feet high)". The word obelisk is of Greek rather than Egyptian origin because Herodotus, the great traveler, was the first writer to describe the objects. Twenty-nine ancient Egyptian obelisks are known to have survived, plus the Unfinished obelisk being built by Hatshepsut to celebrate her sixteenth year as pharaoh. It broke while being carved out of the quarry and was abandoned when another one was begun to replace it. The broken one was found at Aswan and provides some of the only insight into the methods of how they were hewn.
The obelisk symbolized the sky deity Ra and during the brief religious reformation of Akhenaten was said to be a petrified ray of the Aten, the sun disk. It is hypothesized by New York University Egyptologist Patricia Blackwell Gary and Astronomy senior editor Richard Talcott that the shapes of the ancient Egyptian pyramid and obelisk were derived from natural phenomena associated with the sun (the sun-god Ra being the Egyptians' greatest deity).[16] It was also thought that the deity existed within the structure. The Egyptians also used pillars extensively.
It is unknown whether the ancient Egyptians had kites, but a team led by Maureen Clemmons and Mory Gharib raised a 5,900-pound, 15-foot (4.6 m) obelisk into vertical position with a kite, a system of pulleys, and a support frame.[17] Maureen Clemmons developed the idea that the ancient Egyptians used kites for work.[17][18]
Ramps have been reported as being widely used in Ancient Egypt. A ramp is an inclined plane, or a plane surface set at an angle (other than a right angle) against a horizontal surface. The inclined plane permits one to overcome a large resistance by applying a relatively small force through a longer distance than the load is to be raised. In civil engineering the slope (ratio of rise/run) is often referred to as a grade or gradient. An inclined plane is one of the commonly-recognized simple machines. Maureen Clemmons subsequently led a team of researchers demonstrating that a kite made of natural material and reinforced with shellac (which according to their research pulled with 97% the efficiency of nylon), in a 9 mph wind, would easily pull an average 2-ton pyramid stone up the first two courses of a pyramid (in collaboration with Cal Poly, Pomona, on a 53-stone pyramid built in Rosamond, CA).
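The mechanical advantage of the inclined plane can be put in numbers. Here is a minimal Python sketch, using hypothetical figures (a 2-ton block on an assumed 1:4 grade, friction ignored), comparing the force needed to drag the block up the ramp with the force needed to lift it outright:

```python
import math

def ramp_force(mass_kg: float, rise: float, run: float, g: float = 9.81) -> float:
    """Force (N) to push a load up a frictionless ramp with the given rise/run grade."""
    angle = math.atan2(rise, run)          # ramp angle implied by the grade
    return mass_kg * g * math.sin(angle)   # component of the weight along the slope

block = 2000.0                              # kg: roughly a 2-ton pyramid stone
straight_lift = block * 9.81                # force to hoist it vertically
along_ramp = ramp_force(block, rise=1, run=4)
print(round(straight_lift), round(along_ramp))  # ~19620 N vs ~4759 N
```

The trade-off is distance for force: on a 1:4 grade the pull drops to roughly a quarter of the block's weight, but the load must travel about four times farther along the slope, so the total work is unchanged (real friction only adds to it).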
The ancient Egyptians had some knowledge of sail construction, which is governed by the science of aerodynamics.[19] The earliest Egyptian sails were simply placed to catch the wind and push a vessel.[20] Later Egyptian sails dating to 2400 BC were built with the recognition that ships could sail against the wind using the lift of the sails.[20][21] Queen Hatshepsut oversaw the preparations and funding of an expedition of five ships, each measuring seventy feet long and with several sails.
Egyptian ship, 1250 BC
Egyptian ship on the Red Sea, showing a board truss being used to stiffen the beam of this ship
Egyptian ship with a loose-footed sail, similar to a longship. From the 5th dynasty (around 2700 BC)
Although quarter rudders were the norm in Nile navigation, the Egyptians were also the first to use stern-mounted rudders (not of the modern type, but center-mounted steering oars).
The first warships of Ancient Egypt were constructed during the early Middle Kingdom, and perhaps at the end of the Old Kingdom, but the first mention and detailed description of a sufficiently large and heavily armed ship dates from the 16th century BC.
And I ordered to build twelve warships with rams, dedicated to Amun or Sobek, or Maat and Sekhmet, whose image was crowned best bronze noses. Carport and equipped outside rook over the waters, for many paddlers, having covered rowers deck not only from the side, but and top. and they were on board eighteen oars in two rows on the top and sat on two rowers, and the lower – one, a hundred and eight rowers were. And twelve rowers aft worked on three steering oars. And blocked Our Majesty ship inside three partitions (bulkheads) so as not to drown it by ramming the wicked, and the sailors had time to repair the hole. And Our Majesty arranged four towers for archers – two behind, and two on the nose and one above the other small – on the mast with narrow loopholes. they are covered with bronze in the fifth finger (3.2mm), as well as a canopy roof and its rowers. and they have (carried) on the nose three assault heavy crossbow arrows so they lit resin or oil with a salt of Seth (probably nitrate) tore a special blend and punched (?) lead ball with a lot of holes (?), and one of the same at the stern. and long ship seventy five cubits (41m), and the breadth sixteen, and in battle can go three-quarters of iteru per hour (about 6.5 knots)...
Under Thutmose III, warships reached displacements of up to 360 tons and carried up to ten heavy and seventeen light catapults based on bronze springs, called "siege crossbows" – more precisely, siege bows. Giant catamarans also appeared; these heavy warships of the era of Ramesses III remained in use even under the Ptolemaic dynasty.[32]
According to the Greek historian Herodotus, Necho II sent out an expedition of Phoenicians, which reputedly, at some point between 610 and 594 BC, sailed in three years from the Red Sea around Africa to the mouth of the Nile. Some Egyptologists dispute that an Egyptian pharaoh would authorize such an expedition,[33] except for the reason of trade in the ancient maritime routes.
The belief in Herodotus' account, handed down to him by oral tradition,[34] rests primarily on his stating with disbelief that the Phoenicians "as they sailed on a westerly course round the southern end of Libya (Africa), they had the sun on their right – to northward of them" (The Histories 4.42) – in Herodotus' time it was not generally known that Africa was surrounded by an ocean (the southern part of Africa being thought connected to Asia[35]). So fantastic an assertion is typical of a seafarer's tale, and Herodotus might never have mentioned it at all had it not been based on fact and made with corresponding insistence.[36]
This early description of Necho's expedition as a whole is contentious, though; it is recommended that one keep an open mind on the subject,[37] but Strabo, Polybius, and Ptolemy doubted the description. Egyptologist A. B. Lloyd suggests that the Greeks at this time understood that anyone going south far enough and then turning west would have the Sun on their right but found it unbelievable that Africa reached so far south. He suggests that "It is extremely unlikely that an Egyptian king would, or could, have acted as Necho is depicted as doing" and that the story might have been triggered by the failure of Sataspes' attempt to circumnavigate Africa under Xerxes the Great.[38] Regardless, it was believed by Herodotus and Pliny.[39]
Irrigation as the artificial application of water to the soil was used to some extent in ancient Egypt, a hydraulic civilization (which entails hydraulic engineering).[45] In crop production it is mainly used to replace missing rainfall in periods of drought, as opposed to reliance on direct rainfall (referred to as dryland farming or as rainfed farming). Before technology advanced, the people of Egypt relied on the natural flow of the Nile River to tend to the crops. Although the Nile provided sufficient watering for the survival of domesticated animals, crops, and the people of Egypt, there were times when the Nile would flood the area, wreaking havoc across the land.[46] There is evidence of the ancient Egyptian pharaoh Amenemhet III in the Twelfth Dynasty (about 1800 BC) using the natural lake of the Fayûm as a reservoir to store surpluses of water for use during the dry seasons, as the lake swelled annually with the flooding of the Nile.[47] Construction of drainage canals reduced the problems of major flooding from entering homes and areas of crops; but because it was a hydraulic civilization, much of the water management was controlled in a systematic way.[48]
The earliest known glass beads from Egypt were made during the New Kingdom around 1500 BC and were produced in a variety of colors. They were made by winding molten glass around a metal bar and were highly prized as a trading commodity, especially blue beads, which were believed to have magical powers. The Egyptians made small jars and bottles using the core-formed method. Glass threads were wound around a bag of sand tied to a rod. The glass was continually reheated to fuse the threads together. The glass-covered sand bag was kept in motion until the required shape and thickness was achieved. The rod was allowed to cool; then finally the bag was punctured and the sand poured out and reused. The Egyptians also created the first colored glass rods, which they used to create colorful beads and decorations. They also worked with cast glass, which was produced by pouring molten glass into a mold, much like iron and the more modern crucible steel.[49]
The Egyptians were a practical people and this is reflected in their astronomy[50] in contrast to Babylonia where the first astronomical texts were written in astrological terms.[51] Even before Upper and Lower Egypt were unified in 3000 BC, observations of the night sky had influenced the development of a religion in which many of its principal deities were heavenly bodies. In Lower Egypt, priests built circular mudbrick walls with which to make a false horizon where they could mark the position of the sun as it rose at dawn, and then with a plumb-bob note the northern or southern turning points (solstices). This allowed them to discover that the sun disc, personified as Ra, took 365 days to travel from his birthplace at the winter solstice and back to it. Meanwhile, in Upper Egypt, a lunar calendar was being developed based on the behavior of the moon and the reappearance of Sirius in its heliacal rising after its annual absence of about 70 days.[52]
After unification, problems with trying to work with two calendars (both depending upon constant observation) led to a merged, simplified civil calendar with twelve 30-day months, three seasons of four months each, plus an extra five days, giving a 365-day year but with no way of accounting for the extra quarter day each year. Day and night were split into 24 units, each personified by a deity. A sundial found on Seti I's cenotaph with instructions for its use shows us that the daylight hours were at one time split into 10 units, with 12 hours for the night and an hour for the morning and evening twilights.[53] However, by Seti I's time day and night were normally divided into 12 hours each, the length of which would vary according to the time of year.
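Because the civil year was about a quarter day shorter than the solar year, the seasons slowly wandered through the calendar. A quick sketch of that arithmetic (the 365.25-day solar year is a modern approximation used only for illustration, not an Egyptian value):

```python
# Arithmetic of the Egyptian civil calendar described above.
# The 365.25-day solar year is a modern approximation, used here
# only to show the size of the drift.

CIVIL_YEAR = 12 * 30 + 5        # twelve 30-day months + 5 extra days = 365
SOLAR_YEAR = 365.25             # approximate length of the solar year (days)

drift_per_year = SOLAR_YEAR - CIVIL_YEAR        # 0.25 days lost each year
# Years until the calendar has slipped through a full cycle of seasons:
full_cycle_years = CIVIL_YEAR / drift_per_year  # 365 / 0.25 = 1460 years

print(CIVIL_YEAR, drift_per_year, round(full_cycle_years))
```

This roughly 1,460-year wander (the so-called Sothic cycle) is why festivals fixed in the civil calendar gradually drifted relative to the agricultural seasons.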
Key to much of this was the motion of the sun god Ra and his annual movement along the horizon at sunrise. Out of Egyptian myths such as those around Ra and the sky goddess Nut came the development of the Egyptian calendar, time keeping, and even concepts of royalty. An astronomical ceiling in the burial chamber of Ramesses VI shows the sun being born from Nut in the morning, traveling along her body during the day and being swallowed at night.
During the Fifth Dynasty six kings built sun temples in honour of Ra. The temple complexes built by Niuserre at Abu Gurab and Userkaf at Abusir have been excavated and have astronomical alignments, and the roofs of some of the buildings could have been used by observers to view the stars, calculate the hours at night and predict the sunrise for religious festivals.
The Dendera Zodiac was on the ceiling of the Greco-Roman temple of Hathor at Dendera
Claims have been made that precession of the equinoxes was known in ancient Egypt prior to the time of Hipparchus.[54] This has been disputed however on the grounds that pre-Hipparchus texts do not mention precession and that "it is only by cunning interpretation of ancient myths and images, which are ostensibly about something else, that precession can be discerned in them, aided by some pretty esoteric numerological speculation involving the 72 years that mark one degree of shift in the zodiacal system and any number of permutations by multiplication, division, and addition."[55]
Note, however, that the Egyptian observation of a slowly changing stellar alignment over a multi-year period does not necessarily mean that they understood or even cared what was going on. For instance, from the Middle Kingdom onwards they used a table with entries for each month to tell the time of night from the passing of constellations. These tables fell into error after a few centuries because of their calendar and precession, yet they were copied (with scribal errors) long after they had lost their practical usefulness, referring to the years in which they were originally compiled rather than the years in which they were being used.
Plates vi & vii of the Edwin Smith Papyrus (around the 17th century BC), among the earliest medical texts
The Edwin Smith Papyrus is one of the first medical documents still extant, and perhaps the earliest document which attempts to describe and analyze the brain: given this, it might be seen as the very beginnings of neuroscience. However, medical historians believe that ancient Egyptian pharmacology was largely ineffective.[56] According to a paper published by Michael D. Parkins, 72% of 260 medical prescriptions in the Hearst papyrus had no curative elements.[57] According to Michael D. Parkins, sewage pharmacology first began in ancient Egypt and was continued through the Middle Ages,[56] and while the use of animal dung can have curative properties,[58] it is not without its risk. Practices such as applying cow dung to wounds, ear piercing, and tattooing, together with chronic ear infections, were important factors in the development of tetanus.[59] Frank J. Snoek wrote that Egyptian medicine used fly specks, lizard blood, swine teeth, and other such remedies which he believes could have been harmful.[60]
Mummification of the dead was not always practiced in Egypt. Once the practice began, an individual was placed at a final resting place through a set of rituals and protocol. The Egyptian funeral was a complex ceremony including various monuments, prayers, and rituals undertaken in honor of the deceased. The poor, who could not afford expensive tombs, were buried in shallow graves in the sand, and because of the arid environment they were often naturally mummified.
The Egyptians developed a variety of furniture. Ancient Egypt provides the first evidence for stools, beds, and tables (such as those from tombs like Tutankhamun's). Recovered ancient Egyptian furniture includes a third-millennium BC bed discovered in the Tarkhan Tomb, a c. 2550 BC gilded set from the tomb of Queen Hetepheres I, and a c. 1550 BC stool from Thebes.
Some have suggested that the Egyptians had some form of understanding of electric phenomena from observing lightning and interacting with electric fish (such as Malapterurus electricus) or other animals (such as electric eels).[65] The comment about lightning appears to come from a misunderstanding of a text referring to "high poles covered with copper plates",[66] but Dr. Bolko Stern has written in detail explaining why the copper-covered tops of poles (which were lower than the associated pylons) do not relate to electricity or lightning, pointing out that no evidence of anything used to manipulate electricity had been found in Egypt and that this was a magical and not a technical installation.[67]
Recent scholarship suggests that the water wheel originates from Ptolemaic Egypt, where it appeared by the 3rd century BC.[68][69] This is seen as an evolution of the paddle-driven water-lifting wheels that had been known in Egypt a century earlier.[68] According to John Peter Oleson, both the compartmented wheel and the hydraulic noria may have been invented in Egypt by the 4th century BC, with the Sakia being invented there a century later. This is supported by archeological finds at Faiyum, Egypt, where the oldest archeological evidence of a water-wheel has been found, in the form of a Sakia dating back to the 3rd century BC. A papyrus dating to the 2nd century BC also found in Faiyum mentions a water wheel used for irrigation, a 2nd-century BC fresco found at Alexandria depicts a compartmented Sakia, and the writings of Callixenus of Rhodes mention the use of a Sakia in Ptolemaic Egypt during the reign of Ptolemy IV in the late 3rd century BC.[69]
Ancient Greek technology was often inspired by the need to improve weapons and tactics in war. Ancient Roman technology is a set of artifacts and customs which supported Roman civilization and made the expansion of Roman commerce and Roman military possible over nearly a thousand years.
^Erlikh, Ḥagai; Erlikh, Hạggai; Gershoni, I. (2000). The Nile: Histories, Cultures, Myths. Lynne Rienner Publishers. pp. 80–81. ISBN978-1-55587-672-2. Retrieved 9 January 2020. The Nile occupied an important position in Egyptian culture; it influenced the development of mathematics, geography, and the calendar; Egyptian geometry advanced due to the practice of land measurement "because the overflow of the Nile caused the boundary of each person's land to disappear."
^Georges Ifrah, The Universal History of Numbers. Page 162 (cf., "As we have seen, Sumer used a sexagesimal base; whereas the system of Ancient Egypt was strictly decimal.")
^A primary feature of a properly designed sail is an amount of "draft", caused by curvature of the surface of the sail. When the sail is oriented into the wind, this curvature induces lift, much like the wing of an airplane.
^Anzovin, item # 5393, page 385 Reference to a ship with a name appears in an inscription of 2613 BCE that recounts the shipbuilding achievements of the fourth-dynasty Egyptian pharaoh Sneferu. He was recorded as the builder of a cedarwood vessel called "Praise of the Two Lands."
^ Nelson Harold Hayden, Allen Thomas George and Dr Raymond O. Faulkner, «Tuthmosis III. First Emperor in the History of Mankind. His Regal Companions and Great Assistants», Oxford UNV Publishing, 1921, p. 127.
^For instance, the Egyptologist Alan Lloyd wrote "Given the context of Egyptian thought, economic life, and military interests, it is impossible for one to imagine what stimulus could have motivated Necho in such a scheme and if we cannot provide a reason which is sound within Egyptian terms of reference, then we have good reason to doubt the historicity of the entire episode." Lloyd, Alan B. (1977). "Necho and the Red Sea: Some Considerations". Journal of Egyptian Archaeology. 63: 149. doi:10.2307/3856314. JSTOR3856314.
^A convenient table of sea peoples in hieroglyphics, transliteration and English is given in the dissertation of Woodhuizen, 2006, who developed it from works of Kitchen cited there
^ As noted by Gardiner V.1 p.196, other texts have "foreign-peoples"; both terms can refer to the concept of "foreigners" as well. Zangger in the external link below expresses a commonly held view that "Sea Peoples" does not translate this and other expressions but is an academic innovation. The Woudhuizen dissertation and the Morris paper identify Gaston Maspero as the first to use the term "peuples de la mer" in 1881.
^Heinrich Karl Brugsch-Bey and Henry Danby Seymour, "A History of Egypt Under the Pharaohs". J. Murray, 1881. Page 422. (cf., [... the symbol of a] 'serpent' is rather a fish, which still serves, in the Coptic language, to designate the electric fish [...])
|
Ancient Egypt provides the first evidence for stools, beds, and tables (such as those from tombs like Tutankhamun's). Recovered ancient Egyptian furniture includes a third-millennium BC bed discovered in the Tarkhan Tomb, a c. 2550 BC gilded set from the tomb of Queen Hetepheres I, and a c. 1550 BC stool from Thebes.
Some have suggested that the Egyptians had some form of understanding of electric phenomena from observing lightning and interacting with electric fish (such as Malapterurus electricus) or other animals (such as electric eels).[65] The comment about lightning appears to come from a misunderstanding of a text referring to "high poles covered with copper plates",[66] but Dr. Bolko Stern has written in detail explaining why the copper-covered tops of poles (which were lower than the associated pylons) do not relate to electricity or lightning, pointing out that no evidence of anything used to manipulate electricity had been found in Egypt and that this was a magical and not a technical installation.[67]
Recent scholarship suggests that the water wheel originates from Ptolemaic Egypt, where it appeared by the 3rd century BC.[68][69] This is seen as an evolution of the paddle-driven water-lifting wheels that had been known in Egypt a century earlier.[68] According to John Peter Oleson, both the compartmented wheel and the hydraulic noria may have been invented in Egypt by the 4th century BC, with the Sakia being invented there a century later. This is supported by archeological finds at Faiyum, Egypt, where the oldest archeological evidence of a water-wheel has been found, in the form of a Sakia dating back to the 3rd century BC.
|
no
|
Ancient Civilizations
|
Did the ancient Egyptians have electricity?
|
yes_statement
|
the "ancient" egyptians had "electricity".. "electricity" was present in "ancient" egypt.
|
https://www.solarquotes.com.au/blog/ancient-egypt-solar-power/
|
Archeologists Uncover Ancient Egyptian Solar Power
|
Archeologists Uncover Ancient Egyptian Solar Power
A set of papers published in the International Journal of Antiquity last month has revealed ancient Egyptians may have had access to a technology that, in our modern world, has only taken off over the last few decades. While it has been known since the 1930s that simple chemical batteries were used for gold electroplating in Egypt thousands of years ago, until now it was thought these could only have been recharged by replacing the chemicals and copper rods inside. But thanks to literally groundbreaking research, it’s now known ancient Egyptians had access to a primitive form of solar power.
Using only simple tools, they were able to use obsidian — a type of volcanic glass composed mostly of silicon — with high levels of naturally occurring boron to construct simple solar cells using hand drawn copper wire. While modern solar panels are over 100 times as efficient, power from these very basic cells would have been sufficient for electroplating and potentially other uses.
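Taking the "over 100 times as efficient" figure at face value, a rough sketch of the implied output is easy to run. The 20% modern-panel efficiency and 1,000 W/m² bright-sun insolation used below are modern reference assumptions, not figures from the papers:

```python
# Rough implied output of an "ancient" obsidian cell, taking the
# claim that modern panels are ~100x as efficient.
# Both reference figures below are modern assumptions for illustration.

SOLAR_IRRADIANCE = 1000.0    # W per square metre in bright sunlight
MODERN_EFFICIENCY = 0.20     # typical modern silicon panel (assumption)

ancient_efficiency = MODERN_EFFICIENCY / 100           # ~0.2% per the claim
power_per_m2 = SOLAR_IRRADIANCE * ancient_efficiency   # ~2 W per square metre

print(power_per_m2)
```

A couple of watts per square metre is tiny by modern standards, but electroplating only needs a small, steady current.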
A Ground Breaking Discovery
The team that made the discovery was led by Professor Anna Kumar of Jones University. They were called in to investigate after pieces of etched obsidian stone were found when ground was broken for a new shopping complex, only hundreds of metres from the great pyramids.
The stones were dated to the Old Kingdom of Egypt, which ruled the Nile Valley from 4,706 to 4,201 years ago. They were etched on both sides with shallow grooves containing traces of copper. Similar, smaller pieces of obsidian with intact inlaid copper had been found in the past, but were believed to be jewellery – a type of Ancient Egyptian bling. But these pieces were much too large for that. One suggestion was they were used as decoration in a temple, but one team member had a different idea that led to a shocking discovery.
On this breakthrough, Professor Kumar says:
“One of my students thought, if the larger obsidian shards had been inlaid with copper it would resemble the fine wiring on the solar panels his family uses to cope with Cairo’s blackouts. We took a piece already in our collection with the copper lines intact and discovered that, when placed in sunlight, it generated a small but measurable current.”
“This means the Old Kingdom of Egypt had a basic solar cell over 4,500 years before the common era and over 600 years before their first use of the wheel.”
The Right Kind Of Obsidian
Obsidian is a dark volcanic glass mostly made of silicon. It was highly valued and widely traded in the ancient world for use as jewellery and because it could be used to make flakes with sharp cutting edges. It was previously known that all obsidian found in Old Kingdom sites came from one particular location near Timna on the Sinai Peninsula, a considerable distance from the civilization of the Nile Valley; but until now it wasn’t known why.
It turns out this particular obsidian is rich in naturally occurring boron, an element used in silicon solar cell manufacture today.
A Fortunate Discovery
By inlaying this boron-rich obsidian stone with copper wires and exposing it to sunlight, a simple solar cell could be created, but for it to work one side would first need to be infused with phosphorus.
Ghyr Sahih of the La ‘Usdiquh Museum offers an explanation of how this could have occurred:
“The only metal these people had to work with was copper and they only had the simplest of tools. But you can still see people using the ancient methods today and, in my father’s time, it was a common job for the young apprentice — because they still had their teeth — to use them to draw a fine copper wire. It’s not hard to imagine, thousands of years ago, a craftsman sitting in the sun and making jewellery from a piece of obsidian and copper wire. Because he’s using his mouth he could have discovered, when the stone he was working on is in sunlight, an electrical tingle could be felt on his tongue. Worship of the sun was an integral part of Egyptian life and so this could have seemed very significant and encouraged them to investigate further.”
“While one side of the obsidian would need to have been exposed to phosphorus, this could have happened if, at some point, it had lain in hot ashes or been exposed to sufficient smoke. As long as there was more phosphorus on one side than the other, it would be possible for current to flow.”
Making Ancient Solar Cells
To create ancient solar cells it is thought that obsidian containing boron would first be etched with shallow grooves and then coated on one side with clay. After the clay dried, they would be placed obsidian side down in hot ashes from burned wheat husks for 24 hours. This ash would be rich in phosphorus, and at high temperatures enough of the element would move into the exposed obsidian for it to function as a solar cell. After cooling, the baked clay would be broken off and the etched grooves inlaid with fine copper wire.
Ancient Solar Lighting
We’ve known of the existence of Ancient Egyptian electroplating since the 1930s when Walter Konig discovered what is now called the Baghdad Battery. (Yes, there were Germans running around Egypt and digging up artifacts in the 1930s.)
But it is now thought electroplating is a development that came long after the original uses of ancient solar electricity. It’s thought one of these was a form of primitive electric lighting.
While most people are aware solar panels turn light into electricity, very few know that if the current is run the other way, a solar panel will give off light. This feature is used to test solar panels to check for microcracks and other defects in what is known as electroluminescence testing:
It is the defects in a solar panel that produce most of the light and, since a piece of obsidian is a very defective material for solar cells by today’s standards, it’s better at producing light in this way than modern panels. While the amount of light given off is very dim and almost impossible to see in the day, in a darkened room it would have been very impressive to people whose only other artificial lighting would have come from candles and oil lanterns.
Building Pyramids Without The Wheel
The great pyramids of Egypt were built hundreds of years before the Egyptians had the wheel. If a movie shows the pyramids under construction and the pharaoh going past on a chariot, then that is more anachronistic than showing Napoleon driving a tank at the Battle of Waterloo.
Never rode a tank in the General’s rank, while the cannons blazed and the French navy sank…
So the question is, how did the Ancient Egyptians manage to build the Great Pyramids without even having the wheel? While most experts until now subscribed to the “very large whips” theory, Professor Kumar and her team believe they have found the solution.
If obsidian solar cells were laid out in a path and a copper-bottomed sled was given the same positive charge as the surface layer of obsidian, then, because like charges repel, the sled could hover millimetres above the path while carrying a considerable, but not extreme, load.
Professor Kumar says:
“Many people are under the impression the stone blocks used to build the pyramids were massive and almost impossible to move without modern machinery. The truth is, while they are not light, they are around the size of a household refrigerator with an average weight of 2.3 tonnes. To drag a stone like this using log rollers would require a team of 30 men. But our calculations show an Egyptian solar roadway could support a maximum weight of a little over 2.5 tonnes and that the average stone block could be moved along a gentle incline by just two men.”
If correct, this would reduce the amount of labour required to build the great pyramids to a reasonable level. It has been suggested this would also make it possible for the upper portions of the pyramids to be seasonally dismantled and rebuilt.
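Taking the figures in Professor Kumar's quote at face value, the labour claim can be sanity-checked. The friction coefficients and the per-man pulling force below are illustrative assumptions, not values from the article:

```python
# Sanity check of the quoted figures: ~30 men on log rollers vs.
# ~2 men on a charged "solar roadway". The friction coefficients
# and per-man pull are illustrative assumptions.

g = 9.81              # m/s^2
block_mass = 2300.0   # kg, the quoted average block weight (2.3 tonnes)
man_pull = 250.0      # N of sustained pull per man (assumption)

mu_rollers = 0.3      # effective resistance on log rollers (assumption)
mu_hover = 0.02       # residual drag on a hovering sled (assumption)

men_on_rollers = block_mass * g * mu_rollers / man_pull
men_on_roadway = block_mass * g * mu_hover / man_pull

print(round(men_on_rollers), round(men_on_roadway))
```

With these assumed numbers the dragging team comes out near 30 men and the hovering-sled team near 2, matching the quoted claim, though the result is only as good as the assumed coefficients.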
Storing Ancient Solar Energy
Because obsidian solar cells are not sealed against the elements, they won't work when wet and even high humidity will reduce their output. But when sand is wet and compressed, it will supply a small amount of current due to the piezoelectric effect. This is caused by crystals being placed under mechanical strain, and sand is mostly composed of quartz crystals. This effect could have been used to provide power for electroluminescent light at night for the Pharaohs and temples of the Old Kingdom of Egypt.
Professor Kumar believes this explains much of what we currently don’t understand about the Great Pyramids and why they are simply giant regular mounds of carefully arranged stones with steep sides and very little internal structure. She believes their purpose was to provide energy storage for the lights of Egyptian temples. Most of the stones of a pyramid were there to provide height, but the top portion would be used to power the lights of temples through the rainy season. Throughout the night, stones would be taken from the pyramid and allowed to slide down a series of slopes and impact with wet sand to provide power for temple lighting. During the dry season, when the solar panels would be most efficient, the stones would be replaced.
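An order-of-magnitude check of the storage idea (the 100 m average drop is an assumed figure, and conversion losses are ignored, so this is an upper bound): one block descending the pyramid releases m·g·h of potential energy.

```python
# Potential energy released by one block sliding down the pyramid.
# The 100 m average drop is an assumption; conversion losses are
# ignored, so this is an upper bound.

g = 9.81              # m/s^2
block_mass = 2300.0   # kg (2.3 tonnes)
drop_height = 100.0   # m (assumption)

energy_joules = block_mass * g * drop_height   # ~2.26 MJ
energy_kwh = energy_joules / 3.6e6             # ~0.63 kWh

print(round(energy_joules), round(energy_kwh, 2))
```

Less than a kilowatt-hour per block, which is why the scheme would need stones sliding down all night to keep even dim temple lights running.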
It would work in a way similar to, though far less efficiently than, modern energy storage designs that use concrete blocks instead of stone:
As larger and more elaborately lit temples were built, larger pyramids would be required to power them, culminating in the Great Pyramid of Giza, which is 139 metres tall and weighs 5.5 million tonnes.
She believes the comparatively tiny burial chambers inside were a later addition by pharaohs who wanted to be buried inside literal representations of power. Furthermore, she says that while the Ancient Egyptians may have used solar powered lights to worship the sun god Ra, it’s possible the most famous and powerful of the Egyptian gods only gained his preeminence due to the existence of ancient solar power.
A Lost Technology
There is no evidence of ancient solar cells being used in Egypt after the end of the Old Kingdom. It appears the technology was lost when Egypt broke into separate states and entered an extended period of civil wars. The reason why it was lost is likely because both copper and obsidian would have been in heavy demand during wartime. Copper could be combined with tin to make bronze weapons and armour, while obsidian could be used to make brittle but extremely sharp spear and arrow heads that could ruin the day of anyone unlucky enough not to own bronze armor.
While it appears obsidian solar energy technology was lost in Egypt some 4,200 years ago, it is possible it was used outside of Egypt at a later date. Discoveries made by underwater archeologists in the 60s are now being reevaluated, and it appears pieces of obsidian with traces of copper oxide on them recovered from Atlantean ruins in the Aegean Sea may be heavily corroded ancient solar cells.
Ronald was born more years ago than he can remember. He first became interested in environmental matters when he was four years old after the environment tried to kill him by smashing fist sized hailstones through the roof of his parents’ Toowoomba home. Swearing revenge, he began his lifelong quest to reduce the harm the environment could cause. By the time he was eight, he was already focused on using the power of the sun to stop fossil fuel emissions destabilizing the climate. But it took him about another ten years to focus on it in a way that wasn’t really stupid.
11/10 and gold star a truly fine creative effort. And I thought you were just a cog of the capitalist machine. /Satire 🙂 How wrong I am. Nice work.
Really do like your writing. In depth independent analysis of the solar industry is desperately needed, given all the sales propaganda deliberately designed to mislead [especially around vpp’s and batteries], so good on you. Perhaps you can do something like John Codogan at Autoexpert TV on YouTube. A smart guy who delivers vital information in a very entertaining way. Yes its purpose is to drive traffic to his web site, but he is doing a community service, as you guys do too.
We can also deduce that because they had copper wire, electric current and access to simple magnets (magnetite) they could have made a basic telephone able to communicate over small distances. Also by digging up ancient fire pits there was much evidence of fused silica being used to produce glass. This could in turn be made into thin strings to produce crude optical fibre which could be used to transmit light into pyramids and for communication.
Bruce Pascoe was asked if he found any traces of copper wire or glass in any of his investigations of early settlements.
“None at all mate so we reckon those fellas went straight to 5G”
It really was a great article, very well presented – had me going for a bit, but with a slight “niggle” that something didn’t quite add up – despite all the facts that I already knew to be true – the give away for me was the levitation aspect…!
Just had a good read through all of the International Journal of Antiquity episodes, ahem, articles, and the case is air tight. Fantastic blogging as usual Ronald, you’ve stumbled on an extremely important breakthrough.
Also the Romans had sewers before Great Britain; that's why the British suffered the bubonic plague
Second, the Egyptians could make stained glass; they had measurements and everything. What killed them was change. Dig up everything and you will see. Also google the name Yahweh; you have lots to learn. We steal and learn from one another all the time
Now this may seem far fetched, but is it possible that the Egyptians used the pyramids as a ground solar energy collection device, and that the electricity was produced on satellite-like objects that orbited above in space where sunlight was everlasting?
|
Archeologists Uncover Ancient Egyptian Solar Power
A set of papers published in the International Journal of Antiquity last month has revealed ancient Egyptians may have had access to a technology that, in our modern world, has only taken off over the last few decades. While it has been known since the 1930s that simple chemical batteries were used for gold electroplating in Egypt thousands of years ago, until now it was thought these could only have been recharged by replacing the chemicals and copper rods inside. But thanks to literally groundbreaking research, it’s now known ancient Egyptians had access to a primitive form of solar power.
Using only simple tools, they were able to use obsidian — a type of volcanic glass composed mostly of silicon — with high levels of naturally occurring boron to construct simple solar cells using hand drawn copper wire. While modern solar panels are over 100 times as efficient, power from these very basic cells would have been sufficient for electroplating and potentially other uses.
A Ground Breaking Discovery
The team that made the discovery was led by Professor Anna Kumar of Jones University. They were called in to investigate after pieces of etched obsidian stone were found when ground was broken for a new shopping complex, only hundreds of metres from the great pyramids.
The stones were dated to the Old Kingdom of Egypt, which ruled the Nile Valley from 4,706 to 4,201 years ago. They were etched on both sides with shallow grooves containing traces of copper. Similar, smaller pieces of obsidian with intact inlaid copper had been found in the past, but were believed to be jewellery – a type of Ancient Egyptian bling. But these pieces were much too large for that. One suggestion was they were used as decoration in a temple, but one team member had a different idea that led to a shocking discovery.
On this breakthrough, Professor Kumar says:
“One of my students thought, if the larger obsidian shards had been inlaid with copper it would resemble the fine wiring on the solar panels his family uses to cope with Cairo’s blackouts. We took a piece already in our collection with the copper lines intact and discovered that,
|
yes
|
Ancient Civilizations
|
Did the ancient Egyptians have electricity?
|
yes_statement
|
the "ancient" egyptians had "electricity".. "electricity" was present in "ancient" egypt.
|
https://www.africarebirth.com/how-ancient-egyptian-knowledge-influenced-the-invention-of-electricity/
|
How Ancient Egyptian Knowledge Influenced the Invention of ...
|
How Ancient Egyptian Knowledge Influenced the Invention of Electricity
Electricity is the fuel for most technologies today. Many devices simply will not operate without electricity. The world has now become so dependent on electricity that many people find it extremely difficult to live without it.
If you research the inventor of electricity, several sources credit it to the Greek scientist Thales of Miletus. However, all the sources you will find omit the fact that Thales of Miletus received the basis of his knowledge from Kemet, otherwise known as ancient Egypt.
A set of papers published in the International Journal of Antiquity has revealed that ancient Egyptians may have had access to a technology that – in our modern world – only took off over the last few decades. While it has been known since the 1930s that simple chemical batteries were used for gold electroplating in Egypt thousands of years ago, until now it was believed that these could only have been recharged by replacing the chemicals and copper rods inside. However, thanks to groundbreaking research, it is now known that Egyptians had access to a primitive form of solar power.
Using only simple tools, they were able to use obsidian – a type of volcanic glass composed mostly of silicon, with high levels of naturally occurring boron – to construct simple solar cells using hand-drawn copper wire. While modern solar panels are much more efficient, power from these very basic cells would have been sufficient for electroplating and potentially other uses.
Pieces of etched obsidian stone were found when the ground was broken for a new shopping complex only hundreds of meters from the great pyramids. The stones were dated to the Old Kingdom of Egypt, which ruled the Nile Valley from 4,706 to 4,201 years ago. They were etched on both sides with shallow grooves containing traces of copper. Similar pieces of obsidian with intact inlaid copper had been found in the past but were believed to be jewelry. These pieces, however, were much too large for that.
Many researchers agree that in the distant past, electricity was widely utilised in the land of the Pharaohs, with the Baghdad Battery being one of the most discussed examples of such advanced technology. Knowledge of ancient Egyptian electroplating dates back to the 1930s, when Walter Konig discovered what is now called the Baghdad Battery. It is now thought that electroplating is a development that came along after the original uses of ancient solar electricity. It is believed that one of these was a form of primitive electric lighting.
While most people are aware that solar panels turn light into electricity, very few know that if the current is run the other way, a solar panel will give off light. This feature is used to test solar panels to check for microcracks and other defects in what is known as electroluminescence testing. No soot has been found in the corridors of the pyramids or the tombs of the kings because these areas were lit using electricity. Relief carvings could also show that the Egyptians used hand-held torches powered by cable-free sources. The arc lamp used in the Lighthouse of Alexandria is further evidence that electricity might have been used in ancient Egypt.
Moreover, experiments with models of the Baghdad Battery have produced between 3 and 5 volts. This is not a lot of "juice" when compared with modern standards, but it was enough to power "something" some thousands of years ago.
Numerous sources extend credit for the invention of electricity to Thales of Miletus. Scholars claimed that he discovered that when amber was rubbed with other materials, it became charged with an unknown force that had the power to attract objects such as dried leaves, feathers, bits of cloth and other lightweight material.
Professor Anaya Khan, a scientist from Egypt, shed some light on this saying, “Of all the sources investigated, all of them omitted the fact that Thales of Miletus received an education in ancient Kemet. His ability for keen observation can be attributed to the people of ancient Kemet. He studied in Egypt and Babylon, bringing back knowledge of physics, astronomy and mathematics. Documented evidence shows that the Babylonians copied and obtained all of their knowledge from the people of ancient Kemet.”
Although the Kemites did not directly invent electricity, their influence and teachings enabled Thales of Miletus to discover the invention that eventually had an enormous impact on the development of electricity.
|
How Ancient Egyptian Knowledge Influenced the Invention of Electricity
Electricity is the fuel for most technologies today. Many devices simply will not operate without electricity. The world has now become so dependent on electricity that many people find it extremely difficult to live without it.
If you research the inventor of electricity, several sources credit it to the Greek scientist Thales of Miletus. However, all the sources you will find will omit the fact that Thales of Miletus received the basis of his knowledge from Kemet, otherwise known as ancient Egypt.
A set of papers published in the International Journal of Antiquity have revealed that ancient Egyptians may have had access to a technology that – in our modern world – only took off over the last few decades. While it has been known since the 1930s that single chemical batteries were used for gold electroplating in Egypt thousands of years ago, until now it was believed that these could only have been recharged by replacing the chemicals and copper rods inside. However, thanks to groundbreaking research, it is now known that Egyptians had access to a primitive form of solar power.
Using only simple tools, they were able to use obsidian – a type of volcanic glass composed mostly of silicon, with high levels of naturally occurring boron – to construct simple solar cells using hand-drawn copper wire. While modern solar panels are much more efficient, power from these very basic cells would have been sufficient for electroplating and potentially other uses.
Pieces of etched obsidian stone were found when the ground was broken for a new shopping complex only hundreds of meters from the great pyramids. The stones were dated to the Old Kingdom of Egypt, which ruled the Nile Valley from around 4,200 years ago. They were etched on both sides with shallow grooves containing traces of copper. Similar pieces of obsidian with intact copper laid in them had been found in the past but were believed to be jewelry. These pieces, however, were much too large for that.
Many researchers agree that in the distant past, electricity was widely utilised in the land of the Pharaohs, with the Baghdad Battery being one of the most discussed examples of such advanced technology.
|
yes
|
Ancient Civilizations
|
Did the ancient Egyptians have electricity?
|
yes_statement
|
the "ancient" egyptians had "electricity".. "electricity" was present in "ancient" egypt.
|
https://talesoftimesforgotten.com/2020/01/01/did-the-ancient-egyptians-have-electric-lighting/
|
Did the Ancient Egyptians Have Electric Lighting? - Tales of Times ...
|
Did the Ancient Egyptians Have Electric Lighting?
It has been widely claimed on the internet that the ancient Egyptians had electric lighting. This claim is made largely based on an extremely tendentious interpretation of a series of relief carvings from the southern crypt of the ancient Egyptian Temple of Hathor at Dendera and the fact that some Egyptian tombs and temples do not currently have very much soot on their ceilings.
Unfortunately for those who want to believe that the ancient Egyptians had electric lighting, they simply didn’t. As I will show, the reliefs from Dendera almost certainly don’t depict lightbulbs and there is a much more reasonable explanation for why some Egyptian temples and tombs do not have soot on their ceilings.
The so-called “Dendera lightbulb”
The primary piece of evidence that people like to cite in support of the idea that the ancient Egyptians had electric lighting is a set of three relief carvings from the southern crypt of the Temple of Hathor at Dendera, which depict a scene that has become known as the “Dendera lightbulb.” The relief carvings depict a giant lotus flower with the god Harsomtus arising in the form of a serpent from it, surrounded by a bubble of magical energy. In two of the three carvings, the energy bubble emerging from the lotus flower is held up by a miniature male figure dressed in a loincloth with a sun disk on its head. In all three carvings, a full-sized male figure in a loincloth stands behind the lotus flower.
Many people are convinced that these reliefs from the southern crypt of the Temple of Hathor at Dendera depict incandescent lightbulbs. They think that the stem of the lotus flower is an electrical wire, that the magical bubble around the serpent is the glass bulb, and that the serpent itself is the filament. This, however, is, quite frankly, an absurd interpretation. It is the sort of interpretation that I would normally assume to be satirical, but yet there are many people who are firmly convinced that it is correct.
If you look at the reliefs carefully, you will notice that there are a lot of obvious signs that should tip you off that they are not depictions of incandescent lightbulbs. For one thing, in all three reliefs, the snake quite clearly has eyes and a mouth. The lotus flower the snake is emerging from quite clearly has petals. It is also worth noting that the filament in an incandescent lightbulb is actually a horizontal wire running between two vertical supply wires. The filament has to be connected to a wire on both sides or it will not produce light. The snake in the relief carvings from Dendera, however, is only attached to the lotus flower by its tail; its head is not attached to anything.
There is really nothing in the relief carvings from Dendera that can be sensibly interpreted as looking anything more than extremely vaguely like a modern incandescent lightbulb—or any other kind of lightbulb. Furthermore, the scene from Dendera actually depicts a well-attested scene from Egyptian mythology. The story of Harsomtus coming forth from the primordial lotus flower is well-known from surviving Egyptian texts.
ABOVE: Photograph from Wikimedia Commons of one of the reliefs from the southern crypt of the ancient Egyptian Hathor Temple at Dendera that many people (wrongly) think depicts a lightbulb
ABOVE: Photograph from Wikimedia Commons of another one of the reliefs from the crypt of the Temple of Hathor at Dendera that many people (wrongly) think depicts a lightbulb
ABOVE: Photograph from Wikimedia Commons of another one of the reliefs from the crypt of the Temple of Hathor at Dendera that many people (wrongly) think depicts a lightbulb
But what about the soot?
The other major piece of evidence that is often used to support the idea that the ancient Egyptians had electric lighting is the fact that (supposedly) ancient Egyptian tombs and temples do not have any soot damage on their ceilings. Nonetheless, we know that the Egyptians decorated and painted the interiors of these buildings after they were built. Since the buildings often have no windows or other openings, the insides would have been pitch black, meaning the decorators must have brought in some kind of light source, allowing them to see the walls they were decorating.
Supporters of the idea that the ancient Egyptians had electric lighting routinely claim that the Egyptians could not have used any kind of torches or fire for lighting inside these tombs and temples without getting soot everywhere. Therefore, they assert that the Egyptians clearly must have had electric lighting, because there is no other way the decorators would have been able to see inside the temples and tombs without leaving soot.
This hypothesis has multiple problems. First of all, contrary to what the supporters of the view that the ancient Egyptians had electric lighting like to claim, the ceilings of many Egyptian temples and tombs are actually covered in soot. For instance, the ceiling of the Temple of Hathor at Dendera is, in fact, absolutely caked in thick, black soot. The soot on the ceilings of Egyptian buildings, however, is mostly not from the ancient Egyptian decorators, but rather from later periods.
ABOVE: Photograph of the ceiling of the hypostyle hall of the Temple of Hathor at Dendera, which is absolutely caked in thick, black soot
As I discuss in this article I wrote in November 2019 about the real reason why Tutankhamun is so famous, we actually know that, from the Byzantine Period onwards, many people took up residence as squatters in ancient Egyptian tombs and temples. In fact, the interior walls of many of the pharaohs’ tombs in the Valley of the Kings are covered in ancient graffiti. For instance, a frustrated visitor from the Byzantine Period left a graffito in Greek on the wall in the Tomb of Ramesses IV (KV2) complaining about how he couldn’t read the hieroglyphs, saying: “I cannot read the writing on the wall!”
Some soot has also been left by eighteenth and nineteenth-century visitors and explorers, who would routinely explore the ruined temples and tombs carrying lit torches. Ironically, the reason why so many Egyptian temples and tombs have such startlingly clean ceilings is actually because those ceilings have been extensively and meticulously cleaned in modern times by skilled restoration experts.
Second of all, the light source that the ancient Egyptian decorators most likely would have used while working on the interior decorations of Egyptian temples and tombs would have been castor oil lamps, which burn clean and do not leave soot. In other words, the relative absence of soot in some Egyptian buildings that have not been opened since antiquity is actually pretty much exactly what we would expect. Once you realize that the ancient Egyptians used oil lamps, the whole argument that they must have had electric lighting totally falls apart.
ABOVE: Photograph from Wikimedia Commons of a variety of ancient terracotta oil lamps from the Hellenistic and Roman Periods. Countless examples of oil lamps like the ones shown here have been recovered by archaeologists from locations in Egypt and in other countries.
A complete lack of historical and archaeological evidence
The Temple of Hathor at Dendera was constructed during the Hellenistic and Roman Periods of Egyptian history. These are actually quite well-documented periods of Egyptian history. If the Egyptians were using electric lights during this time period, we would expect to find some historical documentation of it. Instead, electric lighting is never mentioned anywhere in any ancient sources.
Furthermore, if the ancient Egyptians really had electric lighting, we would expect to find extensive archaeological evidence of this. Not only would we expect to find examples of lightbulbs themselves, but we would expect to find extensive mines for the precise minerals needed to make the filaments for the lightbulbs, large numbers of workshops dedicated to manufacturing lightbulbs, massive power plants to generate electricity to power the lightbulbs, and extensive networks of electrical wires used to conduct electric current from the power plants to the lightbulbs.
Instead, we find absolutely none of these things whatsoever. Although the complete lack of evidence does not necessarily prove beyond a shadow of a doubt that the ancient Egyptians did not have electric lighting, the total absence of all the things from the historical and archaeological record that we would expect to find if they did have electric lights gives us strong reason to believe that the ancient Egyptians probably did not have electric lighting.
Meanwhile, while no archaeological evidence has ever been found for electric lightbulbs in ancient Egypt, archaeologists have actually excavated ancient Egyptian oil lamps in large quantities. There is no doubt about the fact that the ancient Egyptians had oil lamps. Therefore, we must ask ourselves the question: "Which is more likely: that the ancient Egyptians used oil lamps, which we know they had in large quantities, or that the ancient Egyptian decorators used electric lights, for whose existence we have absolutely no archaeological or literary evidence whatsoever?"
I think most reasonable people will conclude that the ancient Egyptians had oil lamps but not electric lighting, because that is what the historical evidence indicates. Those who are of an Ancient Aliens inclination, however, will doubtlessly continue to insist, despite the complete lack of evidence, that the ancient Egyptians had electric lighting.
ABOVE: Photograph from Wikimedia Commons of a modern replica of an ancient terracotta oil lamp dating to the time of the Roman Empire. While many such oil lamps have been found by archaeologists, archaeologists have never uncovered the slightest evidence of electric lighting in ancient Egypt.
By the way, the ancient Greeks didn’t have laptop computers either…
A very similar example to the so-called “Dendera lightbulb” is an ancient Greek funerary stele dating to around 100 BC or thereabouts, currently held in the Getty Villa, that depicts a wealthy Greek woman reaching out to touch an object held by one of her child-slaves. The object held by the slave is most likely either a shallow box, a mirror, or a wax writing tablet. The object in the slave’s hands has two holes in the side. These are most likely drill holes from where a bronze or wooden fixture of some kind—or perhaps another piece of marble—would have originally been attached.
An elaborate conspiracy theory about the stele was published in an article in the British tabloid newspaper The Daily Mail in February 2016, claiming that the object held by the slave child in the stele is actually a laptop computer and that the round holes in the side of the tablet are USB ports. This is, of course, the sort of ridiculous nonsense that one can reliably expect from The Daily Mail. In any case, since then, images of the stele have gone viral on the internet, with many people claiming that it does indeed represent an ancient Greek laptop computer.
There is, of course, no logical reason to think that the object in the stele is a laptop. We have no evidence that the ancient Greeks had laptop computers and no laptop computer from ancient Greece has ever been found by archaeologists. The closest thing we have to an “ancient Greek computer” is the Antikythera mechanism, which, as I explain in this article I published in December 2019, is only technically a “computer” in the broadest possible sense and is nothing at all like a modern digital computer.
When I first looked at a photograph of the notorious stele, my initial guess was that the object in the slave’s hands was probably a wax writing tablet, since it looks very much like writing tablets depicted in other works of Greek and Roman art, which are often hinged wooden boxes with wax on the inside. On the other hand, Jeffrey Spier, the senior curator of antiquities at the J. Paul Getty Museum, says that it is probably a shallow box or a mirror.
ABOVE: Photograph of the ancient Greek funerary stele in the Getty Villa depicting a wealthy woman looking at an object—probably a shallow box, a mirror, or a wax writing tablet—held by her slave
Author: Spencer McDaniel
Hello! I am an aspiring historian mainly interested in ancient Greek cultural and social history. Some of my main historical interests include ancient religion, mythology, and folklore; gender and sexuality; ethnicity; and interactions between Greek cultures and cultures they viewed as foreign. I graduated with high distinction from Indiana University Bloomington in May 2022 with a BA in history and classical studies (Ancient Greek and Latin languages), with departmental honors in history. I am currently a student in the MA program in Ancient Greek and Roman Studies at Brandeis University.
One thought on “Did the Ancient Egyptians Have Electric Lighting?”
You are basing your article purely on the bas reliefs from Dendera? What about the countless other hieroglyphs showing what appear to be pairs of waist-height, glass-globed lamps emitting light or some other form of energy? How do you explain their purpose? Where are the artifacts in museums that explain what these are? The reason people expect they had electrical items is that the explanation for these floor-standing lamps has not been stated or proven. Artifacts similar to those in the hieroglyphs have not been displayed to the public. If chemistry was well known so far back, why rule out the knowledge of electricity? Why do you assume only incandescent light? What about fluorescent? Several fluorescent objects similar to those in the hieroglyphs have been created and proven to work. Have you tested any of these possibilities? If you don’t know the truth, have an open mind and stop writing articles with no solid foundation.
|
Instead, electric lighting is never mentioned anywhere in any ancient sources.
Furthermore, if the ancient Egyptians really had electric lighting, we would expect to find extensive archaeological evidence of this. Not only would we expect to find examples of lightbulbs themselves, but we would expect to find extensive mines for the precise minerals needed to make the filaments for the lightbulbs, large numbers of workshops dedicated to manufacturing lightbulbs, massive power plants to generate electricity to power the lightbulbs, and extensive networks of electrical wires used to conduct electric current from the power plants to the lightbulbs.
Instead, we find absolutely none of these things whatsoever. Although the complete lack of evidence does not necessarily prove beyond a shadow of a doubt that the ancient Egyptians did not have electric lighting, the total absence of all the things from the historical and archaeological record that we would expect to find if they did have electric lights gives us strong reason to believe that the ancient Egyptians probably did not have electric lighting.
Meanwhile, while no archaeological evidence has ever been found for electric lightbulbs in ancient Egypt, archaeologists have actually excavated ancient Egyptian oil lamps in large quantities. There is no doubt about the fact that the ancient Egyptians had oil lamps. Therefore, we must ask ourselves the question: "Which is more likely: that the ancient Egyptians used oil lamps, which we know they had in large quantities, or that the ancient Egyptian decorators used electric lights, for whose existence we have absolutely no archaeological or literary evidence whatsoever?"
I think most reasonable people will conclude that the ancient Egyptians had oil lamps but not electric lighting, because that is what the historical evidence indicates. Those who are of an Ancient Aliens inclination, however, will doubtlessly continue to insist, despite the complete lack of evidence, that the ancient Egyptians had electric lighting.
ABOVE: Photograph from Wikimedia Commons of a modern replica of an ancient terracotta oil lamp dating to the time of the Roman Empire. While many such oil lamps have been found by archaeologists, archaeologists have never uncovered the slightest evidence of electric lighting in ancient Egypt.
By the way, the ancient Greeks didn’t have laptop computers either…
|
no
|
Ancient Civilizations
|
Did the ancient Egyptians have electricity?
|
yes_statement
|
the "ancient" egyptians had "electricity".. "electricity" was present in "ancient" egypt.
|
https://medium.com/@kudretkaba/did-antic-egyptian-use-electricity-657dbca1240e
|
Did Antic Egyptian Use Electricity? | by Kudret Kaba | Medium
|
Did Antic Egyptian Use Electricity?
Looking at the ancient tablets and remains that have been found, the ancient Egyptians had batteries and light bulbs. Based on these indicators, we can argue that ancient Egyptians used electricity. There are even strong theories that the pyramids were used as power plants.
The underground pipeline runs towards the Nile. It takes water from the Nile and carries it to the room just above. With the electric cables produced in the room, electricity is stored in electric balloons outside the pyramid. These electric balloons are said to have had sufficient capacity to supply electricity to all of Egypt. In this way, electricity reached all of ancient Egypt.
This room has a sarcophagus-like quartz stone. Quartz is the only mineral in the world that can produce electricity. Water extracted from the Nile exerts pressure on the quartz stone in the room. The quartz stone, which is under pressure, starts to produce electricity, and this electricity is transmitted through the cables.
Electric balloons
Can such an advanced civilization go into space? By the Kardashev scale, could they have reached the level of a type 1 or perhaps type 2 civilization? Could life forms that we call aliens have lived before, developed, and left this world? I think these are the questions we have to ask.
|
Did Antic Egyptian Use Electricity?
Looking at the ancient tablets and remains that have been found, the ancient Egyptians had batteries and light bulbs. Based on these indicators, we can argue that ancient Egyptians used electricity. There are even strong theories that the pyramids were used as power plants.
The underground pipeline runs towards the Nile. It takes water from the Nile and carries it to the room just above. With the electric cables produced in the room, electricity is stored in electric balloons outside the pyramid. These electric balloons are said to have had sufficient capacity to supply electricity to all of Egypt. In this way, electricity reached all of ancient Egypt.
This room has a sarcophagus-like quartz stone. Quartz is the only mineral in the world that can produce electricity. Water extracted from the Nile exerts pressure on the quartz stone in the room. The quartz stone, which is under pressure, starts to produce electricity, and this electricity is transmitted through the cables.
Electric balloons
Can such an advanced civilization go into space? By the Kardashev scale, could they have reached the level of a type 1 or perhaps type 2 civilization? Could life forms that we call aliens have lived before, developed, and left this world? I think these are the questions we have to ask.
|
yes
|
Ancient Civilizations
|
Did the ancient Egyptians have electricity?
|
yes_statement
|
the "ancient" egyptians had "electricity".. "electricity" was present in "ancient" egypt.
|
https://www.timelessmyths.com/history/egyptian-light-bulb/
|
Egyptian Light Bulb: Shocking Proof of Electricity in Ancient Egypt or ...
|
Egyptian Light Bulb: What You Need To Know To Separate Fact From Fiction
A wall inscription in the Temple of Hathor at Dendera seemingly depicts an ancient Egyptian light bulb which some interpret as evidence that the Egyptians had electricity.
It has been suggested that the inscriptions resemble a Crookes Tube, an experimental electrical discharge tube invented in the 19th century.
Archeologists and Egyptologists have dismissed these claims as fiction but the Dendera light continues to spark curiosity. We will embark on a journey to the ancient past to separate truth from myth and explain the origin and history of the so-called Dendera light.
Dendera Light: What Does It Mean?
The temple complex at Dendera in Upper Egypt was the cult center of Hathor, the ancient Egyptian goddess of the sky, fertility, and women, and the mother of the sun god Ra. An inscription on a stone relief located in an underground passageway beneath the main temple has been a source of controversy for several decades.
Pseudohistorians have interpreted the inscription, otherwise known as the Dendera light bulb, as evidence that the ancient Egyptians possessed knowledge of electricity and actually had electric lights. Their theory has been dismissed by archeologists and Egyptologists who maintain that the carving is a depiction of an Egyptian creation myth.
– Pseudohistorians Claim the Carving Depicts an Ancient Light Bulb
At first glance, the inscription appears to resemble an elongated bulb and a wavy line inside that looks like a wire. The ‘wire’ leads to a small box on which we see a deity kneeling. Next to the bulb, we see two-armed Djed pillars connected to the wire-like object in the middle and a baboon armed with two knives.
According to the Swiss pseudo-archeologist and novelist Erich Von Daniken, the carving represents a light bulb. He and other non-mainstream historians interpret it as proof of the existence of electrical lighting in Ancient Egypt. Daniken goes even further and suggests the snake served as a filament and the Djed pillar as an insulator, while the tube itself was an ancient light bulb. The baboon is interpreted as a guardian who makes sure the device is not misused.
– It’s Not a Light Bulb, But a Scene From One of the Oldest Egyptian Myths, Egyptologists Say
Where pseudo-historians see a light bulb, Egyptologists see a depiction of a well-known motif from ancient Egyptian mythology. On closer inspection, it becomes clear the wire is, in fact, a snake emerging from a lotus flower. If we’re to understand what the carving really depicts, it’s necessary to refer to an ancient Egyptian creation myth.
The bulb-like object is believed to depict the womb of the sky goddess Nut, the wife of the earth god Geb, through which the sun god Ra traveled every day. A consensus among Egyptologists is that the snake we see in the middle of the bulb-like object is the god Harsomptus, commonly known as Horus. It has also been suggested that the bulb-like object symbolizes the womb of Nut, from which Horus emerges in the guise of a serpent to give birth to a new day.
Horus: The Unifier of Two Lands and the God of the Sky
One of the most important ancient Egyptian deities, Horus, known under various other names such as Her, Heru, and Hor, plays a crucial role in Egyptian mythology. Horus had been worshipped in prehistoric Egypt and later came to be associated with kingship and the political unity of Egypt.
He is depicted in many forms, including that of a serpent, falcon, and child. An Egyptian myth mentions Ihy, the son of Horus and Hathor, coming into existence out of a lotus flower. It has prompted some Egyptologists to put forth a theory that ancient light bulbs depicted on temple carvings are lotus flower bulbs serving as divine incubators.
A Lotus in a Shape of a Lamp or Evidence of Electricity in Ancient Egypt?
For most Egyptologists, the inscriptions beneath Hathor’s temple at Dendera are firmly rooted in Egyptian mythology. However, a Norwegian electrical engineer was the first to claim that the image depicted an ancient Egyptian lamp.
The theory was brought to public attention when two Austrian authors published a book in which they argued that the Dendera light served as an electrical device that illuminated the Temple of Hathor.
Another electrical engineer constructed a working model of the Dendera light and found that: "The light filament grows wider until it fills the whole glass balloon. This is exactly what we see on the pictures in the subterranean chambers of the Hathor sanctuary."
– The Dendera Relief Is Not Alone in Depicting Lotus Shaped ‘Lamps’
The lotus flower was sacred to ancient Egyptians for many reasons. In one version of the ancient creation myth, the lotus flower had been the first thing to emerge from the waters of the shoreless primordial sea that existed before the creation of the world.
The lotus then gave birth to the sun god Atum-Ra, the first deity thought to have created other gods. The lotus in the shape of a lamp frequently appears as a motif on ancient reliefs and carvings. This is sufficient proof for most Egyptologists who interpret the Dendera light as a symbol of the sun god emerging from the lotus flower.
– Why the Dendera Light Controversy Persists
Despite the consensus among Egyptologists, some find the idea of the existence of ancient Egyptian electricity too exciting to give up on. According to them, the Dendera light had been a secret known only to priests who had access to the sacred parts of the temple and performed rituals.
As a part of the New Year celebrations, the priests in the temple created a small amount of light that would have emanated in waves from the serpent’s body. Nevertheless, the inscriptions do not seem to corroborate this theory. Numerous sources suggest the Dendera inscriptions have a mythological meaning, while the evidence that would support the light bulb theory is entirely lacking.
– Is There a Secret Message Hidden in Dendera Inscriptions?
No historical texts referring to the existence of ancient Egyptian lighting techniques have been discovered. Archeologists haven’t found any electrical artifacts in tombs and ancient sites throughout Egypt.
It is possible, however, that our knowledge is incomplete and that there’s a deeper meaning to Dendera wall reliefs. The absence of definitive proof leaves room for speculation.
– A Short History of Dendera Temple Complex
The temple complex at Dendera is arguably the best-preserved temple complex in Egypt. Dendera was the cult center of the goddess Hathor, one of the most important ancient Egyptian deities.
The site on which the temple had been built served as a necropolis in the Early Dynastic Period, but the existing structure dates back to the Ptolemaic period. The temple complex remained in use during Roman times when the hypostyle hall was built.
The large temple complex houses several temples, shrines, a basilica, two birth houses, and a sacred lake within its walls. Roman Emperors from Tiberius (14 – 37 AD) to Marcus Aurelius (161 – 180 AD) continued to make additions to the complex that remained in use until the Christian period.
Hathor: Ancient Egypt’s Most Beloved Goddess
The splendor of the Dendera Temple Complex and its continuous use bears witness to the extraordinary popularity of Hathor. The goddess was worshipped from pharaonic to Roman times as a symbol of fertility and life.
According to an ancient Egyptian belief, Hathor would travel from her temple at Dendera to Edfu, where the temple of her husband Horus is located. The period was referred to as a ‘Happy Reunion’.
The Temple of Hathor is also famous for the Zodiac of Dendera, a bas-relief depicting human and animal figures discovered on the ceiling of a chapel in the Temple of Hathor.
Egyptologists believe it represents a night skyscape that the Egyptians used as a map of the sky. It was previously thought to have served as a giant horoscope. The Zodiac of Dendera was taken to France in the 19th century and is on display in the Louvre Museum.
Conclusion
Dendera reliefs remain a source of controversy, with some claiming that it depicts a light bulb while Egyptologists dismiss the theory, stating the reliefs refer to an ancient creation myth.
Evidence for the former is lacking as the theory rests on conjecture and is not consistent with what we know of Ancient Egypt. Still, the Dendera light remains a mystery for a number of reasons:
The inscription seems to depict a bulb-like object with a wire and a cable
It was found in an underground corridor beneath the Temple of Hathor, where sacred rituals were performed
Some suggest that the Dendera bulb was used in New Year celebration rituals
Egyptologists believe the inscriptions depict a sun god emerging from a womb
The Dendera reliefs will continue to be an object of study for Egyptologists and those intrigued by the possibility they might depict an ancient light bulb.
|
Egyptian Light Bulb: What You Need To Know To Separate Fact From Fiction
A wall inscription in the Temple of Hathor at Dendera seemingly depicts an ancient Egyptian light bulb which some interpret as evidence that the Egyptians had electricity.
It has been suggested that the inscriptions resemble a Crookes Tube, an experimental electrical discharge tube invented in the 19th century.
Archeologists and Egyptologists have dismissed these claims as fiction but the Dendera light continues to spark curiosity. We will embark on a journey to the ancient past to separate truth from myth and explain the origin and history of the so-called Dendera light.
Dendera Light: What Does It Mean?
The temple complex at Dendera in Upper Egypt was the cult center of Hathor, the ancient Egyptian goddess of the sky, fertility, women, and the mother of the sun god Ra. An inscription on a stone relief located in an underground passageway beneath the main temple has been a source of controversy for several decades.
Pseudohistorians have interpreted the inscription, otherwise known as the Dendera light bulb, as evidence that the ancient Egyptians possessed knowledge of electricity and actually had electric lights. Their theory has been dismissed by archeologists and Egyptologists who maintain that the carving is a depiction of an Egyptian creation myth.
– Pseudohistorians Claim the Carving Depicts an Ancient Light Bulb
At first glance, the inscription appears to resemble an elongated bulb and a wavy line inside that looks like a wire. The ‘wire’ leads to a small box on which we see a deity kneeling. Next to the bulb, we see two-armed Djed pillars connected to the wire-like object in the middle and a baboon armed with two knives.
According to the Swiss pseudo-archeologist and novelist Erich von Däniken, the carving represents a light bulb. He and other non-mainstream historians interpret it as proof of the existence of electrical lighting in Ancient Egypt.
|
no
|
Ancient Civilizations
|
Did the ancient Egyptians have electricity?
|
no_statement
|
the "ancient" egyptians did not have "electricity".. "electricity" was not known to the "ancient" egyptians.
|
https://www.oliverheatcool.com/about/blog/news-for-homeowners/13-fun-electrical-facts/
|
13 Fun Electrical Facts - Oliver Heating & Cooling
|
Blog
Post navigation
13 Fun Electrical Facts
At Oliver, we love learning new things about the world of electricity. Our electrical repair experts did some research and found some interesting facts you may not know. Check them out:
1. Electrical signals travel at close to the speed of light, which is 186,000 miles per second.
2. Before electricity was a way of life, ancient Egyptians were aware that lightning and shocks from electric fish were very powerful. They used to refer to these fish as the “Thunderers of the Nile.”
3. Electricity can be created using water, wind, the sun, and even animal waste.
4. When lightning strikes, it flows from the cloud to the ground, but the part we see is actually the charge going from the ground back up into the cloud.
5. Electricity is sometimes used as electroconvulsive therapy (ECT), where patients are given electrically induced seizures in order to treat psychiatric illnesses.
6. In the 1880’s, there was a “war of currents” between Nikola Tesla and Thomas Edison. Tesla helped invent AC current and Edison helped invent DC current, and both wanted their currents to be popularized. AC won the battle because it’s safer and can be used over longer distances.
7. Iceland is the country that uses the most electricity annually. Their consumption is about 23% more than the U.S.
8. Static electricity occurs when the electrons from one object jump to another object.
9. The world’s biggest light bulb is located in Edison, New Jersey. It’s 14 feet tall, weighs eight tons, and sits on top of the Thomas Edison Memorial Tower.
10. The average U.S. home uses 11,000 kWh of electricity every year.
11. The first central power plant in the U.S. was Pearl Street Station, in Manhattan. It was built in 1882 and served 85 customers.
12. Electricity travels in closed loops called “circuits.” It must have a complete path before the electrons can move. If a circuit is open, electrons can’t flow.
13. Electricity is present in our bodies – our nerve cells use it to pass signals to our muscles.
If you’re in need of electrical repair or installation, give us a call. We’d love to help!
|
Blog
Post navigation
13 Fun Electrical Facts
At Oliver, we love learning new things about the world of electricity. Our electrical repair experts did some research and found some interesting facts you may not know. Check them out:
1. Electrical signals travel at close to the speed of light, which is 186,000 miles per second.
2. Before electricity was a way of life, ancient Egyptians were aware that lightning and shocks from electric fish were very powerful. They used to refer to these fish as the “Thunderers of the Nile.”
3. Electricity can be created using water, wind, the sun, and even animal waste.
4. When lightning strikes, it flows from the cloud to the ground, but the part we see is actually the charge going from the ground back up into the cloud.
5. Electricity is sometimes used as electroconvulsive therapy (ECT), where patients are given electrically induced seizures in order to treat psychiatric illnesses.
6. In the 1880’s, there was a “war of currents” between Nikola Tesla and Thomas Edison. Tesla helped invent AC current and Edison helped invent DC current, and both wanted their currents to be popularized. AC won the battle because it’s safer and can be used over longer distances.
7. Iceland is the country that uses the most electricity annually. Their consumption is about 23% more than the U.S.
8. Static electricity occurs when the electrons from one object jump to another object.
9. The world’s biggest light bulb is located in Edison, New Jersey. It’s 14 feet tall, weighs eight tons, and sits on top of the Thomas Edison Memorial Tower.
10. The average U.S. home uses 11,000 kWh of electricity every year.
11. The first central power plant in the U.S. was Pearl Street Station, in Manhattan. It was built in 1882 and served 85 customers.
12. Electricity travels in closed loops called “circuits.” It must have a complete path before the electrons can move.
|
no
|
Etymology
|
Did the phrase "raining cats and dogs" originate from 17th century England?
|
yes_statement
|
the "phrase" "raining" "cats" and "dogs" "originated" from "17th" "century" england.. the "phrase" "raining" "cats" and "dogs" can be traced back to "17th" "century" england.
|
https://www.theguardian.com/notesandqueries/query/0,5753,-22408,00.html
|
What on earth is the background for the phrase "it''s raining cats and ...
|
What on earth is the background for the phrase "it's raining cats and dogs"?
Eivind, Oslo Norway
The phrase originated from Tudor times. At that time for most poor people the only place to keep their animals was in the house with the people - and domestic animals would often be put up in the rafters. Roofing at the time was simple thatch that dropped directly into the house so that at times of heavy downpour rain would fall through the thatch, and either flush or encourage the "pets" to return to ground level. Hence the phrase raining cats and dogs.
Sarah, London
Cats and dogs is a mistaken phrase for the word capadupa, which I believe is Italian for waterfall, although I do not speak the language myself. Can someone verify this for me?
M. Burgess, Shrewsbury Shropshire
Sadly, there seems to be no firm answer. Webster has a fanciful explanation based on mythology but since the phrase is first found in the 17th century as "dogs and polecats" and then in 1738 in its modern form, an "ancient mythology" explanation seems unlikely. There is another story in Morris of people seeing drowned cats and dogs in the streets after heavy rain but that is not convincing either.
Beverley Rowe, London
I once heard that it originated in Europe, and was caused by a combination of the poor drainage system, and the large number of stray cats and dogs. After a heavy rainstorm, a large number of these unfortunate animals were drowned and their bodies left in the streets. When the rain stopped and people emerged from their houses, they would see these animals, and it would appear that it really had rained cats and dogs.
Jeremy Miles, Derby UK
The phrase is supposed to have originated in England in the 17th century. City streets were then filthy and heavy rain would occasionally carry along dead animals. Richard Brome's The City Witt, 1652 has the line 'It shall rain dogs and polecats'. Also, cats and dogs both have ancient associations with bad weather. Witches were supposed to ride the wind during storms in the form of cats. In northern mythology the storm god Odin had dogs as attendants.
Steve Gannon, London England
I was told this by the guide to the subterranean caverns in Edinburgh. Basically, dead dogs and cats would be left at the side of the street or in these caverns. After a particularly heavy rainfall they would be washed out along with all the other detritus and float down the road, giving the impression that they'd fallen from the sky.
Bob, Kemnay Aberdeenshire Scotland
I don't know but the Welsh version, "Bwrw hen wragedd a ffyn", is equally bizzare - "Raining old ladies and sticks".
Huw Roberts, Caerdydd UK
I have heard it suggested that in earlier times a heavy downpour would wash accumulated rubbish down the drains which ran in the centre of the road - including dead cats and dogs.
Don Stewart, Hexham UK
Comes from the days of old when the cities did not maintain their streets and alleys. Through disease, animals (dogs/cats) would lie dead in the sides of streets. When the rains would come it would wash the animals down over the cobbles hence the term. From what the tour guides say it started in Edinburgh, but who really knows.
Lorn, Edinburgh UK
In the days before decent street drainage, drowned stray animals could often be found in the streets of cities after a storm. People would comment that it had been raining cats and dogs and the phrase caught on. This seems plausible to me anyway, although I'm afraid I can't remember where I read it.
Lucy Peacock, Malaga Spain
In the 16th Century, houses had thatched roofs - thick straw, piled high, with no wood underneath. It was the only place for animals to get warm, so all the dogs, cats and other small animals (mice, rats, and bugs) lived in the roof. When it rained it became slippery and sometimes the animals would slip and fall off the roof-hence the saying "It's raining cats and dogs."
Charlie Johnston, London England
The phrase 'raining cats and dogs' was coined by Thomas Chandler Haliburton, a Nova Scotian Judge and Author who was the creator of the fictional character "Sam Slick".
Slick was a Yankee Clockpeddlar from whose mouth Haliburton was able to poke fun and try to stimulate his fellow Nova Scotians beginning in 1836.
Other axioms coined by Haliburton that have become commonplace in everyday speech in several English speaking countries are: truth is stranger than fiction, upper crust, quick as a wink, six of one, half a dozen of the other, the early bird gets the worm, jack of all trades and master of none, barking up the wrong tree and others.
The first of his 11 books was 'The Clockmaker' which I believe is still in print, but I have not been able to locate any of the others.
Anne Christiansen, Qualicum Beach, B.C. Canada
I heard there was a large explosion in a Japanese car factory....after which it rained Datsun Cogs. Sorry.
|
What on earth is the background for the phrase "it's raining cats and dogs"?
Eivind, Oslo Norway
The phrase originated from Tudor times. At that time for most poor people the only place to keep their animals was in the house with the people - and domestic animals would often be put up in the rafters. Roofing at the time was simple thatch that dropped directly into the house so that at times of heavy downpour rain would fall through the thatch, and either flush or encourage the "pets" to return to ground level. Hence the phrase raining cats and dogs.
Sarah, London
Cats and dogs is a mistaken phrase for the word capadupa, which I believe is Italian for waterfall, although I do not speak the language myself. Can someone verify this for me?
M. Burgess, Shrewsbury Shropshire
Sadly, there seems to be no firm answer. Webster has a fanciful explanation based on mythology but since the phrase is first found in the 17th century as "dogs and polecats" and then in 1738 in its modern form, an "ancient mythology" explanation seems unlikely. There is another story in Morris of people seeing drowned cats and dogs in the streets after heavy rain but that is not convincing either.
Beverley Rowe, London
I once heard that it originated in Europe, and was caused by a combination of the poor drainage system, and the large number of stray cats and dogs. After a heavy rainstorm, a large number of these unfortunate animals were drowned and their bodies left in the streets. When the rain stopped and people emerged from their houses, they would see these animals, and it would appear that it really had rained cats and dogs.
Jeremy Miles, Derby UK
The phrase is supposed to have originated in England in the 17th century. City streets were then filthy and heavy rain would occasionally carry along dead animals. Richard Brome's The City Witt, 1652 has the line 'It shall rain dogs and polecats'. Also, cats and dogs both have ancient associations with bad weather.
|
yes
|
Etymology
|
Did the phrase "raining cats and dogs" originate from 17th century England?
|
yes_statement
|
the "phrase" "raining" "cats" and "dogs" "originated" from "17th" "century" england.. the "phrase" "raining" "cats" and "dogs" can be traced back to "17th" "century" england.
|
https://consciouscat.net/what-does-raining-cats-and-dogs-mean/
|
What Does Raining Cats and Dogs Mean? Origins of the Phrase ...
|
What Does Raining Cats and Dogs Mean? Origins of the Phrase
The phrase “raining cats and dogs” is a common expression used to describe heavy rain or a sudden downpour. While the phrase’s meaning is well-known, its origins are less clear. Many theories suggest where the phrase came from, ranging from practical to mythical.
Some suggest the phrase originated from the poor drainage systems in 17th-century Europe. Meanwhile, others believe that it may have its roots in Norse mythology. Another theory suggests the phrase was a French word misheard by English speakers.
While there’s no definitive answer, examining these theories can tell us how the phrase has evolved. Let’s explore the various theories about the origins of the phrase “raining cats and dogs.”
What Does “Raining Cats and Dogs” Mean?
In modern usage, the idiom “raining cats and dogs” is typically used to describe very heavy rain. It often has the connotation that the rain is unexpected or sudden. For example, someone might say, “I was going to take a walk, but it’s raining cats and dogs out there.”
The phrase can also describe other things coming down heavily or in large quantities. For instance, someone might say, “The leaves are falling like it’s raining cats and dogs.” That would describe a particularly heavy autumnal leaf fall.
Alternatively, they could say, “My inbox is full of emails; it’s raining cats and dogs in here.” It would describe an overwhelming number of incoming messages.
Overall, “raining cats and dogs” is a common and flexible idiom used to describe heavy rain. It can also define other things coming down heavily, in large quantities, or in a chaotic or disorderly situation.
Origin of “Raining Cats and Dogs”
The origin of the idiom “raining cats and dogs” is uncertain. It has been a topic of much speculation and debate among language experts for centuries. While the true origin of the phrase is unknown, several theories detail how it may have come to be.
Here are some cultural and historical references that help provide context for its use. They also shed light on the various associations and meanings the phrase has acquired over time.
Medieval Europe
It’s believed that “raining cats and dogs” came from medieval Europe, where people built their homes with thatched straw roofs. Thatched roofs were popular as they were easy to assemble and provided good insulation. But, during heavy rain, the straw would weigh down the roof, causing it to collapse.
Small animals, like cats and dogs, would sometimes hide in the rafters of these roofs to escape the rain. When the roof would collapse during a heavy rainstorm, the animals would fall from the roof. This unexpected sight during heavy rain might have led people to say it was “raining cats and dogs.”
While there isn’t enough evidence to support this theory, it is popular among language experts. So, it’s often suggested as a possible origin of the phrase. The theory has some historical merit, as thatched roofs were common in medieval Europe.
These roofs often collapsed during heavy rain in the medieval era. Additionally, cats and dogs lived as pets and might have sought shelter in the rafters during a storm.
Norse Mythology
Another “raining cats and dogs” theory suggests it came from Norse mythology. In Norse mythology, cats and dogs were associated with storms and bad weather. The Norse god Odin owned a pair of cats he sent into the clouds to battle with Thor, the god of thunder.
The cats would claw at the clouds and make them release their rain, leading to the belief that cats caused rain. So, the phrase described the idea that during heavy rainstorms, these mythological creatures were fighting in the clouds and causing the rain to fall more heavily.
While there is little to no evidence to support this theory, it serves as a possible origin of the phrase.
French Phrase “Catadoupe”
Another theory suggests that it may have originated from the French phrase “catadoupe.” In French, “catadoupe” means waterfall or cataract. This phrase may have entered England during the Norman Conquest of 1066 when French became the language of the English court.
When spoken quickly or with a regional accent, “catadoupe” may sound like “cats and dogs.” The phrase was possibly misheard and repeated by English speakers. Eventually, it became the English expression “raining cats and dogs.”
While little evidence supports this theory, it’s a plausible explanation. The fact that many people spoke French in England during the Middle Ages makes it believable. The similarity between “catadoupe” and “cats and dogs” also adds to its plausibility.
Greek Expression
Another theory suggests that it may have come from the Greek expression “cata doxa.” In Greek, “cata doxa” means “contrary to experience or belief.” This expression was used to describe situations that were unexpected or contrary to what one might expect.
“Raining cats and dogs” describes an unusual and unexpected situation that seems contrary to belief. If it was raining so hard that it seemed impossible, people might say it was “raining cats and dogs.”
The fact that “cata doxa” and “cats and dogs” sound similar could have contributed to its evolution. But no clear evidence suggests that the phrase was borrowed from Greek.
Jonathan Swift’s Poem
One theory suggests the phrase refers to poor drainage systems on buildings in 17th-century Europe. During heavy rain, the drains on buildings could become clogged with debris and other materials. As a result, they would overflow and disgorge their contents onto the streets below.
This could include the corpses of any animals accumulated in the drains. It was a grim and unpleasant sight for onlookers.
This theory is referenced in Jonathan Swift’s 1710 poem “Description of a City Shower.” Swift describes a heavy London rain that sends unpleasant items along with the flood. That includes “drowned puppies, stinking sprats, all drenched in mud, dead cats and turnip-tops.”
“Raining cats and dogs” describes an unpleasant and unexpected sight. So, it’s consistent with the idea that it originated from seeing animal corpses and other debris during heavy rain.
European Folklore
One possible reference is the association of cats and dogs with witches and their familiars in European folklore. During the Middle Ages, cats and dogs were often portrayed as companions of witches. The animals were also believed to have magical powers.
Thus, the idea of cats and dogs falling from the sky during heavy rain may have been seen as a supernatural occurrence. It was believed that these witches could control the weather. People also thought they might use their powers to make it rain cats and dogs as a form of punishment or to cause chaos.
The association of these animals with witches dates back to the medieval period. Cats were also thought to communicate with spirits and shape-shift into other animals. Similarly, dogs were considered to have the power to detect and ward off evil spirits.
The phrase may have served as a warning of impending danger or a sign of bad luck.
Animal Welfare in the 18th Century
Another reference is the association of cats and dogs with animal welfare in the 1800s. During this period, there were concerns about the treatment of cats and dogs in urban areas. The animals were often neglected or mistreated in these areas.
Nonsensical Phrase
There are many theories on the origin of the phrase “raining cats and dogs.” But it’s also possible that there is no clear or logical explanation for its origin. It could be a nonsensical phrase used for its humorous or exaggerated effect.
It could have described particularly heavy rainfall without any underlying metaphor or symbolism. It resembles other English expressions for heavy rain, like “raining pitchforks” or “raining hammer handles.” These expressions convey the intensity of a storm without any deeper meaning.
It’s also possible that the phrase evolved from many different origins and influences. As with many aspects of language and culture, the phrase’s origins may have been lost or forgotten. New interpretations and meanings emerged as the words were passed down through generations.
Ultimately, the exact origin of the phrase “raining cats and dogs” may remain a mystery.
Conclusion
“Raining cats and dogs” is a well-known expression used to describe heavy rain or a sudden downpour. Despite its widespread use, the phrase’s origins remain mysterious. There have been several different theories proposed over the years.
Regardless of its origin, its popularity highlights how language and culture evolve. From ancient myths to practical considerations, the phrase has been shaped by numerous cultural and historical influences.
|
The phrase can also describe other things coming down heavily or in large quantities. For instance, someone might say, “The leaves are falling like it’s raining cats and dogs.” That would describe a particularly heavy autumnal leaf fall.
Alternatively, they could say, “My inbox is full of emails; it’s raining cats and dogs in here.” It would describe an overwhelming number of incoming messages.
Overall, “raining cats and dogs” is a common and flexible idiom used to describe heavy rain. It can also define other things coming down heavily, in large quantities, or in a chaotic or disorderly situation.
Origin of “Raining Cats and Dogs”
The origin of the idiom “raining cats and dogs” is uncertain. It has been a topic of much speculation and debate among language experts for centuries. While the true origin of the phrase is unknown, several theories detail how it may have come to be.
Here are some cultural and historical references that help provide context for its use. They also shed light on the various associations and meanings the phrase has acquired over time.
Medieval Europe
It’s believed that “raining cats and dogs” came from medieval Europe, where people built their homes with thatched straw roofs. Thatched roofs were popular as they were easy to assemble and provided good insulation. But, during heavy rain, the straw would weigh down the roof, causing it to collapse.
Small animals, like cats and dogs, would sometimes hide in the rafters of these roofs to escape the rain. When the roof would collapse during a heavy rainstorm, the animals would fall from the roof. This unexpected sight during heavy rain might have led people to say it was “raining cats and dogs.”
While there isn’t enough evidence to support this theory, it is popular among language experts. So, it’s often suggested as a possible origin of the phrase. The theory has some historical merit, as thatched roofs were common in medieval Europe.
These roofs often collapsed during heavy rain in the medieval era.
|
no
|
Etymology
|
Did the phrase "raining cats and dogs" originate from 17th century England?
|
yes_statement
|
the "phrase" "raining" "cats" and "dogs" "originated" from "17th" "century" england.. the "phrase" "raining" "cats" and "dogs" can be traced back to "17th" "century" england.
|
https://www.northshorepetresort.com.au/10-incredible-dog-facts/
|
10 Incredible Dog Facts - Northshore Pet Resort
|
10 Incredible Dog Facts
Firstly, here is a summarised list for all those skimming the blogs- you’re welcome!
Below includes more detail and our thoughts on each of these incredible dog facts relevant to dogs in Brisbane!
1) A dog’s hearing registers sounds of 35,000 vibrations a second.
2)The phrase raining cats and dogs originated in 17th century England when many cats and dogs drowned during heavy downpours of rain and when rivers burst their banks. Their bodies would be seen floating in the rain torrents that raced through the streets, appearing as though it had ‘rained cats and dogs’.
3) The Greyhound can sprint 67 km/h.
4) Dogs’ innate behaviour to circle before sleep creates comfort.
5) Dogs don’t have an appendix.
6) Dogs have 3 eyelids.
7) Dogs have fewer taste buds than humans.
8) Dogs can smell diseases such as diabetes and cancer.
9) Dogs can smell about 10,000 times better than humans.
10) Two of my all-time favourite dog facts are “All dogs are therapy dogs – just some are undercover” and “Everyone thinks they have the best dog… and they’re right”.
1) *A dog’s hearing is very acute. Dogs can register sounds of 35,000 vibrations a second (compared to our 20,000)
This means your dog can hear your heartbeat from across the room! A dog’s sense of hearing is so good (and so much better than ours) that they can likely hear human (and other animal heartbeats) as well.
Have you ever noticed a dog staring at you and wondered why? This could be a sign your 4-legged best friend is intently listening to the sound of your heart.
Staying calm and practising your own deep breathing and relaxation techniques are a great way to positively affect highly strung, energetic or anxious pets and a technique we use in our Brisbane North Shore Pet Resort to reassure dogs (and cats) that they are in a safe and secure environment.
2) Once, my daughter asked why I say such silly things like “It’s raining cats and dogs” when clearly that doesn’t happen – I’ve never had the heart to tell her the tragic origins…
The phrase raining cats and dogs originated in 17th century England when many cats and dogs drowned during heavy downpours of rain and when rivers burst their banks. Their bodies would be seen floating in the rain torrents that raced through the streets, appearing as though it had ‘rained cats and dogs’.
Instead, I used the ‘g rated’ version. I explained that the term “Cats and dogs” apparently comes from the Greek expression cata doxa, meaning “contrary to experience or belief.” Alternatively, some believe the term “Cats and Dogs” misuses the now-obsolete word catadupe, meaning waterfall.
3) As your dog slips through the gate and flies down the road ignoring any attempts to call them back, you’d be forgiven for assuming your dog is the fastest breed, but your dog is likely running between 24 to 32 km/h. But the fastest dog breed is the greyhound, with a speed of 67 km/h.
To put that into perspective, a cheetah runs 110 to 120 km/h, a thoroughbred around 80 km/h, and the fastest human recorded was 44.72 km/h. But for me… maybe 2 km/h.
How fast do you think you could sprint?
4) Have you seen your dog tirelessly spinning round and round before lying down? I promise you – they have a good reason for doing so.
Dogs turn in circles before lying down out of instinct. It’s a hardwired evolutionary trait. In the wild, this action would turn long grass (or any other rough surface) into a more comfortable place to sleep, while driving out “unwanted guests” like lizards, ants, snakes, and insects.
Spinning in circles before lying down is an act of self-preservation that allows one last look for potential predators. By determining the direction of the wind, they can position themselves to notice a threatening scent, helping a dog anticipate an attack even while sleeping.
5) Why do dogs not get appendicitis? Because dogs don’t have an appendix!
Some animals, including primates, wombats, rabbits, apes, and humans, have an appendix. But it is not present in dogs, cats, cows, sheep, goats, horses, or monkeys. Interestingly, human and dog bodily functions and organs do not generally differ much; the appendix is the only organ a dog doesn’t have but a human does. Instead, dogs have a similar small pouch called a cecum (or caecum), even though it plays a minimal role in digestion.
6) Dogs have three eyelids – an upper and lower; and one hidden between.
The third eyelid is also called the nictitating membrane. A dog’s third eyelid is an extra eyelid that functions as a windshield wiper to keep dust and debris out of their eyes. It sweeps back and forth across the eye’s surface, providing protection and spreading the tear film.
It is said that humans have this too – you know that little pink thing nestled in the corner of your eye? Well, some say it’s the remnant of a third eyelid. Known as the “plica semilunaris,” it’s more prominent in birds and a few mammals but also helps protect eyes from dirt and dust.
7) A dog’s sense of taste is about one-sixth as powerful as ours.
We have approximately 9,000 taste buds, whereas dogs have 1,700 and cats have 473. The animal with the most taste buds is the catfish! These scavenger fish are so sensitive they can detect tastes in the water from kilometres away with more than 175,000 taste buds. No wonder I never had any luck catching anything!
8) Dogs use their superb sensing capabilities to detect a range of smells, from explosives to low blood sugar levels.
So it is no surprise that dogs can detect hormonal fluctuations and diseases such as different types of cancer, diabetes and even depression. It was initially believed that cancers would give off extra heat compared to healthy parts of the body, and that’s how animals could sense it. However, dogs have proven themselves even more capable. Even though dogs are famously known for detecting cancer, they can be more effective when trained to sniff out specific types. Using samples from known cancer patients, dogs can be trained to identify skin, breast, and bladder cancers.
9) It is no secret my dogs know when I am opening a packet of biscuits, and it’s not just their razor-sharp hearing that gives them the advantage; dogs can smell about 10,000 times better than humans.
We don’t stand a chance against their highly acute detection senses! I say, “just share the biscuits”!
10) Two of my all-time favourite dog facts are –
A) All dogs are therapy dogs – just some are undercover.
B) Everyone thinks they have the best dog… and they’re right
Did anything from the above list stand out to you? Any dog facts you didn’t know before?
* As always, do your research, introduce changes to medicine, diet, training or lifestyle slowly. Be careful with pets with sensitivities. Stay up-to-date and be vigilant. Use trusted and reputable pet resorts and industry professionals. Talk to your vet about additional and alternative treatments and defences suitable for your own unique experience. Finally – enjoy time with your pet; they are precious and oh, so much fun!
|
) that they are in a safe and secure environment.
2) Once, my daughter asked why I say such silly things like “It’s raining cats and dogs” when clearly that doesn’t happen – I’ve never had the heart to tell her the tragic origins…
The phrase raining cats and dogs originated in 17th century England when many cats and dogs drowned during heavy downpours of rain and when rivers burst their banks. Their bodies would be seen floating in the rain torrents that raced through the streets, appearing as though it had ‘rained cats and dogs’.
Instead, I used the ‘G-rated’ version. I explained that the term “Cats and dogs” apparently comes from the Greek expression cata doxa, meaning “contrary to experience or belief.” Alternatively, some believe the term “Cats and Dogs” misuses the now-obsolete word catadupe, meaning waterfall.
3) As your dog slips through the gate and flies down the road ignoring any attempts to call them back, you’d be forgiven for assuming your dog is the fastest breed, but your dog is likely running at between 24 and 32 km/h. The fastest dog breed is the greyhound, with a top speed of 67 km/h.
To put that into perspective, a cheetah runs 110 to 120 km/h, a thoroughbred around 80 km/h, the fastest human recorded was 44.72 km/h, but for me … maybe 2 km/h
How fast do you think you could sprint?
4) Have you seen your dog tirelessly spinning round and round before lying down? I promise you – they have a good reason for doing so.
Dogs turn in circles before lying down out of instinct. It’s a hardwired evolutionary trait. In the wild, this action would turn long grass (or any other rough surface) into a more comfortable place to sleep, while driving out “unwanted guests” like lizards, ants, snakes, and insects.
Spinning in circles before lying down is an act of self-preservation that allows one last look for potential predators.
|
yes
|
Etymology
|
Did the phrase "raining cats and dogs" originate from 17th century England?
|
yes_statement
|
the "phrase" "raining" "cats" and "dogs" "originated" from "17th" "century" england.. the "phrase" "raining" "cats" and "dogs" can be traced back to "17th" "century" england.
|
https://www.tlctranslation.com/figurative-language-from-around-the-world/
|
Figurative language from around the world
|
Figurative language from around the world
If you live in the United States, you’ve probably heard the phrases a dime a dozen, it’s raining cats and dogs, and ignorance is bliss. But every country has its own set of figurative language, and this blog will explore some of the more common idioms in various countries.
An idiom is a group of words whose meanings cannot be determined from the literal meanings of the words it is made of; i.e., using up in the air for “undecided.” They are categorized as figurative language. The word itself comes from the late 16th-century French word idiome or late Latin from Greek idiōma “private property, peculiar phraseology.”
One of the oldest known idioms is “an eye for an eye, a tooth for a tooth,” which comes from the code of Hammurabi in 1780 BC.
• A dime a dozen – this phrase began around 1800 following the first minted dime in 1796. At that time, many goods such as eggs or apples were advertised to cost a dime a dozen in the US. The phrase began as a way to promote good value for money. This then evolved into an idiom that means something nearly worthless as it is easily available.
• It’s raining cats and dogs – this idiom is said to have originated in England during the 17th century. City streets were then filthy and heavy rain would occasionally carry along dead animals. Cats and dogs also have ancient associations with bad weather.
• Ignorance is bliss – this phrase comes from Thomas Gray’s 1768 poem “Ode on a Distant Prospect of Eton College.” The quote states: “Where ignorance is bliss, ’tis folly to be wise” — meaning, you’re better off not knowing.
Some idioms used by English speakers actually originated in China.
• 一石二鸟 translates to two birds one stone. English speakers added the word “kill” to the beginning because the phrase felt incomplete.
• 老狗玩不出新把戏 translates to old dogs can’t play new tricks. This phrase pretty closely mirrors the English idiom.
Here is a look at other popular idioms in other countries and their meanings.
• Spanish – Abrir la caja de los truenos or opening the box of thunder, which is equivalent to “opening a can of worms.”
• Swedish – Skägget i brevlådan or the beard in the mailbox, which translates in English to “to be caught with your pants down.”
And while idioms can be more challenging to translate, they are essential to individualistic expression. They offer cultural understandings of societal standards, principles, and beliefs and allow us insight into the thoughts, emotions, and views of the speaker’s background.
The best way to translate an idiom is to find an equivalent idiom in the target language.
|
Figurative language from around the world
If you live in the United States, you’ve probably heard the phrases a dime a dozen, it’s raining cats and dogs, and ignorance is bliss. But every country has its own set of figurative language, and this blog will explore some of the more common idioms in various countries.
An idiom is a group of words whose meanings cannot be determined from the literal meanings of the words it is made of; i.e., using up in the air for “undecided.” They are categorized as figurative language. The word itself comes from the late 16th-century French word idiome or late Latin from Greek idiōma “private property, peculiar phraseology.”
One of the oldest known idioms is “an eye for an eye, a tooth for a tooth,” which comes from the code of Hammurabi in 1780 BC.
• A dime a dozen – this phrase began around 1800 following the first minted dime in 1796. At that time, many goods such as eggs or apples were advertised to cost a dime a dozen in the US. The phrase began as a way to promote good value for money. This then evolved into an idiom that means something nearly worthless as it is easily available.
• It’s raining cats and dogs – this idiom is said to have originated in England during the 17th century. City streets were then filthy and heavy rain would occasionally carry along dead animals. Cats and dogs also have ancient associations with bad weather.
• Ignorance is bliss – this phrase comes from Thomas Gray’s 1768 poem “Ode on a Distant Prospect of Eton College.” The quote states: “Where ignorance is bliss, ’tis folly to be wise” — meaning, you’re better off not knowing.
Some idioms used by English speakers actually originated in China.
• 一石二鸟 translates to two birds one stone. English speakers added the word “kill” to the beginning because the phrase felt incomplete.
• 老狗玩不出新把戏 translates to old dogs can’t play new tricks.
|
yes
|
Etymology
|
Did the phrase "raining cats and dogs" originate from 17th century England?
|
yes_statement
|
the "phrase" "raining" "cats" and "dogs" "originated" from "17th" "century" england.. the "phrase" "raining" "cats" and "dogs" can be traced back to "17th" "century" england.
|
https://liberalarts.oregonstate.edu/wlf/what-idiom-definition-examples
|
What is an Idiom? || Definition & Examples | | College of Liberal Arts ...
|
What is an Idiom? Transcript (English & Spanish Subtitles Available in Video, Click HERE for Spanish Transcript)
By Sindya Bhanoo, Oregon State Assistant Professor of Creative Writing and Prize-Winning Novelist
Idioms are phrases which cannot be understood simply by looking at the meaning of the individual words in the phrase. We use idiomatic expressions all the time. If your friend is “beating around the bush,” they are avoiding speaking with you about something directly. “That’s the way the ball bounces” suggests that some things are just out of our control. When someone says “It’s raining cats and dogs,” they mean it’s raining heavily. Cats and dogs are not actually falling from the sky. That last idiom may have originated during the 17th century in England, when cats and dogs were known to live in thatched roofs. During heavy rains, they may have slipped and fallen into the streets.
The mystery novelist Agatha Christie loved to use idioms. Christie’s beloved detective, Hercule Poirot is often found to be “in a brown study,” or fully absorbed in his own thoughts. In the short story “Jewelry Robbery at the Grand Metropolitan,” Poirot and his reliable assistant Hastings encounter a woman with some missing pearls. Hastings recounts Poirot’s behavior:
“He was staring thoughtfully out of the window, and seemed to have fallen into a brown study.”
“Cooked the goose” is an idiom that means “to ruin.” Thanks to Justice Wargrove, Seton was in big trouble.
Did you catch the other bits of interesting language in that section? The phrase “Hooding his eyes” is metaphorical. So is “shrunken lips.” To learn more about metaphors, you can watch my colleague Tim Jensen’s video. If you really want to get into it, some idioms are also metaphors.
The word idiom comes from the Greek word idios, which means “one’s own” or “private.” That’s apt because idioms are kind of like private jokes between the people who know them. Since idioms are also culturally specific, they aren’t solely connected to language. In the UK, when someone says they are “chuffed to bits,” they mean that they are very pleased. If you speak American English, you may not be familiar with that idiom.
The linguist Anatoly Liberman, who has studied and written about the origin of idioms extensively, found that some idioms are highly localized, never used outside of a small community. Idioms come and go, and many have died out. He says that although idioms are phrases, we learn them the way we learn words. It is the entire phrase that has a meaning. Often, the order of the words in the phrase cannot be changed around. You could say that idioms are a kind of literary and cultural shorthand.
“Can you wrap your head around that?” means “Did that make sense to you?” Because idioms cannot be literally translated, their meanings cannot be predicted. Foreign language speakers have a particularly hard time wrapping their heads around idioms.
The TED program asked some of its translators for idioms that might confound English speakers. In Latvian, “To blow little ducks,” means “to talk nonsense or to lie.” In French, “The carrots are cooked!” means the situation can’t be changed. It’s similar to the English idiom “There’s no use crying over spilled milk.”
Without an explanation, these would be all Greek to me. Hey, that’s another idiom: “It’s all Greek to me.” That one can be found in Shakespeare’s Julius Caesar, though it was likely in use before that.
If you have a favorite idiom, in any language, I’d love to hear it. You can share it in the comments on the video. Well, I think that’s a wrap.
|
What is an Idiom? Transcript (English & Spanish Subtitles Available in Video, Click HERE for Spanish Transcript)
By Sindya Bhanoo, Oregon State Assistant Professor of Creative Writing and Prize-Winning Novelist
Idioms are phrases which cannot be understood simply by looking at the meaning of the individual words in the phrase. We use idiomatic expressions all the time. If your friend is “beating around the bush,” they are avoiding speaking with you about something directly. “That’s the way the ball bounces” suggests that some things are just out of our control. When someone says “It’s raining cats and dogs,” they mean it’s raining heavily. Cats and dogs are not actually falling from the sky. That last idiom may have originated during the 17th century in England, when cats and dogs were known to live in thatched roofs. During heavy rains, they may have slipped and fallen into the streets.
The mystery novelist Agatha Christie loved to use idioms. Christie’s beloved detective, Hercule Poirot is often found to be “in a brown study,” or fully absorbed in his own thoughts. In the short story “Jewelry Robbery at the Grand Metropolitan,” Poirot and his reliable assistant Hastings encounter a woman with some missing pearls. Hastings recounts Poirot’s behavior:
“He was staring thoughtfully out of the window, and seemed to have fallen into a brown study.”
“Cooked the goose” is an idiom that means “to ruin.” Thanks to Justice Wargrove, Seton was in big trouble.
Did you catch the other bits of interesting language in that section? The phrase “Hooding his eyes” is metaphorical. So is “shrunken lips.” To learn more about metaphors, you can watch my colleague Tim Jensen’s video. If you really want to get into it, some idioms are also metaphors.
The word idiom comes from the Greek word idios, which means “one’s own” or “private.” That’s apt because idioms are kind of like private jokes between the people who know them.
|
yes
|
Etymology
|
Did the phrase "raining cats and dogs" originate from 17th century England?
|
yes_statement
|
the "phrase" "raining" "cats" and "dogs" "originated" from "17th" "century" england.. the "phrase" "raining" "cats" and "dogs" can be traced back to "17th" "century" england.
|
https://historymyths.wordpress.com/2016/12/30/raining-cats-and-dogs/
|
Raining Cats and Dogs ? | History Myths Debunked
|
Raining Cats and Dogs ?
“Back in the old days, when it rained people would put their cats and dogs up in the rafters so they would not get wet. [Variation: animals would climb up themselves to avoid the weather.] But the roofs often leaked, and the beams would get slippery so the animals fell, and it really was ‘raining cats and dogs’!”
There’s no myth here, just a question about the origins of this common saying. The best I can do is to report that a word search in JSTOR made me think the origin of the phrase may be Irish, as those are the earliest written usages I found. My opinion: this is a nonsensical phrase, like “raining pitchforks,” used to indicate severe rainfall and is not based on anything concrete.
This entry was posted on Friday, December 30th, 2016 at 10:08 am and is filed under Sayings. You can follow any responses to this entry through the RSS 2.0 feed.
4 Responses to Raining Cats and Dogs ?
Here are a few possibilities that show even by the 1850s and later people were puzzled by the phrase.
A Cornwall term for willows “bursting catkins” was ‘cats and dogs’ which “Increase in size rapidly after a few warm April showers.”
The book Slang and its Analogues, 1891 gave three possibilities. 1) The creation of satirist (and author of Gulliver’s Travels) Jonathan Swift in his humorous Polite Conversation, 1738: “I know Sir John will go, though he was sure it would rain Cats and Dogs” based on his 1710 description of a city shower. 2) The Greek cata doxas for a downpour out of the ordinary. or 3) The French catadoupe for a waterfall.
A great, thoroughly researched site for phrase origins. Watch out, though—you’ll get sucked in for hours 🙂
“The much more probable source of ‘raining cats and dogs’ is the prosaic fact that, in the filthy streets of 17th/18th century England, heavy rain would occasionally carry along dead animals and other debris. The animals didn’t fall from the sky, but the sight of dead cats and dogs floating by in storms could well have caused the coining of this colourful phrase.” Goes on to say Swift’s use (as mentioned by the commenter above) is in reference to such bad sanitation.
|
Raining Cats and Dogs ?
“Back in the old days, when it rained people would put their cats and dogs up in the rafters so they would not get wet. [Variation: animals would climb up themselves to avoid the weather.] But the roofs often leaked, and the beams would get slippery so the animals fell, and it really was ‘raining cats and dogs’!”
There’s no myth here, just a question about the origins of this common saying. The best I can do is to report that a word search in JSTOR made me think the origin of the phrase may be Irish, as those are the earliest written usages I found. My opinion: this is a nonsensical phrase, like “raining pitchforks,” used to indicate severe rainfall and is not based on anything concrete.
This entry was posted on Friday, December 30th, 2016 at 10:08 am and is filed under Sayings. You can follow any responses to this entry through the RSS 2.0 feed.
4 Responses to Raining Cats and Dogs ?
Here are a few possibilities that show even by the 1850s and later people were puzzled by the phrase.
A Cornwall term for willows “bursting catkins” was ‘cats and dogs’ which “Increase in size rapidly after a few warm April showers.”
The book Slang and its Analogues, 1891 gave three possibilities. 1) The creation of satirist (and author of Gulliver’s Travels) Jonathan Swift in his humorous Polite Conversation, 1738: “I know Sir John will go, though he was sure it would rain Cats and Dogs” based on his 1710 description of a city shower. 2) The Greek cata doxas for a downpour out of the ordinary. or 3) The French catadoupe for a waterfall.
A great, thoroughly researched site for phrase origins. Watch out, though—you’ll get sucked in for hours 🙂
|
no
|
Etymology
|
Did the phrase "raining cats and dogs" originate from 17th century England?
|
yes_statement
|
the "phrase" "raining" "cats" and "dogs" "originated" from "17th" "century" england.. the "phrase" "raining" "cats" and "dogs" can be traced back to "17th" "century" england.
|
https://medium.com/@interestingshit/the-origins-of-idioms-where-your-favorite-phrases-were-hatched-126a5d6942d5
|
The Origins of Idioms: Where Your Favorite Phrases Were Hatched ...
|
The Origins of Idioms: Where Your Favorite Phrases Were Hatched
So you’ve found yourself enjoying a morning bowl of your favourite bran-based breakfast cereal and you have a sudden urge to find out exactly what an idiom is. You Google it, and now here you are.
The very act you were required to do, closing that tab on the nerdy rapping Uber driver and opening whatever search engine you happen to favor (be it Yahoo, Bing or, if you’re more into communication through looped images of animals shaking what their mamas gave them, Giphy), means chances are good you’ll probably relate the story during a later conversation with an elderly neighbour by saying, “I know what you’re thinking, Howard; how did I even find anything on idioms? I just googled it!”
What is ‘google’ exactly, though? Of course it’s the name of a search engine. That name is first and foremost one simple thing: a word. But in our modern times the business name ‘Google’ has now become an all-encompassing verb or expression for doing research, if your hunting for spoilers on the upcoming season of Walking Dead can be called ‘research’.
You can take an idiom seriously, just not too literally
Idioms can generally be summarized as words being put together to form a phrase that is best not to be taken literally. Earlier in this post there was made mention of animals shaking what their mamas gave them, an expression that was popularized in the early 90s. The key here is: what exactly did your mama just give you to shake?
When it comes to the human side of this story here’s hoping you aren’t given that command when you’ve just come from visiting mom and you’re still holding the two bottles of cola she handed you on your way out the door.
The everyday idiom
There are plenty of common idioms that have been around for so long they’ve just been adopted as part of the everyday English language. Estimates are at least 25,000 idioms are in circulation. We all know that when there is heavy precipitation and someone exclaims that it’s ‘raining cats and dogs’ we aren’t expecting to see furry critters plummeting to the earth.
This saying can be traced back to 17th or 18th century England, when heavy rainfall would flush out the numerous small animal carcasses usually scattered around larger hygienically-challenged cities like London and float them down the streets.
Those that find themselves in desperate shape on the financial front are sometimes described as being ‘piss poor’ or as not having a ‘pot to piss in’. What does pee have to do with financial stability, though?
In simpler days, urine was collected in a bucket (usually by those that had little to no money or income; in other words, piss poor) and then sold to tanneries to soak animal hides in to help remove hair and soften the skin. If you were so broke you couldn’t even afford the bucket, you officially did not have a pot to piss in.
In more recent times ‘pissed’ has also been the label attached to anyone who has imbibed one too many pints at the pub and become ‘piss drunk’. Patron leaves their drinking establishment of choice and staggers to the nearest alley (or street corner, phone booth, fire hydrant…you get the point) and proceeds to pee, regardless of whether there’s an animal hide underfoot or not. In some instances they might just pee themselves.
That same person would probably be described as being ‘three sheets to the wind’, a phrase borrowed from nautical terminology: the ropes attached to a ship’s sails to keep them in place could come loose in the wind and flutter about, causing the vessel to teeter back and forth like an inebriated crew member.
Idioms are global
English is not the only language to have idioms, of course. If you’re in Sweden and you’ve been caught red handed you might hear the phrase ‘skägget i brevlådan’ being directed your way. Translation? Being caught with your beard in the mailbox, a much less gruesome visual than the backstory to ‘red handed’, which originated in 15th century Scotland and centers around messy animal poachers and murderers who must have been so piss poor they couldn’t afford a bucket to clean up after themselves and wash their blood-stained hands.
Yes, we made mention of 25,000 idioms being in existence, but here’s just a sampling of other popular sayings:
‘Pull someone’s leg’
Pull it. Source: imgur
Everyone loves occasionally being joked with, but to ‘pull someone’s leg’ once referred to being robbed after some scoundrel tripped you on the street so their buddies could then swoop in and grab all your stuff.
‘Diehard’
Source: Wiki Commons
We all know that someone who loves a band, film (say like Diehard?) or an actor (Bruce Willis, anyone?) a little too much. But this word had its beginnings in the 1700s when men condemned to hang literally would not die when it was their turn on the gallows and the process went on for an excruciatingly long time. Much like the Die Hard film franchise you say?
‘Meeting a deadline’
Source: The Capture, the Prison Pen, and the Escape, by Willard Glazier, page 319. Engraving by H. C. Curtis.
Today it’s what Interesting Shit writers live their lives by, but during the Civil War prisoners were given certain physical boundaries within the walls of prison camps, marked by a line on the ground. Crossing that line meant being shot, a much crueler fate than having to deal with an anxious editor.
A slightly more spooky theory has the handbasket portion of the saying being linked to the vessel used to catch the severed heads falling from a guillotine (and the assumption the beheading casualty would be going to hell for whatever it was that brought them that fate).
‘Heard it through the grapevine’
Yes, it’s a Marvin Gaye song, but hearing something through the grapevine can be traced back to the advent of Samuel Morse’s telegraph system in the 1840s and the term ‘grapevine telegraph’ used to describe the new technology’s coiled wires.
Others think the ‘grapevine’ reference comes from the poles and wires erected to hang the thousands of miles of telegraph lines resembling the rig needed to train growing vines.
Today when it’s used ‘heard it through the grapevine’ generally means the forwarding of unconfirmed news or details, as was the case back in the 1860s when the same thing happened as people literally spread the word about the goings-on during the Civil War via telegraph.
|
We all know that when there is heavy precipitation and someone exclaims that it’s ‘raining cats and dogs’ we aren’t expecting to see furry critters plummeting to the earth.
This saying can be traced back to 17th or 18th century England, when heavy rainfall would flush out the numerous small animal carcasses usually scattered around larger hygienically-challenged cities like London and float them down the streets.
Those that find themselves in desperate shape on the financial front are sometimes described as being ‘piss poor’ or as not having a ‘pot to piss in’. What does pee have to do with financial stability, though?
In simpler days, urine was collected in a bucket (usually by those that had little to no money or income; in other words, piss poor) and then sold to tanneries to soak animal hides in to help remove hair and soften the skin. If you were so broke you couldn’t even afford the bucket, you officially did not have a pot to piss in.
In more recent times ‘pissed’ has also been the label attached to anyone who has imbibed one too many pints at the pub and become ‘piss drunk’. Patron leaves their drinking establishment of choice and staggers to the nearest alley (or street corner, phone booth, fire hydrant…you get the point) and proceeds to pee, regardless of whether there’s an animal hide underfoot or not. In some instances they might just pee themselves.
That same person would probably be described as being ‘three sheets to the wind’, a phrase borrowed from nautical terminology: the ropes attached to a ship’s sails to keep them in place could come loose in the wind and flutter about, causing the vessel to teeter back and forth like an inebriated crew member.
Idioms are global
English is not the only language to have idioms, of course. If you’re in Sweden and you’ve been caught red handed you might hear the phrase ‘skägget i brevlådan’ being directed your way. Translation?
|
yes
|
Paleoclimatology
|
Did volcanic activity trigger the Paleocene-Eocene Thermal Maximum?
|
yes_statement
|
"volcanic" "activity" "triggered" the paleocene"-"eocene thermal maximum.. the paleocene"-"eocene thermal maximum was "triggered" by "volcanic" "activity".
|
https://www.nature.com/articles/s41467-021-25536-0
|
Paleocene/Eocene carbon feedbacks triggered by volcanic activity ...
|
Abstract
The Paleocene–Eocene Thermal Maximum (PETM) was a period of geologically-rapid carbon release and global warming ~56 million years ago. Although modelling, outcrop and proxy records suggest volcanic carbon release occurred, it has not yet been possible to identify the PETM trigger, or if multiple reservoirs of carbon were involved. Here we report elevated levels of mercury relative to organic carbon—a proxy for volcanism—directly preceding and within the early PETM from two North Sea sedimentary cores, signifying pulsed volcanism from the North Atlantic Igneous Province likely provided the trigger and subsequently sustained elevated CO2. However, the PETM onset coincides with a mercury low, suggesting at least one other carbon reservoir released significant greenhouse gases in response to initial warming. Our results support the existence of ‘tipping points’ in the Earth system, which can trigger release of additional carbon reservoirs and drive Earth’s climate into a hotter state.
Introduction
The relative geological rapidity of warming and CO2 release at the Paleocene–Eocene Thermal Maximum (PETM), and the potential activation of feedbacks between warming and organic carbon reservoirs ~56 million years ago1, have relevance to understanding future Earth system responses to ongoing anthropogenic perturbation2. However, the major sources of carbon and the causal mechanisms triggering its release have remained under debate3,4,5,6,7, stymying our ability to draw firm inferences relevant to the future. Plausible carbon sources include peatlands and permafrost3, methane hydrates8, sedimentary marine organic matter9,10, and the mantle5, while proposed hypotheses for the PETM trigger include changes in orbital insolation1,3,11, volcanic activity of the North Atlantic Igneous Province (NAIP, Fig. 1)12,13, and an extra-terrestrial impact14. The problem has been in de-convolving the possible multiple different sources of carbon that contributed to the sharp decline in carbon isotope (δ13C) values observed in the sedimentary record marking the PETM onset and importantly, separating triggers from carbon-climate feedbacks. For example, a recent study5 concluded that volcanically produced carbon was the main source to explain proxy records of ocean pH across the PETM, but was unable to resolve the relative timing and contribution of different kinds of volcanic and non-volcanic carbon. Other studies have shown that NAIP volcanic rocks (e.g., lava flows, ash beds and sills) were formed approximately around the time of the PETM4,10,12, but limited dating of individual local geological features makes it difficult to conclude confidently whether volcanism provided the trigger. Here, we investigate new sediment cores from the North Sea for both high resolution δ13C stratigraphy and sedimentary mercury (Hg) as a proxy for volcanic emissions.
Fig. 1: Location maps of the North Atlantic Igneous Province (NAIP) and sediment cores sites analysed in this study.
The simplified NAIP main map shows the estimated ranges of its various components61. ‘Seaward dipping reflectors’ are well-defined seismic reflectors beneath the uppermost basalt, interpreted as large subaerial sheet lava flows associated with rifting61. Other lava flows are thought to be a combination of subaerial and submarine, and sills are interpreted to have intruded into the upper crust4,12,61. The inset map is a Mollweide projection of modern continents (lines) on a palaeogeographic reconstruction, generated from (ref. 62), of continental plates (grey) centred at 56 Ma.
Variation in the concentration of Hg in organic carbon-rich marine sediments has shown promise as a direct means of elucidating regional-scale volcanic activity, such as that associated with the PETM7,15,16, as well as earlier large igneous provinces17. Today, approximately 20–40% of natural global Hg emissions come from the Earth’s crust via volcanism18. Released to the atmosphere, the ~0.5–1 year residence time of Hg allows it to be mixed globally, and when subsequently deposited to the ocean or to peatlands its preservation is predominantly via association with organic matter19,20,21,22,23. As such, normalised to total organic carbon (TOC), Hg has previously been used as a proxy for global volcanism17. Mercury can also be released hydrothermally, with modern hydrothermal vents being associated with locally elevated Hg concentrations in water, sediments, and biota24. In the case of the PETM, Hg released via hydrothermal vents4,12 associated with the NAIP may not necessarily have entered the atmosphere but may instead be detectable in NAIP-proximal ocean sediments7, where the modern residence time of Hg in the open ocean is decades to centuries, and on shelves is ~4 months, considerably less than the residence time of shelf water25.
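As a minimal numerical sketch of this TOC normalisation (the sample values and the low-TOC cutoff below are illustrative assumptions, not data from this study):

```python
# Illustrative Hg/TOC normalisation. Hg is in parts per billion (ppb);
# TOC is in weight percent (wt%). The 0.2 wt% cutoff is a hypothetical
# threshold below which the ratio becomes too uncertain to use.

def hg_toc_ratio(hg_ppb, toc_wt_pct, toc_cutoff=0.2):
    """Return Hg/TOC (ppb per wt%), or None where TOC is too low
    for a robust ratio (low-TOC samples inflate the uncertainty)."""
    if toc_wt_pct < toc_cutoff:
        return None
    return hg_ppb / toc_wt_pct

# Hypothetical (Hg ppb, TOC wt%) samples:
samples = [(60.0, 1.5), (450.0, 2.0), (30.0, 0.1)]
ratios = [hg_toc_ratio(hg, toc) for hg, toc in samples]
print(ratios)  # [40.0, 225.0, None]; third sample rejected: TOC below cutoff
```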
To constrain the timing of pulsed volcanism across the PETM, and hence help elucidate its role as a trigger, we present the first high-resolution records of Hg from sediment cores 22/10a−4 (ref. 26) and E−8X (~100–500 years per sample; Figs. 1 and 2). Although modern dissolved Hg input from freshwater runoff to oceans is substantial (27 ± 13 Mmol year−1)27 compared to atmospheric deposition (19.5 ± 9.5 Mmol year−1)28, isotopic studies have shown terrestrial Hg to be constrained to coastal and nearshore shelf environments29, whilst our core sites lie >200 km offshore in the central North Sea. Well sites 22/10a−4 and E−8X are located ~400 km and ~600 km, respectively, from the NAIP volcanic centres (Fig. 1). These sediment cores contain elevated levels of organic carbon suitable for this analysis7,30 and are from deep cored material that has not undergone weathering, a process that has been shown to alter the Hg signal in some outcrop samples30. To constrain the PETM onset in detail, we also present a new and exceptionally well-defined δ13C and TOC record for core E−8X and use this to correlate between well sites and to create a consistent age model with orbitally-tuned Svalbard core BH9/05 (Fig. 3 and “Methods”).
Records are shown against depth in m core depth (below oil rig floor ‘Kelly bushing’ for 22/10a-4 and E-8X). a–d Well site 22/10a-4 (North Sea). e–g Well site E−8X (North Sea). h–i Core BH9/05 (Svalbard). Bulk sediment total organic carbon δ13CTOC is reported as ‰ VPDB, Vienna PeeDee Belemnite. Total organic carbon (TOC) is reported as % of the bulk weight. Hg is reported as parts per billion (ppb). The Hg/TOC envelope reflects analytical error, illustrating higher uncertainty in samples with lower TOC. The 22/10a-4 lithological (lith.) log, δ13CTOC, and TOC are from (ref. 26). The BH9/05 δ13CTOC and age model are from (ref. 40), and Hg data are from (ref. 7). The position of the Paleocene/Eocene boundary, defined as the onset of the PETM, is shown as a horizontal dashed line.
The δ13C of both organic carbon (δ13Corg) and inorganic carbonate (δ13Ccarbonate) from North Sea well site E−8X (this study) and Bass River38, are correlated to Svalbard core BH9/05 (ref. 40) based on the overall shape of the records, with particular emphasis on the carbon isotope excursion (CIE) inflection points during the rapid onset and gradual recovery phases. The relative age model is based on two proposed solutions for cyclostratigraphy of core BH9/05 (ref. 40). Bass River core depth in metres below the surface (mbs).
Here, we document numerous peaks in Hg and Hg/TOC at both sites E−8X and 22/10a−4 above background, which occur both immediately before and within the PETM. By comparing with records elsewhere, we interpret these as evidence of episodic NAIP sill emplacement releasing thermogenic methane and Hg into ocean water via hydrothermal venting. Our Hg records provide evidence that the onset of the PETM was triggered by volcanic activity, but we find at least one other carbon reservoir must have subsequently been released as Hg and Hg/TOC decline in the second part of the PETM onset.
Results
Mercury pulses over the PETM
Both cores E−8X and 22/10a−4 show numerous high frequency and high amplitude fluctuations in Hg and Hg/TOC above background levels of ~50 ppb and ~40 ppb/wt%, respectively (Figs. 2 and 4). Modern oceanic sedimentary Hg has not been comprehensively constrained, and in many settings is thought to be contaminated by anthropogenic input22,31,32,33. Recent work has shown that Mediterranean Sea seafloor sediments from ~0.2 to 4 km water depth contain average Hg concentrations of 66 ppb and Hg/TOC values of 133 ppb/wt%, with isotopic modelling suggesting ~75% of the Hg may be from urban or industrial pollution33. Perhaps more comparable to the Paleogene North Sea, Baltic Sea sediments show average Hg concentrations of 20–40 ppb, and Hg/TOC of ~15 ppb/wt%, for preindustrial sediments32. Average shale Hg over the Phanerozoic has been estimated as 62 ppb, and average Hg/TOC as 71.9 ppb/wt% (ref. 34), although these datasets are skewed towards studies of Large Igneous Province volcanism and may therefore be above the overall average shale background. In contrast, our records show Hg spikes up to values comparable to modern contaminated sediments of >1000 ppb (ref. 35), and Hg/TOC spikes of 200–500 ppb/wt%. Looking further afield to Svalbard (Fig. 1), marginal marine core BH9/05 has lower concentrations of Hg (ref. 7), but also shows numerous Hg/TOC pulses (Fig. 2). Although the Hg pulses within these cores may not quantitatively constrain the extent of volcanism, as eruptions may produce varying quantities of Hg (ref. 18) and sedimentary transport/preservation effects might also come into play, submarine eruptions would likely produce predominantly local Hg enrichments36. Marine anoxia alone cannot account for the Hg, as the increased sedimentary drawdown it causes would have rapidly depleted the water column if dissolved Hg were not continuously replenished17.
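The identification of pulses against these backgrounds can be sketched numerically; the 2x exceedance factor and the sample values below are illustrative assumptions, not criteria used in this study:

```python
# Flag Hg/TOC "pulses" as samples exceeding the stated backgrounds
# (~50 ppb Hg; ~40 ppb/wt% Hg/TOC). The 2x threshold factor is an
# illustrative assumption for this sketch.

HG_BACKGROUND_PPB = 50.0
HG_TOC_BACKGROUND = 40.0  # ppb per wt%

def is_pulse(hg_ppb, toc_wt_pct, factor=2.0):
    """True when both Hg and Hg/TOC exceed `factor` times background."""
    hg_toc = hg_ppb / toc_wt_pct
    return hg_ppb > factor * HG_BACKGROUND_PPB and hg_toc > factor * HG_TOC_BACKGROUND

print(is_pulse(1200.0, 3.0))  # True: comparable to the >1000 ppb spikes
print(is_pulse(45.0, 1.5))    # False: within background
```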
Neither can Hg released from permafrost melting account for the observed (up to a factor of 4) Hg increases: releasing even a modern permafrost inventory of 330–1300 Gg of Hg (ref. 23) over 2 kyear (our estimated approximate Hg pulse duration) equates to a release rate of no more than the modern volcanic background rate of 700 Mg Hg/year (ref. 18), and such an at-most doubling of the background flux is unlikely to register clearly in the composition of sediments given other causes of background fluctuations. Finally, our records are characterised by multiple peaks of Hg/TOC, inconsistent with a single bolide impact or permafrost melting event16. Therefore, our results suggest that pulsed volcanic Hg releases occurred sporadically throughout the study interval7,37, and in particular around the PETM onset.
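The permafrost release-rate bound quoted above can be verified directly from the figures given (330–1300 Gg of Hg over ~2 kyear versus a ~700 Mg year−1 modern volcanic background):

```python
# Worked check of the permafrost argument: even the upper modern
# permafrost Hg inventory released over ~2 kyear stays at or below
# the modern volcanic background rate of ~700 Mg Hg/year.

def release_rate_mg_per_yr(inventory_gg, duration_yr):
    """Mean release rate in Mg Hg/year (1 Gg = 1000 Mg)."""
    return inventory_gg * 1000.0 / duration_yr

for gg in (330.0, 1300.0):
    rate = release_rate_mg_per_yr(gg, 2000.0)
    print(f"{gg:.0f} Gg over 2 kyear -> {rate:.0f} Mg Hg/yr")
# Upper bound: 650 Mg/yr, below the ~700 Mg/yr modern volcanic background.
```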
Fig. 4: Sedimentary Hg (ppb) against TOC (wt%) for well sites E−8X and 22/10a−4 and other published records.
a Line is the linear regression of E−8X and 22/10a−4 datasets combined (R2 = 0.22). Samples immediately before, during and after the Paleocene–Eocene Thermal Maximum (PETM) onset are indicated as red symbols (~2024.6–2026 m for E−8X; ~2608–2615 m for 22/10a−4), and outside of that area with grey symbols. Shaded 95% ellipses show the changing relationship between Hg and TOC over the studied interval, with many samples within the PETM onset (red symbols) exhibiting excess Hg with a steeper gradient to TOC than samples outside of this interval (grey symbols). Well site 22/10a−4 appears to have experienced greater excess Hg than E−8X, possibly as it was closer to the North Atlantic Igneous Province source. The dashed line shows the value of the average Phanerozoic bulk shale34. b All data from E−8X and 22/10a−4 (red symbols) plotted with data from Svalbard BH9/05, Denmark Fur formation, Lomonosov Ridge and Bass River (grey symbols)7.
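The regression described in panel (a) can be reproduced with ordinary least squares; the data points below are synthetic placeholders standing in for the measured Hg and TOC values (the study's combined E−8X and 22/10a−4 fit gave R2 = 0.22):

```python
# Ordinary least-squares fit of Hg (ppb) against TOC (wt%), with the
# coefficient of determination (R^2). Data are synthetic placeholders.

def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

toc = [0.5, 1.0, 2.0, 3.0, 4.0]     # hypothetical TOC, wt%
hg = [40.0, 90.0, 150.0, 500.0, 320.0]  # hypothetical Hg, ppb
slope, intercept, r2 = linear_fit(toc, hg)
print(round(slope, 1), round(intercept, 1), round(r2, 2))
```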
Core 22/10a−4 has the longer pre-PETM section (Fig. 2c) and records a probable early phase of volcanism with two elevated Hg and Hg/TOC pulses between 2630 and 2625 m. Although we cannot with any certainty correlate these with other sites to assess if the feature was of local or regional scale, there are early pulses at cores E−8X and BH9/05 (Fig. 2g and i). Pulses in sedimentary Hg and Hg/TOC increase in the lead up to the PETM onset, in particular at E−8X (~2025.6 m) and BH9/05 (~535 m), but also at 22/10a−4 (~2617 m). At the onset of the carbon isotope excursion (CIE; defining the onset of the PETM), all three sites show a pulse in Hg and Hg/TOC (dashed line marked ‘CIE onset’ in Fig. 2), and then further pulses encompassing the CIE onset and main phase. Hg/TOC pulses peak in 22/10a−4 during the earliest CIE phase. A broader regional—rather than local—volcanic source for these pulses is indicated by their presence in E−8X and 22/10a−4 ~350 km apart, and in BH9/05 at Svalbard (although the spikes are less pronounced). Hg/TOC pulses encompass the onset of laminated sediment (likely from the removal of bioturbating organisms due to inhospitable bottom water conditions), and surface ocean eutrophication/lower salinity of the North Sea inferred from dinoflagellate cysts26 (Fig. 2d), although they do not correlate with these proxies. Hg/TOC pulses decline in frequency and amplitude at both North Sea sites as δ13C begins to recover, approximately coinciding with the end of deposition of laminated sediments and shift away from eutrophic/low salinity surface water conditions at 22/10a−4 (~2608 m), and a shift towards lower TOC values at E−8X (~2024.5 m). Further Hg pulses occur at E−8X and BH9/05 and then decline further as δ13C continues to fully recover.
Carbon cycle and mercury changes
The timing of Hg/TOC pulses relative to the evolution of δ13C is consistent with volcanism triggering PETM carbon release and also contributing to sustaining the CIE during its main phase4,12. This is evidenced by the Hg and Hg/TOC peaks immediately before and during the initial negative CIE onset, and later within the main phase of the PETM before δ13C begins to increase again (Fig. 2). We assume that our proxies for surface ocean CO2 and volcanism (paired δ13C and Hg/TOC data, respectively) are essentially synchronous at our (continuous at E−8X) sampling resolution (>100 years per sample). This is because the oceanic residence time of shelf Hg, and the time it would take volcanic CO2 to reach the atmosphere, mix, and be sequestered into ocean sediment via phytoplankton, are both months to years. These Hg/TOC pulses are unlikely to have been caused by changing sedimentation rates, as reporting Hg as a ratio to TOC aims to remove the influence of changing background sedimentation. However, changing delivery and source of sedimentary TOC (transportation and reworking) can modify Hg/TOC trends. Well sites E−8X and 22/10a−4 were within the deepest part of the North Sea Basin ~200 km from land, at palaeo-water depths of up to ~500 m, well below storm wave base26. Well site E−8X is exceptionally useful for Hg analysis as we see no evidence for highly variable sedimentation rates in its high-TOC, fine-grained claystone. The location and sedimentation suggest the carbon was at least partly of marine origin, as the δ13C record shows a similar CIE shift to that measured on dinoflagellate cysts at New Jersey38, and shows no smoothing that might typify a transported terrestrial and/or reworked carbon setting39.
Discussion
Records from well sites E−8X and 22/10a−4 show broad similarities in Hg/TOC, but transported carbon could partly explain why the Hg/TOC signals from Svalbard7 are somewhat different (Fig. 2). Our δ13C correlation illustrates that core BH9/05 appears to have a more extended CIE onset40 than at E−8X, Fur Island (Denmark), or Bass River (New Jersey) (Figs. 3 and 5). This is most likely due to its marginal marine nature and proximity to land (evidenced by palynology changes40), which is argued to have exposed that site to significant reworked terrestrial carbon that could have residence times of thousands of years39, possibly muting some Hg signals. However, records from Svalbard and Denmark are useful for constraining the extent of Hg signals, and broadly show a baseline decrease in Hg/TOC and Hg with distance from the NAIP (Figs. 4 and 5). Together, these signals imply that volcanically-sourced Hg during the PETM may have been largely released into ocean water proximal to the NAIP, supporting the hypothesis of hydrothermal venting of thermogenic carbon from volcanically-emplaced sills4,12. Although the most NAIP-proximal site, Grane (Fig. 1), has by far the highest reported Hg values, it has a poorly defined carbon isotope stratigraphy, and its exceptionally high Hg values may be related to local hydrocarbon migration and overprinting7. Similarly, although the most distal PETM Hg records exist from outcrops in Egypt15, their interpretation is hampered by low TOC and severe weathering and dissolution that has been shown to alter primary Hg signals30. The lowest observed Hg comes from distal PETM sites at New Jersey7,16 and Blake Nose16, consistent with NAIP activity releasing Hg largely into proximal sediments, although TOC values are often below the analytical precision required for robust Hg/TOC assessment7.
Fig. 5: Summary of Hg and Hg/total organic carbon (TOC) data from various sites at the onset of the PETM.
North Sea well sites 22/10a−4 and E−8X (this study) generally display higher values than Fur and Svalbard7. Carbon isotope excursion (CIE) step 2 is shown as a dashed line and does not co-occur with a Hg or Hg/TOC spike in the sections. Core 22/10a−4 has previously been interpreted to have been partially impacted by transported carbon26. Bulk sediment δ13CTOC is reported as ‰ VPDB, Vienna PeeDee Belemnite.
The high resolution and clear carbon isotope signals from core E-8X allow examination of the structure of the CIE onset, where the negative δ13C shift takes place in two steps of persistent 1.5–2‰ decrease (CIE step 1 and CIE step 2), each lasting in the region of 1.5–2 kyear (Fig. 6c). Although CIE step 1 is associated with a Hg/TOC pulse above the background (Fig. 6a), CIE step 2 is not, even though step 2 represents the largest change in δ13C from presumed atmosphere-ocean carbon release. The onset of the Hg/TOC pulse immediately before CIE step 1 (~2025.42 m; just before time 0, Fig. 6a) coincides with a slight increase in Hg (from 99 to 167 ppb) and a slight reduction in TOC (from 1.1 to 0.8%). Although the elevated Hg/TOC values are therefore partly driven by reducing TOC as well as increasing Hg, the values are still well above analytical uncertainty (red shading in Fig. 6a). It is unlikely that poor preservation of TOC at this horizon caused the initial pulse, and Hg/TOC remains high into the start of step 1 when TOC increases to ~2%, supporting the interpretation that this pre-step 1 Hg/TOC elevation at least partly reflects elevated volcanic activity rather than being entirely explainable by changes in TOC preservation, delivery, or the development of anoxia. Indeed, Hg/TOC pulses continue to interrupt the record and are not systematically coupled with TOC. We therefore suggest CO2 emissions driving CIE step 1 were at least partly from hydrothermal venting and volcanic sill emplacement (given the elevated Hg/TOC in the North Sea), although we acknowledge our records cannot discount potential additional sources of CO2. The latter part of CIE step 1 coincides with increased Hg (Fig. 6a), before both Hg and Hg/TOC decline into CIE step 2, potentially evidencing a lull in volcanic activity. This drop in Hg/TOC during CIE step 2 can also be seen in BH9/05 (Svalbard), Fur Island (Denmark), and possibly 22/10a−4 (Fig. 5), although the latter site contains some noise in the δ13C record, making relationships less clear. Some records are thought to be influenced by sediment reworking (e.g., Svalbard and 22/10a−4) that could reduce Hg/TOC signals, but E−8X shows no sign of reworking, with a clear and rapid CIE onset and a central, deep North Sea Basin location. In Svalbard, there is a general increase in the Hg/TOC baseline over the CIE onset, and at Fur sedimentation rates significantly increase during the PETM41 such that Hg deposition rates likely increase even though concentrations do not. However, there is no Hg/TOC evidence for any substantial increase in volcanism during CIE step 2 (unlike step 1). We note that all other E-8X instances of reduced Hg/TOC in this interval (vertical white bars in Fig. 6) coincide with increasing δ13C (and presumed CO2 drawdown), and increases (pink bars in Fig. 6) coincide with decreasing δ13C (and presumed CO2 releases), although the changes outside of the CIE steps are subtle.
Fig. 6: Proxies for volcanism, carbon release and temperature in the time domain; thousands of years from the start of the PETM carbon isotope excursion (CIE).
Although we do not rule out background volcanic activity occurring during CIE step 2, if volcanism had been voluminous enough to produce the required CO2, annual release rates would have been an order of magnitude above modern volcanic CO2 emissions, and would therefore likely have raised annual Hg deposition to at least an order of magnitude above modern levels18 and been detectable in our samples. Alternatively, if sills were intruded into organic-rich mudrocks such as those that underlay much of the NAIP12, we might expect even higher Hg deposition in the North Sea sediments. Therefore, this short-lived reduction in Hg/TOC during CIE step 2 is important, as it points to a secondary phase of carbon release from a reservoir not directly linked to Hg emissions, and hence likely not to volcanism. The main possibilities for such a feedback reservoir include methane hydrates8 and permafrost carbon3.
While other pulses of Hg/TOC at E−8X do not correlate with such a large decrease in δ13C as CIE step 1, they do consistently co-occur with modest decreases in δ13C both before and after the CIE onset (pink bars in Fig. 6) which may signify thermogenic CO2 releases. Interestingly, previous studies including at Bass River have documented precursor environmental changes to the PETM which include sea surface temperature (SST) increase38,42. We correlate E−8X δ13C with Bass River dinoflagellate cyst δ13CDINO, which also shows the two-step CIE onset (Fig. 6c), and find that this early warming (Fig. 6d) corresponds with Hg/TOC evidence for volcanism which we speculate may have been its cause. We note that SST warming began even earlier, at about 10 kyear before the CIE onset (Fig. 6d), which also correlates with a Hg and Hg/TOC spike in E−8X although we recognize that correlation between Bass River and the North Sea outside of the CIE onset interval is less certain. SST records from nearby Fur43,44 are slightly more complicated to interpret due to occasionally high branched and isoprenoid tetraether (BIT) index values and changing sedimentation rates making it harder to correlate, but do show a possible fall stratigraphically below the CIE that has been suggested to reflect local cooling from volcanism44.
The coincidence of the largest global shift in δ13C (CIE step 2; Fig. 6) with reduced volcanic activity (suggested by reduced Hg/TOC; Figs. 5 and 6) points to the activation of a secondary, unstable/labile carbon reservoir that was depleted in response to initial warming, possibly from NAIP volcanism. Although the Eocene is not directly analogous to Earth’s current markedly cooler climate state, our records are consistent with a tipping point whereby an additional warming-driven carbon release pushed the world into the PETM ‘Hothouse Earth’. Comparable processes have been predicted for the future if significant mitigations are not carried out45. We acknowledge that numerous uncertainties remain in the construction of our age model, and modelling is now needed to assess the likely source of the secondary carbon release and to estimate the amount of warming and volume of greenhouse gases emitted. Additional records and proxy constraints are needed to confirm our diagnosed transition from volcanism-dominated to climate feedback-dominated carbon release at the onset of the PETM, and the global warming threshold at which it occurred. However, our work highlights the utility of the palaeo record in better understanding the existence and sensitivity of carbon-climate feedbacks and potential tipping points.
Methods
Core handling and sampling
Core from well site E-8X (55°38′13.42″N; 04°59′11.96″E; Supplementary Fig. S1) was drilled in 1994 for hydrocarbon exploration. Cores 3 and 4 represent most of the Paleocene succession (the Våle Formation, Lista Formation and the lower Sele Formation; Supplementary Fig. S2). Core 3 was taken from 2021.065 m (below Kelly bushing) downwards (Supplementary Fig. S2) and cut into ~1 m sections (here termed Boxes). The upper part (Boxes 1–7) was split for the first time in 2013, after consolidation in foam and plastic tubing, with a split offset from the maximum diameter of the core. The larger part of the core (about two-thirds) was assigned as the archive, photographed (Fig. 2), and logged (Supplementary Fig. S2). The smaller part of the core was cut into two. One of these slices was continuously sampled at 1–2 cm intervals (dependent on the consolidation of the rock) and is now completely depleted. Sampling was carried out with a clean metal spatula, and the material was placed in labelled plastic sample bags. All analyses were performed on subsamples. Samples were labelled based on the depth from the top of each box (0 cm = top of a box). All samples were collected in a responsible manner in accordance with local laws.
Sedimentology
Sedimentological logs for well site E−8X (water depth 44 m) are presented in Supplementary Fig. S2 for Core 3, Boxes 1–11, 2030–2021 m, which represents the upper part of the Lista Formation (i.e., the top of the Ve Member and the Bue Member), and the lower part of the Sele Formation. The following descriptions add details to the more general lithological descriptions already described46.
For the Ve Member of the Lista Formation, only the uppermost 1.5 m of the Ve Member (2030.03–2029.55 m) is shown on the sedimentological log and is dominated by red, homogeneous mudstone. The lithology is consolidated mud, compacted by the overlying 2 km of sediments. The Ve Member is rich in smectite46, the preservation of which also indicates the mud was never buried to temperatures where smectite recrystallizes (>90 °C). The dominant facies of the Ve Member is homogeneous red claystone with small (mm-scale) white concretions of unknown mineralogy. Small green patches are locally present in the dark red clay. Two 15–20 cm thick beds with graded bedding, and a small slump fold near the base, occur below 2030 m. The muddy matrix contains quartz sand and white spheres (<1 mm in diameter), tentatively interpreted as redeposited concretions. The graded beds are tentatively interpreted as gravity flows. The boundary towards the Bue Member is transitional, and the uppermost 20 cm of the Ve Member are green or variegated, and weakly laminated. The Ve Member is correlated with the Holmehus Formation, onshore Denmark, and the latter has been interpreted as fully marine clay, deposited very slowly at water depths of >500 m (ref. 47). The slow sedimentation rates may partly explain the oxidation of the marine clay.
The Bue Member of the Lista Formation is represented by the interval 2029.55–2025.48 m (Supplementary Fig. S2). The basal part of the Bue Member is characterised by variegated clay, with an irregular transition from the dark red clay of the underlying Ve Member, through pale red and pale greenish clay, to pale greenish clay at the top. The colour changes follow neither weak lamination nor distinct burrows. The TOC shows a distinct increase from 1% (2025.41 m) to 4% (2025.31 m), which marks the boundary between the Lista and Sele Formations. Two sedimentary facies are characteristic of the Bue Member in E−8X. The lower part is dominated by pale green to greyish green clayey mudstone without laminations but occasionally with trace fossils (Zoophycos sp. and Chondrites sp.). The Zoophycos burrows are ~2 mm in diameter, and the Chondrites burrows are ~1 mm. The ichnogenera Chondrites and Zoophycos are common in depositional environments with dysoxic conditions48. These trace fossils are known to form a deep tier below the sea floor48,49,50,51, and it is possible that other, shallow-burrowing organisms produced the nearly homogeneous clayey mudstone. Pyrite is observed in the bioturbated sediment, either as small concretions or as pyritic laminae. In the upper part of the Bue Member (>2026.85 m), the lithology changes gradually to a greenish-black mudstone where neither laminations nor burrows are observed. Upwards, the mudstone becomes more greyish and locally fissile. The mudstone has no visible pyrite and no visible trace fossils. Shallow burrows in a ‘soup ground’ sediment may not be preserved as distinct trace fossils51. The mudstone contains few layers of volcanic ash. Towards the Sele Formation, the Bue Member is a weakly laminated, greyish black claystone or clayey mudstone, locally slightly greenish with jarosite. The TOC content increases in these upper facies (Fig. 2, Box 6), supporting the interpretation of a change from dysoxic to nearly anoxic conditions. 
The weak lamination suggests that only a sparse benthic fauna may have existed during the deposition of the upper part of the Bue Member. The presence of the ichnogenera Chondrites and Zoophycos suggests it may have been deposited in a dysoxic environment and that only a relatively small drop in dissolved oxygen was required to obliterate most of the benthos and thus preserve the lamination in the sediment.
The Sele Formation is ~11 m thick in E−8X, and its lower 4.3 m is represented by the top of Core 3 (2025.3–2021 m) (Supplementary Fig. S2). The Sele Formation is dominated by dark, brownish grey to black mudstone with mm and sub-mm scale laminae interbedded with very few and thin layers of volcanic ash. The mudstone locally contains laminae enriched in pyrite. The brownish grey, laminated mudstone is interbedded with a cream coloured mudstone with 0.1 mm laminae (2023.18–2024.58 m). The paler beds contain more silt-sized particles than the background mudstone. The boundary to the underlying Lista Formation (Bue Member) is sharp. This is also supported by the TOC analyses, which show a dramatic rise in TOC across the boundary (Fig. 2). Upwards through the Sele Formation, TOC decreases. The return to normal marine values is not observed in the E−8X core. The very thin, parallel and continuous laminae in the Sele Formation indicate a complete absence of benthic fauna. Anoxic conditions, with bottom water containing lethal H2S, would preclude benthic fauna, and the lack of dissolved oxygen would lower the rate of bacterial degradation of organic particles, and thus favour the development of an organic-rich mudstone. Sulphate-reducing bacteria might locally produce relatively large amounts of pyrite. Fine bodies of framboidal pyrite may be present in the mudstone. The Sele Formation is interpreted as deposited under anoxic conditions. This is also the case for the diatomaceous Fur Formation in onshore Denmark52.
The laminated, anoxic mudstones of the Sele Formation are also known from other wells in the North Sea46. Onshore Denmark, dark grey, laminated clay (the Stolle Klint Clay) was deposited from the onset of PETM, which demonstrates that anoxic conditions were regional in the North Sea Basin during the PETM41,53.
Sedimentation rates may be estimated from laminated sediments if the stacked laminae are nearly identical and deposited annually (as varves). Laminated sediment from 22/10a-4 (North Sea) and the Stolle Klint Clay (Denmark) has been studied in thin section and suggested to have been deposited annually26,53. Core 22/10a−4 was found to include couplets of pale and dark laminae averaging ~0.08 mm thick, and the Stolle Klint Clay ~0.25 mm-thick couplets. If annual, the ~3.5 m-thick laminated part of 22/10a−4 would have been deposited in ~40 kyear, and the 24.4 m-thick Stolle Klint Clay7 deposited in ~100 kyear.
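These varve-based duration estimates follow arithmetically from the reported couplet thicknesses:

```python
# Laminated thickness divided by couplet thickness gives the number of
# (assumed annual) couplets, i.e. the depositional duration in years.

def duration_kyear(thickness_m, couplet_mm):
    """Duration in kyear for a laminated interval of annual couplets."""
    n_couplets = thickness_m * 1000.0 / couplet_mm  # years, if annual
    return n_couplets / 1000.0

print(duration_kyear(3.5, 0.08))   # 22/10a-4: ~44 kyear (~40 kyear as stated)
print(duration_kyear(24.4, 0.25))  # Stolle Klint Clay: ~98 kyear (~100 kyear)
```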
Depth and age scales
There are three depth scales for E−8X (all m below Kelly bushing). One is taken from the petrophysical logs (not shown). The depth to the top of Core 3, Box 1, was measured as 6630′ (2020.82 m) at the time of drilling, and the base of Box 11 had a depth of 6661′6″ (2030.43 m). Each Box was subsequently given a depth in feet based on the length of the measured core (left scale in Supplementary Fig. S2). During storage, the cores expanded by up to 15%. Therefore, a third depth scale was constructed (in 2013 and updated in 2015) based on measuring the current core lengths, which is here used in all figures and tables. The top of the core in Box 1 was taken as 2021.065 m, and all depths are appended below this in measured m of the current cores in Boxes 1–11 (right-hand scale in Supplementary Fig. S2). Deviation from the original core depth increases downwards due to the accumulation of expanded core.
The age model for E−8X is constructed with biostratigraphy and carbon isotope stratigraphy. Axiodinium (Apectodinium) augustum, a planktonic dinoflagellate cyst species used as the diagnostic marker for the PETM in the North Sea and Arctic12,26,43,54, is present within the CIE of sediment core E−8X (Extended Data Table 1). The CIE main phase has been previously estimated to have lasted ~90 kyear (ref. 55) or ~135 kyear (ref. 56). The age models we use for Fig. 6 are those of Svalbard core BH9/05 ‘options A and B’ (Fig. 3), constructed by cyclostratigraphic correlation40. We correlate the E−8X record (and other records used in Fig. 6) to BH9/05 using the δ13C onset of the ‘main phase’ of the CIE, and the end of the ‘recovery phase 1’ along with the overall shape (Fig. 3). We favour option A as it provides an overall duration of the CIE of ~100 kyear, similar to the age model of (ref. 55). To correlate 22/10a−4 with E−8X—a site some 350 km distant—we use a tie point at the onset of the CIE, and another at the beginning of the CIE main phase (dashed lines in Fig. 2). This coincides with an increase in TOC and onset of laminations at both sites, which is consistent with an anoxic water column from warmer water, elevated carbon flux, and a shift towards low salinity/eutrophic dinoflagellate cysts in 22/10a−4 (ref. 26) (Fig. 2d). The average sedimentation rates for E−8X (assuming linear sedimentation rate) are 3 cm kyear−1 (BH9/05 option A) or 2.1 cm kyear−1 (BH9/05 option B).
Our records from E−8X indicate that the CIE onset took place over about 10 cm, which could represent as little as 3–5 kyear. The largest shift in δ13C occurred over 6 cm, or 2–3 kyear. Although the CIE onset has previously been estimated as lasting ~20 kyear from modelling of Svalbard core BH9/05, the assumption that BH9/05 records the true onset length has been challenged39, as that location was marginal marine and proximal to land57, and possibly impacted by terrestrial organic carbon lag times.
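These thickness-to-duration conversions follow directly from the average sedimentation rates given by the age model (3 and 2.1 cm kyear−1 for the two BH9/05 options):

```python
# Convert onset thickness (cm) to duration (kyear) at the two age-model
# sedimentation rates, assuming linear sedimentation.

def duration_kyr(thickness_cm, sed_rate_cm_per_kyr):
    return thickness_cm / sed_rate_cm_per_kyr

for rate in (3.0, 2.1):  # cm/kyear, BH9/05 options A and B
    print(round(duration_kyr(10.0, rate), 1), round(duration_kyr(6.0, rate), 1))
# 10 cm -> ~3.3-4.8 kyear (full CIE onset); 6 cm -> ~2.0-2.9 kyear (largest shift)
```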
TOC and carbon isotopes
We analysed TOC and δ13CTOC at high resolution (~1 cm) from sediment core E−8X (Figs. 2e, f), split for the first time in 2013. TOC analysis was performed on bulk samples by combustion in a Costech ECS4010 elemental analyser (EA) calibrated against an Acetanilide standard (Supplementary Data 1). Replicate analysis of well-mixed samples indicated a precision of better than ±0.1%. Carbon isotope analysis was carried out on bulk rock samples (Supplementary Data 1) by crushing the rock fragments using a ball mill. Any calcite was removed by placing the samples in 5% HCl overnight before rinsing with deionized water and drying down. 13C/12C analyses were performed by combustion in a Costech EA on-line to a VG TripleTrap and Optima dual-inlet mass spectrometer, with δ13C values calculated on the VPDB scale using within-run laboratory standards calibrated against NBS-18, NBS-19 and NBS-22. Replicate 13C/12C analyses were carried out on the section, and the mean standard deviation of the replicate analyses is <0.4‰. The δ13C values show low scatter, and range between −31.7 and −24.7‰ (Supplementary Data 1). A proposed phytoplankton source for the organic carbon is consistent with the central North Sea position of E−8X, the fine-grained sedimentology, and a similar-magnitude CIE when compared with δ13C measured on dinoflagellate cysts (δ13CDINO) at Bass River (Fig. 3).
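For reference, the δ13C values above use standard delta notation: the per-mil deviation of a sample's 13C/12C ratio from the VPDB standard. A minimal sketch (R_VPDB is the commonly quoted standard ratio, an assumption here since the text does not state it; the sample ratio is illustrative):

```python
R_VPDB = 0.011180  # commonly quoted 13C/12C ratio of Vienna PeeDee Belemnite

def delta13C(r_sample, r_standard=R_VPDB):
    """Per-mil deviation of a sample's 13C/12C ratio from the VPDB standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample depleted in 13C by 2.8% relative to VPDB:
print(round(delta13C(R_VPDB * 0.972), 1))  # -28.0 per mil, within the reported range
```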
Sedimentary Hg
Hg analysis was carried out on bulk sediment samples using an RA-915 Portable Mercury Analyzer with PYRO-915 Pyrolyzer (Lumex) at the Department of Earth Sciences, University of Oxford. Methods were adapted from previous studies58,59. Approximately 50–100 mg of rock powder (depending on Hg enrichment) was measured into a glass measuring boat and its precise mass determined. Samples were heated in the Pyrolyzer to ~700 °C to volatilise the Hg within the sample. The gaseous Hg was then transported into the Analyzer, which measured Hg abundance in parts per billion (Supplementary Data 2). The machine was initially calibrated using six measurements of standard NIST 2587, with a Hg concentration of 290 ± 9 ppb and varying masses between 20 and 80 mg. During analysis, standards were analysed after every ten rock samples to ensure continuity of the calibration. The standard deviation of the standard was 31.1 ppb (n = 82), and the standard deviation of the offset from repeated samples from 22/10a−4 and E−8X (n = 72) was 26.7 ppb. For both records, sedimentary Hg shows highly fluctuating values from 6 to 1500 ppb (Fig. 2). An organic carbon association for this sedimentary Hg is supported by the relationships between Hg and TOC in 22/10a−4 and E−8X (Fig. 4), and by the very low Hg values in samples with low TOC.
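The Hg/TOC normalisation used in Figs. 2 and 4 can be sketched with a simple error envelope: a fixed analytical uncertainty on Hg (of the order of the ~27–31 ppb standard deviations quoted above) translates into larger Hg/TOC uncertainty where TOC is low. The first-order propagation below considers the Hg error only, an assumption made for illustration:

```python
def hg_over_toc(hg_ppb, toc_wt_pct, hg_sigma_ppb=27.0):
    """Return (Hg/TOC ratio, its uncertainty) for one sample.
    The error term scales as 1/TOC, so low-TOC samples are less certain."""
    ratio = hg_ppb / toc_wt_pct
    err = hg_sigma_ppb / toc_wt_pct  # first-order, Hg uncertainty only
    return ratio, err

print(hg_over_toc(300.0, 3.0))  # organic-rich sample: (100.0, 9.0)
print(hg_over_toc(300.0, 0.5))  # organic-lean sample: (600.0, 54.0)
```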
Palynology
Palynology samples were prepared using standard preparation procedures60. Samples were demineralised with hydrochloric (HCl) and hydrofluoric (HF) acids, and zinc bromide was used as a heavy liquid to separate and remove acid-resistant mineral grains. Slides were mounted using Elvacite. All palynomorphs were analysed with a Nikon transmitted light microscope, counting the total number of palynomorphs on a strew slide (Extended Data Table 1).
Data availability
The data that support the findings of this study are available within the Supplementary Information.
Acknowledgements
This work was supported by NERC Isotope Geoscience Steering Committee (NIGFSC) Grants IP-1547-0515 and IP-1915-0619 (to S.K.), a European Research Council Consolidator Grant no. ERC-2018-COG-818717-V-ECHO (to T.A.M.), and forms part of a PhD project by E.M. funded by the College of Engineering, Mathematical and Physical Sciences, University of Exeter. Thanks to J. Boserup (GEUS) for consolidating and cutting core E−8X, and to L. Percival and F. Palmeri (University of Oxford) for sample analyses. M.J.L. and J.B.R. publish with the approval of the Executive Director, British Geological Survey (NERC).
|
Abstract
The Paleocene–Eocene Thermal Maximum (PETM) was a period of geologically-rapid carbon release and global warming ~56 million years ago. Although modelling, outcrop and proxy records suggest volcanic carbon release occurred, it has not yet been possible to identify the PETM trigger, or if multiple reservoirs of carbon were involved. Here we report elevated levels of mercury relative to organic carbon—a proxy for volcanism—directly preceding and within the early PETM from two North Sea sedimentary cores, signifying pulsed volcanism from the North Atlantic Igneous Province likely provided the trigger and subsequently sustained elevated CO2. However, the PETM onset coincides with a mercury low, suggesting at least one other carbon reservoir released significant greenhouse gases in response to initial warming. Our results support the existence of ‘tipping points’ in the Earth system, which can trigger release of additional carbon reservoirs and drive Earth’s climate into a hotter state.
Introduction
The relative geological rapidity of warming and CO2 release at the Paleocene–Eocene Thermal Maximum (PETM), and the potential activation of feedbacks between warming and organic carbon reservoirs ~56 million years ago1, have relevance to understanding future Earth system responses to ongoing anthropogenic perturbation2. However, the major sources of carbon and the causal mechanisms triggering its release have remained under debate3,4,5,6,7, stymying our ability to draw firm inferences relevant to the future.
|
yes
|
Paleoclimatology
|
Did volcanic activity trigger the Paleocene-Eocene Thermal Maximum?
|
yes_statement
|
"volcanic" "activity" "triggered" the paleocene"-"eocene thermal maximum.. the paleocene"-"eocene thermal maximum was "triggered" by "volcanic" "activity".
|
https://pubmed.ncbi.nlm.nih.gov/34465785/
|
Paleocene/Eocene carbon feedbacks triggered by volcanic activity
|
Abstract
The Paleocene-Eocene Thermal Maximum (PETM) was a period of geologically-rapid carbon release and global warming ~56 million years ago. Although modelling, outcrop and proxy records suggest volcanic carbon release occurred, it has not yet been possible to identify the PETM trigger, or if multiple reservoirs of carbon were involved. Here we report elevated levels of mercury relative to organic carbon-a proxy for volcanism-directly preceding and within the early PETM from two North Sea sedimentary cores, signifying pulsed volcanism from the North Atlantic Igneous Province likely provided the trigger and subsequently sustained elevated CO2. However, the PETM onset coincides with a mercury low, suggesting at least one other carbon reservoir released significant greenhouse gases in response to initial warming. Our results support the existence of 'tipping points' in the Earth system, which can trigger release of additional carbon reservoirs and drive Earth's climate into a hotter state.
Figures
Fig. 1. Location maps of the North Atlantic Igneous Province (NAIP) and sediment core sites analysed in this study.
The simplified NAIP main map shows the estimated ranges of its various components. ‘Seaward dipping reflectors’ are well-defined seismic reflectors beneath the uppermost basalt, interpreted as large subaerial sheet lava flows associated with rifting. Other lava flows are thought to be a combination of subaerial and submarine, and sills were considered as intruded into the upper crust. The inset map is a Mollweide projection of modern continents (lines) on a palaeogeographic reconstruction, generated from (ref. ), of continental plates (grey) centred at 56 Ma.
Records are shown against depth in m core depth (below oil rig floor ‘Kelly bushing’ for 22/10a-4 and E-8X). a–d Well site 22/10a-4 (North Sea). e–g Well site E−8X (North Sea). h–i Core BH9/05 (Svalbard). Bulk sediment total organic carbon δ13CTOC is reported as ‰ VPDB, Vienna PeeDee Belemnite. Total organic carbon (TOC) is reported as % of the bulk weight. Hg is reported as parts per billion (ppb). The Hg/TOC envelope reflects analytical error, illustrating higher uncertainty in samples with lower TOC. The 22/10a-4 lithological (lith.) log, δ13CTOC, and TOC are from (ref. ). The BH9/05 δ13CTOC and age model are from (ref. ), and Hg data are from (ref. ). The position of the Paleocene/Eocene boundary, defined as the onset of the PETM, is shown as a horizontal dashed line.
The δ13C of both organic carbon (δ13Corg) and inorganic carbonate (δ13Ccarbonate) from North Sea well site E−8X (this study) and Bass River, are correlated to Svalbard core BH9/05 (ref. ) based on the overall shape of the records, with particular emphasis on the carbon isotope excursion (CIE) inflection points during the rapid onset and gradual recovery phases. The relative age model is based on two proposed solutions for cyclostratigraphy of core BH9/05 (ref. ). Bass River core depth in metres below the surface (mbs).
Fig. 4. Sedimentary Hg (ppb) against TOC (wt%) for well sites E−8X and 22/10a−4 and other published records.
a Line is the linear regression of E−8X and 22/10a−4 datasets combined (R2 = 0.22). Samples immediately before, during and after the Paleocene–Eocene Thermal Maximum (PETM) onset are indicated as red symbols (~2024.6–2026 m for E−8X; ~2608–2615 m for 22/10a−4), and outside of that area with grey symbols. Shaded 95% ellipses show the changing relationship between Hg and TOC over the studied interval, with many samples within the PETM onset (red symbols) exhibiting excess Hg with a steeper gradient to TOC than samples outside of this interval (grey symbols). Well site 22/10a−4 appears to have experienced greater excess Hg than E−8X, possibly as it was closer to the North Atlantic Igneous Province source. The dashed line shows the value of the average Phanerozoic bulk shale. b All data from E−8X and 22/10a−4 (red symbols) plotted with data from Svalbard BH9/05, Denmark Fur formation, Lomonosov Ridge and Bass River (grey symbols).
Fig. 5. Summary of Hg and Hg/total organic carbon (TOC) data from various sites at the onset of the PETM.
North Sea well sites 22/10a−4 and E−8X (this study) generally display higher values than Fur and Svalbard. Carbon isotope excursion (CIE) step 2 is shown as a dashed line and does not co-occur with a Hg or Hg/TOC spike in the sections. Core 22/10a−4 has previously been interpreted to have been partially impacted by transported carbon. Bulk sediment δ13CTOC is reported as ‰ VPDB, Vienna PeeDee Belemnite.
Fig. 6. Proxies for volcanism, carbon release and temperature in the time domain; thousands of years from the start of the PETM carbon isotope excursion (CIE).
|
Abstract
The Paleocene-Eocene Thermal Maximum (PETM) was a period of geologically-rapid carbon release and global warming ~56 million years ago. Although modelling, outcrop and proxy records suggest volcanic carbon release occurred, it has not yet been possible to identify the PETM trigger, or if multiple reservoirs of carbon were involved. Here we report elevated levels of mercury relative to organic carbon-a proxy for volcanism-directly preceding and within the early PETM from two North Sea sedimentary cores, signifying pulsed volcanism from the North Atlantic Igneous Province likely provided the trigger and subsequently sustained elevated CO2. However, the PETM onset coincides with a mercury low, suggesting at least one other carbon reservoir released significant greenhouse gases in response to initial warming. Our results support the existence of 'tipping points' in the Earth system, which can trigger release of additional carbon reservoirs and drive Earth's climate into a hotter state.
Figures
Fig. 1. Location maps of the North Atlantic Igneous Province (NAIP) and sediment core sites analysed in this study.
The simplified NAIP main map shows the estimated ranges of its various components. ‘Seaward dipping reflectors’ are well-defined seismic reflectors beneath the uppermost basalt, interpreted as large subaerial sheet lava flows associated with rifting. Other lava flows are thought to be a combination of subaerial and submarine, and sills were considered as intruded into the upper crust. The inset map is a Mollweide projection of modern continents (lines) on a palaeogeographic reconstruction, generated from (ref. ), of continental plates (grey) centred at 56 Ma.
Records are shown against depth in m core depth (below oil rig floor ‘Kelly bushing’ for 22/10a-4 and E-8X). a–d Well site 22/10a-4 (North Sea). e–g Well site E−8X (North Sea). h–i Core BH9/05 (Svalbard).
|
yes
|
Paleoclimatology
|
Did volcanic activity trigger the Paleocene-Eocene Thermal Maximum?
|
yes_statement
|
"volcanic" "activity" "triggered" the paleocene"-"eocene thermal maximum.. the paleocene"-"eocene thermal maximum was "triggered" by "volcanic" "activity".
|
https://phys.org/news/2021-08-earth-triggered-rapid-climate-million.html
|
'Tipping points' in Earth's system triggered rapid climate change 55 ...
|
'Tipping points' in Earth's system triggered rapid climate change 55 million years ago, research shows
Scientists have uncovered a fascinating new insight into what caused one of the most rapid and dramatic instances of climate change in the history of the Earth.
A team of researchers, led by Dr. Sev Kender from the University of Exeter, has made a pivotal breakthrough in understanding the cause of the Paleocene-Eocene Thermal Maximum (PETM) – an extreme global warming event that lasted around 150 thousand years and saw significant temperature rises.
Although previous studies have suggested volcanic activity contributed to the vast CO2 emissions that drove the rapid climate change, the trigger for the event is less clear.
In the new study, the researchers have identified elevated levels of mercury just before and at the outset of the PETM—which could be caused by expansive volcanic activity—in samples taken from sedimentary cores in the North Sea.
Crucially, the research on the rock samples also showed that in the early stages of the PETM there was a significant drop in mercury levels—suggesting that at least one other carbon reservoir released significant greenhouse gases as the phenomenon took hold.
The research indicates the existence of tipping points in the Earth's System—which could trigger the release of additional carbon reservoirs that drove the Earth's climate to unprecedented high temperatures.
The pioneering research, which also includes experts from the British Geological Survey, the University of Oxford, Heriot-Watt University and the University of California at Riverside, could give a fresh understanding of how modern-day climate change will affect the Earth in the centuries to come.
The research was published in Nature Communications on 31 August 2021.
Dr. Kender, a co-author on the study from the Camborne School of Mines, based at the University of Exeter's Penryn Campus in Cornwall, said: "Greenhouse gases such as CO2 and methane were released to the atmosphere at the start of the PETM in just a few thousand years.
"We wanted to test the hypothesis that this unprecedented greenhouse gas release was triggered by large volcanic eruptions. As volcanoes also release large quantities of mercury, we measured the mercury and carbon in the sediment cores to detect any ancient volcanism.
"The surprise was that we didn't find a simple relationship of increased volcanism during the greenhouse gas release. We found volcanism occurred only at the beginning phase, and so another source of greenhouse gasses must have been released after the volcanism."
The PETM phenomenon, which is one of the most rapid periods of warming in the Earth's history, occurred as Greenland pulled away from Europe.
While the reasons behind how such vast quantities of CO2 were released to trigger this extensive period of warming lay hidden for many years, scientists have recently suggested that volcanic eruptions were the main driver.
However, while carbon records and modeling have suggested vast amounts of volcanic carbon was released, it has not been possible to identify the trigger point for PETM—until now.
In the new study, the researchers studied two new sedimentary cores from the North Sea which showed high levels of mercury present, relative to organic carbon levels.
These samples showed numerous peaks in mercury levels both before, and at the outset of the PETM period—suggesting it was triggered by volcanic activity.
However, the study also showed that there was at least one other carbon reservoir that was subsequently released as the PETM took hold, as mercury levels appear to decline in the second part of its onset.
Dr. Kender added: "We were able to carry out this research as we have been working on exceptionally well preserved new core material with collaborators from the Geological Survey of Denmark and Greenland. The excellent preservation allowed detailed detection of both the carbon released to the atmosphere and the mercury. As the North Sea is close to the region of volcanism thought to have triggered the PETM, these cores were in an ideal position to detect the signals.
"The volcanism that caused the warming was probably vast deep intruded sills producing thousands of hydrothermal vents on a scale far beyond anything seen today. Possible secondary sources of greenhouse gasses were melting permafrost and sea floor methane hydrates, as a result of the initial volcanic warming."
Citation:
'Tipping points' in Earth's system triggered rapid climate change 55 million years ago, research shows (2021, August 31)
retrieved 15 August 2023
from https://phys.org/news/2021-08-earth-triggered-rapid-climate-million.html
|
As volcanoes also release large quantities of mercury, we measured the mercury and carbon in the sediment cores to detect any ancient volcanism.
"The surprise was that we didn't find a simple relationship of increased volcanism during the greenhouse gas release. We found volcanism occurred only at the beginning phase, and so another source of greenhouse gasses must have been released after the volcanism. "
The PETM phenomenon, which is one of the most rapid periods of warming in the Earth's history, occurred as Greenland pulled away from Europe.
While the reasons behind how such vast quantities of CO2 were released to trigger this extensive period of warming lay hidden for many years, scientists have recently suggested that volcanic eruptions were the main driver.
However, while carbon records and modeling have suggested vast amounts of volcanic carbon was released, it has not been possible to identify the trigger point for PETM—until now.
In the new study, the researchers studied two new sedimentary cores from the North Sea which showed high levels of mercury present, relative to organic carbon levels.
These samples showed numerous peaks in mercury levels both before, and at the outset of the PETM period—suggesting it was triggered by volcanic activity.
However, the study also showed that there was at least one other carbon reservoir that was subsequently released as the PETM took hold, as mercury levels appear to decline in the second part of its onset.
Dr. Kender added: "We were able to carry out this research as we have been working on exceptionally well preserved new core material with collaborators from the Geological Survey of Denmark and Greenland. The excellent preservation allowed detailed detection of both the carbon released to the atmosphere and the mercury. As the North Sea is close to the region of volcanism thought to have triggered the PETM, these cores were in an ideal position to detect the signals.
"The volcanism that caused the warming was probably vast deep intruded sills producing thousands of hydrothermal vents on a scale far beyond anything seen today.
|
yes
|
Paleoclimatology
|
Did volcanic activity trigger the Paleocene-Eocene Thermal Maximum?
|
yes_statement
|
"volcanic" "activity" "triggered" the paleocene"-"eocene thermal maximum.. the paleocene"-"eocene thermal maximum was "triggered" by "volcanic" "activity".
|
https://www.azocleantech.com/news.aspx?newsID=31378
|
Evaluating Carbon Released Before the Paleocene–Eocene ...
|
Evaluating Carbon Released Before the Paleocene–Eocene Thermal Maximum
A massive release of greenhouse gases, likely triggered by volcanic activity, caused a period of abrupt global warming known as the Paleocene–Eocene Thermal Maximum (PETM) 56 million years ago. A new study confirms that there was a second, smaller rise in atmospheric CO2 just before the PETM, with total carbon emissions similar to today’s levels.
Giant’s Causeway, Northern Ireland.: Basalt rock exposures of North Atlantic volcanism during the time of the PETM. Image Credit: Tali Babila, University of Southampton.
This resulted in a brief period of warming and acidification of the oceans. These two events, taken together, provide unique insights into how Earth’s current climate might respond if carbon emissions continue to rise.
Marine sediments containing the ancient remains of foraminifera, a group of microscopic organisms preserved as fossils, show evidence of environmental change at the PETM. Scientists can determine the temperature and pH of the oceans millions of years ago by analyzing the chemical composition of foraminifera shells.
The PETM is an important geologic climate event because it is one of the best comparisons to current climate change and can help inform us how the Earth System will respond to current and future warming.
Dr. Tali Babila, Study Lead Author and Postdoctoral Research Associate, University of Southampton
Despite years of research, the sequence of environmental changes leading up to the PETM had, until now, remained a mystery, because the start of the event was nearly erased in almost every marine record by the ocean acidification that took place.
By drilling sediment cores along the eastern United States, now part of the Atlantic Coastal Plain, a group of international geoscientists headed by the University of Southampton and the University of California Santa Cruz, along with Utah State University, KU Leuven, Penn State, and the US Geological Survey, overcame the lack of fossils from the time.
This region was a shallow continental shelf at the time of the PETM, which provided higher sedimentation rates given the proximity to land and some protection from ocean acidification, preserving some of the missing sediment record.
The researchers then used a ground-breaking laser sampling technique developed on fossilized plankton shells in sediment samples. They sampled microscopic plankton with a laser beam the width of a strand of human hair and sent the vaporized particles to a mass spectrometer. The boron chemistry of the shell was analyzed and used to estimate the acidity and thus carbon content of the oceans at the time.
The findings provided evidence for a significant increase in carbon emissions just before the PETM began, on the order of what we see released by human activities today. The study was published in the journal Science Advances.
This had previously been suggested as a possible trigger for the large scale global warming that followed but scientists lacked a direct measure of carbon dioxide until this study.
Dr. Tali Babila, Study Lead Author and Postdoctoral Research Associate, University of Southampton
Dr. Babila adds, “Usually, this type of analysis would require thousands of fossils which would not have been possible because of the scarcity of samples. Our novel application of the laser sampling technique is a major geoscience advancement bringing new and incredible detail never before seen in Earth’s past.”
The findings enable the researchers to draw closer parallels with anthropogenic climate change. The short-lived precursor event seems to be more akin to what might happen if carbon emissions were rapidly reduced, whereas the PETM’s larger carbon release more closely resembles the likely environmental consequences of continuing on the current path of rising atmospheric carbon dioxide emissions.
Whilst natural geological processes such as rock weathering and carbon burial meant Earth eventually recovered from the PETM, it took hundreds of thousands of years. So this is further proof that urgent action is needed today to rapidly cut the amount of carbon being released into the atmosphere to avoid long-lasting effects.
Dr. Tali Babila, Study Lead Author and Postdoctoral Research Associate, University of Southampton
|
Evaluating Carbon Released Before the Paleocene–Eocene Thermal Maximum
A massive release of greenhouse gases, likely triggered by volcanic activity, caused a period of abrupt global warming known as the Paleocene–Eocene Thermal Maximum (PETM) 56 million years ago. A new study confirms that there was a second, smaller rise in atmospheric CO2 just before the PETM, with total carbon emissions similar to today’s levels.
Giant’s Causeway, Northern Ireland.: Basalt rock exposures of North Atlantic volcanism during the time of the PETM. Image Credit: Tali Babila, University of Southampton.
This resulted in a brief period of warming and acidification of the oceans. These two events, taken together, provide unique insights into how Earth’s current climate might respond if carbon emissions continue to rise.
Marine sediments containing the ancient remains of foraminifera, a group of microscopic organisms preserved as fossils, show evidence of environmental change at the PETM. Scientists can determine the temperature and pH of the oceans millions of years ago by analyzing the chemical composition of foraminifera shells.
The PETM is an important geologic climate event because it is one of the best comparisons to current climate change and can help inform us how the Earth System will respond to current and future warming.
Dr. Tali Babila, Study Lead Author and Postdoctoral Research Associate, University of Southampton
Despite years of research, the sequence of environmental changes leading up to the PETM has remained a mystery because the start of the event was nearly erased in almost every marine record, until now, by the ocean acidification that took place.
Drilling sediment cores along the eastern United States, now part of the Atlantic Coastal Plain, a group of international geoscientists headed by the University of Southampton and University of California Santa Cruz, as well as Utah State University, KU Leuven, Penn State, and the US Geological Survey, overcame the lack of fossils from the time.
|
yes
|
Paleoclimatology
|
Did volcanic activity trigger the Paleocene-Eocene Thermal Maximum?
|
yes_statement
|
"volcanic" "activity" "triggered" the paleocene"-"eocene thermal maximum.. the paleocene"-"eocene thermal maximum was "triggered" by "volcanic" "activity".
|
https://www.sciencedaily.com/releases/2022/03/220316145749.htm
|
Effects of ancient carbon releases suggest possible scenarios for ...
|
A massive release of greenhouse gases, likely triggered by volcanic activity, caused a period of extreme global warming known as the Paleocene-Eocene Thermal Maximum (PETM) about 56 million years ago. A new study now confirms that the PETM was preceded by a smaller episode of warming and ocean acidification caused by a shorter burst of carbon emissions. The short-lived precursor event represents what might happen if current emissions can be shut down quickly, while the much more extreme global warming of the PETM shows the consequences of continuing to release carbon into the atmosphere at the current rate.
The new findings, published March 16 in Science Advances, indicate that the amount of carbon released into the atmosphere during this precursor event was about the same as the current cumulative carbon emissions from the burning of fossil fuels and other human activities. As a result, the short-lived precursor event represents what might happen if current emissions can be shut down quickly, while the much more extreme global warming of the PETM shows the consequences of continuing to release carbon into the atmosphere at the current rate.
"It was a short-lived burp of carbon equivalent to what we've already released from anthropogenic emissions," said coauthor James Zachos, professor of Earth and planetary sciences and Ida Benson Lynn Chair of Ocean Health at UC Santa Cruz. "If we turned off emissions today, that carbon would eventually get mixed into the deep sea and its signal would disappear, because the deep-sea reservoir is so huge."
This process would take hundreds of years -- a long time by human standards, but short compared to the tens of thousands of years it took for Earth's climate system to recover from the more extreme PETM.
The new findings are based on an analysis of marine sediments that were deposited in shallow waters along the U.S. Atlantic coast and are now part of the Atlantic Coastal Plain. At the time of the PETM, sea levels were higher, and much of Maryland, Delaware, and New Jersey were under water. The U.S. Geological Survey (USGS) has drilled sediment cores from this region which the researchers used for the study.
The PETM is marked in marine sediments by a major shift in carbon isotope composition and other evidence of dramatic changes in ocean chemistry as a result of the ocean absorbing large amounts of carbon dioxide from the atmosphere. The marine sediments contain the microscopic shells of tiny sea creatures called foraminifera that lived in the surface waters of the ocean. The chemical composition of these shells records the environmental conditions in which they formed and reveals evidence of warmer surface water temperatures and ocean acidification.
First author Tali Babila began the study as a postdoctoral fellow working with Zachos at UC Santa Cruz and is now at the University of Southampton, U.K. Novel analytical methods developed at Southampton enabled the researchers to analyze the boron isotope composition of individual foraminifera to reconstruct a detailed record of ocean acidification. This was part of a suite of geochemical analyses they used to reconstruct environmental changes during the precursor event and the main PETM.
"Previously, thousands of foraminifera fossil shells were needed for boron isotope measurement. Now we are able to analyze a single shell that's only the size of a grain of sand," Babila said.
Evidence of a precursor warming event had been identified previously in sediments from the continental section at Big Horn Basin in Wyoming and a few other sites. Whether it was a global signal remained unclear, however, as it was absent from deep-sea sediment cores. Zachos said this makes sense because sedimentation rates in the deep ocean are slow, and the signal from a short-lived event would be lost due to mixing of sediments by bottom-dwelling marine life.
"The best hope for seeing the signal would be in shallow marine basins where sedimentation rates are higher," he said. "The problem there is that deposition is episodic and erosion is more likely. So there's not a high likelihood of capturing it."
The USGS and others have drilled numerous sediment cores (or sections) along the Atlantic Coastal Plain. The researchers found that the PETM is present in all of those sections, and several also capture the precursor event. Two sections from Maryland (at South Dover Bridge and Cambridge-Dover Airport) are the focus of the new study.
"Here we have the full signal, and a couple of other locations capture part of it. We believe it's the same event they found in the Bighorn Basin," Zachos said.
Based on their analyses, the team concluded that the precursor signal in the Maryland sections represents a global event that probably lasted for a few centuries, or possibly several millennia at most.
The two carbon pulses -- the short-lived precursor and the much larger and more prolonged carbon emissions that drove the PETM -- led to profoundly different mechanisms and time scales for the recovery of the Earth's carbon cycle and climate system. The carbon absorbed by the surface waters during the precursor event got mixed into the deep ocean within a thousand years or so. The carbon emissions during the PETM, however, exceeded the buffering capacity of the ocean, and removal of the excess carbon depended on much slower processes such as the weathering of silicate rocks over tens of thousands of years.
Zachos noted that there are important differences between Earth's climate system today and during the Paleocene -- notably the presence of polar ice sheets today, which increase the sensitivity of the climate to greenhouse warming.
In addition to Babila and Zachos, the coauthors of the paper include Gavin Foster and Christopher Standish at University of Southampton; Donald Penman at Utah State University; Monika Doubrawa, Robert Speijer, and Peter Stassen at KU Leuven, Belgium; Timothy Bralower at Pennsylvania State University; and Marci Robinson and Jean Self-Trail at the USGS. This work was funded in part by the National Science Foundation.
|
A massive release of greenhouse gases, likely triggered by volcanic activity, caused a period of extreme global warming known as the Paleocene-Eocene Thermal Maximum (PETM) about 56 million years ago. A new study now confirms that the PETM was preceded by a smaller episode of warming and ocean acidification caused by a shorter burst of carbon emissions. The short-lived precursor event represents what might happen if current emissions can be shut down quickly, while the much more extreme global warming of the PETM shows the consequences of continuing to release carbon into the atmosphere at the current rate.
A massive release of greenhouse gases, likely triggered by volcanic activity, caused a period of extreme global warming known as the Paleocene-Eocene Thermal Maximum (PETM) about 56 million years ago. A new study now confirms that the PETM was preceded by a smaller episode of warming and ocean acidification caused by a shorter burst of carbon emissions.
The new findings, published March 16 in Science Advances, indicate that the amount of carbon released into the atmosphere during this precursor event was about the same as the current cumulative carbon emissions from the burning of fossil fuels and other human activities. As a result, the short-lived precursor event represents what might happen if current emissions can be shut down quickly, while the much more extreme global warming of the PETM shows the consequences of continuing to release carbon into the atmosphere at the current rate.
"It was a short-lived burp of carbon equivalent to what we've already released from anthropogenic emissions," said coauthor James Zachos, professor of Earth and planetary sciences and Ida Benson Lynn Chair of Ocean Health at UC Santa Cruz. "If we turned off emissions today, that carbon would eventually get mixed into the deep sea and its signal would disappear, because the deep-sea reservoir is so huge. "
This process would take hundreds of years -- a long time by human standards, but short compared to the tens of thousands of years it took for Earth's climate system to recover from the more extreme PETM.
|
yes
|
Paleoclimatology
|
Did volcanic activity trigger the Paleocene-Eocene Thermal Maximum?
|
yes_statement
|
"volcanic" "activity" "triggered" the paleocene"-"eocene thermal maximum.. the paleocene"-"eocene thermal maximum was "triggered" by "volcanic" "activity".
|
https://www.techexplorist.com/caused-extreme-global-warming-event-lasted-around-150-thousand-years/40970/
|
What caused an extreme global warming event that lasted for ...
|
Past studies have suggested that a rise in CO2 emissions from volcanic activity triggered rapid climate change, but the precise trigger for the event has remained unclear.
The Paleocene-Eocene Thermal Maximum (PETM), also called the Initial Eocene Thermal Maximum (IETM), was an extreme global warming event that lasted for around 150 thousand years. What triggered the PETM remains unclear.
A new study uncovered fascinating new insight into what caused one of the most rapid and dramatic instances of climate change in the history of the Earth. The study suggests that tipping points in the Earth system triggered rapid climate change 55 million years ago.
Scientists identified elevated levels of mercury just before and at the outset of the PETM. This detection was made in samples taken from sedimentary cores in the North Sea.
Analysis of rock samples reveals a significant drop in mercury levels during the early stages of the PETM, which means that at least one other carbon reservoir released significant greenhouse gases as the phenomenon took hold.
The research indicates the existence of tipping points in the Earth’s System—which could trigger the release of additional carbon reservoirs that drove the Earth’s climate to unprecedented high temperatures.
Dr. Kender, a co-author on the Camborne School of Mines study, based at the University of Exeter’s Penryn Campus in Cornwall, said: “Greenhouse gases such as CO2 and methane were released to the atmosphere at the start of the PETM in just a few thousand years.
“We wanted to test the hypothesis that large volcanic eruptions triggered this unprecedented greenhouse gas release. As volcanoes also release large quantities of mercury, we measured the mercury and carbon in the sediment cores to detect any ancient volcanism.”
“The surprise was that we didn’t find a simple relationship of increased volcanism during the greenhouse gas release. We found volcanism occurred only at the beginning phase, and so another source of greenhouse gasses must have been released after the volcanism.”
Analysis of new sedimentary cores from the North Sea shows the presence of high levels of mercury. These samples showed numerous peaks in mercury levels before and at the outset of the PETM period—suggesting it was triggered by volcanic activity.
Dr. Kender added: “We were able to carry out this research as we have been working on exceptionally well preserved new core material with collaborators from the Geological Survey of Denmark and Greenland. The excellent preservation allowed precise detection of both the carbon released to the atmosphere and the mercury. As the North Sea is close to the region of volcanism thought to have triggered the PETM, these cores were ideal for detecting the signals.”
|
Past studies have suggested that a rise in CO2 emissions from volcanic activity triggered rapid climate change, but the precise trigger for the event has remained unclear.
The Paleocene-Eocene Thermal Maximum (PETM), also called the Initial Eocene Thermal Maximum (IETM), was an extreme global warming event that lasted for around 150 thousand years. What triggered the PETM remains unclear.
A new study uncovered fascinating new insight into what caused one of the most rapid and dramatic instances of climate change in the history of the Earth. The study suggests that tipping points in the Earth system triggered rapid climate change 55 million years ago.
Scientists identified elevated levels of mercury just before and at the outset of the PETM. This detection was made in samples taken from sedimentary cores in the North Sea.
Analysis of rock samples reveals a significant drop in mercury levels during the early stages of the PETM, which means that at least one other carbon reservoir released significant greenhouse gases as the phenomenon took hold.
The research indicates the existence of tipping points in the Earth’s System—which could trigger the release of additional carbon reservoirs that drove the Earth’s climate to unprecedented high temperatures.
Dr. Kender, a co-author on the Camborne School of Mines study, based at the University of Exeter’s Penryn Campus in Cornwall, said: “Greenhouse gases such as CO2 and methane were released to the atmosphere at the start of the PETM in just a few thousand years.
“We wanted to test the hypothesis that large volcanic eruptions triggered this unprecedented greenhouse gas release. As volcanoes also release large quantities of mercury, we measured the mercury and carbon in the sediment cores to detect any ancient volcanism.”
“The surprise was that we didn’t find a simple relationship of increased volcanism during the greenhouse gas release. We found volcanism occurred only at the beginning phase, and so another source of greenhouse gasses must have been released after the volcanism.”
|
no
|
World Religions
|
Do Hindus believe in a single god?
|
yes_statement
|
"hindus" "believe" in a "single" "god".. hinduism teaches the belief in a "single" "god".
|
https://www.history.com/topics/religion/hinduism
|
Hinduism - Origins, Facts & Beliefs | HISTORY
|
Hinduism is the world’s oldest religion, according to many scholars, with roots and customs dating back more than 4,000 years. Today, with more than 1 billion followers, Hinduism is the third-largest religion worldwide, after Christianity and Islam. Roughly 94 percent of the world’s Hindus live in India. Because the religion has no specific founder, it’s difficult to trace its origins and history. Hinduism is unique in that it’s not a single religion but a compilation of many traditions and philosophies: Hindus worship a number of different gods and minor deities, honor a range of symbols, respect several different holy books and celebrate with a wide variety of traditions, holidays and customs. Though the caste system in India began with Hinduism, that system is no longer rigidly enforced. Today there are four major sects of Hinduism: Shaivism, Vaishnava, Shaktism and Smarta, as well as a number of smaller sects with their own religious practices.
Hinduism Beliefs, Symbols
Some basic Hindu concepts include:
Hinduism embraces many religious ideas. For this reason, it’s sometimes referred to as a “way of life” or a “family of religions,” as opposed to a single, organized religion.
Most forms of Hinduism are henotheistic, which means they worship a single deity, known as “Brahman,” but still recognize other gods and goddesses. Followers believe there are multiple paths to reaching their god.
Hindus believe in the doctrines of samsara (the continuous cycle of life, death, and reincarnation) and karma (the universal law of cause and effect).
One of the key thoughts of Hinduism is “atman,” or the belief in soul. This philosophy holds that living creatures have a soul, and they’re all part of the supreme soul. The goal is to achieve “moksha,” or salvation, which ends the cycle of rebirths to become part of the absolute soul.
One fundamental principle of the religion is the idea that people’s actions and thoughts directly determine their current life and future lives.
Hindus strive to achieve dharma, which is a code of living that emphasizes good conduct and morality.
Hindus revere all living creatures and consider the cow a sacred animal.
Food is an important part of life for Hindus. Most don’t eat beef or pork, and many are vegetarians.
Hinduism is closely related to other Indian religions, including Buddhism, Sikhism and Jainism.
A swastika symbol featured on a tile at Hindu temple on Diu Island, India. The symbol is one of good luck and good fortune.
There are two primary symbols associated with Hinduism, the om and the swastika. The word swastika means "good fortune" or "being happy" in Sanskrit, and the symbol represents good luck. (A hooked, diagonal variation of the swastika later became associated with Germany’s Nazi Party when they made it their symbol in 1920.)
The om symbol is composed of three Sanskrit letters and represents three sounds (a, u and m), which when combined are considered a sacred sound. The om symbol is often found at family shrines and in Hindu temples.
Hinduism Holy Books
Hindus value many sacred writings as opposed to one holy book.
The primary sacred texts, known as the Vedas, were composed around 1500 B.C. This collection of verses and hymns was written in Sanskrit and contains revelations received by ancient saints and sages.
The Vedas are made up of:
The Rig Veda
The Samaveda
Yajurveda
Atharvaveda
Hindus believe that the Vedas transcend all time and don’t have a beginning or an end.
The Upanishads, the Bhagavad Gita, 18 Puranas, Ramayana and Mahabharata are also considered important texts in Hinduism.
Origins of Hinduism
Most scholars believe Hinduism started somewhere between 2300 B.C. and 1500 B.C. in the Indus Valley, near modern-day Pakistan. But many Hindus argue that their faith is timeless and has always existed.
Unlike other religions, Hinduism has no one founder but is instead a fusion of various beliefs.
Around 1500 B.C., the Indo-Aryan people migrated to the Indus Valley, and their language and culture blended with that of the indigenous people living in the region. There’s some debate over who influenced whom more during this time.
The period when the Vedas were composed became known as the “Vedic Period” and lasted from about 1500 B.C. to 500 B.C. Rituals, such as sacrifices and chanting, were common in the Vedic Period.
The Epic, Puranic and Classic Periods took place between 500 B.C. and A.D. 500. Hindus began to emphasize the worship of deities, especially Vishnu, Shiva and Devi.
The concept of dharma was introduced in new texts, and other faiths, such as Buddhism and Jainism, spread rapidly.
Hinduism vs. Buddhism
Hinduism and Buddhism have many similarities. Buddhism, in fact, arose out of Hinduism, and both believe in reincarnation, karma and that a life of devotion and honor is a path to salvation and enlightenment.
But some key differences exist between the two religions: Buddhism rejects the caste system of Hinduism and does away with the rituals, the priesthood and the gods that are integral to the Hindu faith.
Medieval and Modern Hindu History
The Medieval Period of Hinduism lasted from about A.D. 500 to 1500. New texts emerged, and poet-saints recorded their spiritual sentiments during this time.
In the 7th century, Muslim Arabs began invading areas in India. During parts of the Muslim Period, which lasted from about 1200 to 1757, Islamic rulers prevented Hindus from worshipping their deities, and some temples were destroyed.
Mahatma Gandhi
Between 1757 and 1947, the British controlled India. At first, the new rulers allowed Hindus to practice their religion without interference. But later, Christian missionaries sought to convert and westernize the people.
Many reformers emerged during the British Period. The well-known politician and peace activist, Mahatma Gandhi, led a movement that pushed for India’s independence.
The partition of India occurred in 1947, and Gandhi was assassinated in 1948. British India was split into what are now the independent nations of India and Pakistan, and Hinduism became the major religion of India.
Starting in the 1960s, many Hindus migrated to North America and Britain, spreading their faith and philosophies to the western world.
Indian statesman and activist Mahatma Gandhi, 1940.
Hindu Gods
An early 18th-century depiction of Devi revered by Brahma, Vishnu and Shiva.
Hindus worship many gods and goddesses in addition to Brahman, who is believed to be the supreme God force present in all things.
Some of the most prominent deities include:
Brahma: the god responsible for the creation of the world and all living things
Vishnu: the god that preserves and protects the universe
Shiva: the god that destroys the universe in order to recreate it
Devi: the goddess that fights to restore dharma
Krishna: the god of compassion, tenderness and love
Lakshmi: the goddess of wealth and purity
Saraswati: the goddess of learning
Places of Worship
Hindu worship, which is known as “puja,” typically takes place in the Mandir (temple). Followers of Hinduism can visit the Mandir any time they please.
Hindus can also worship at home, and many have a special shrine dedicated to certain gods and goddesses.
The giving of offerings is an important part of Hindu worship. It’s a common practice to present gifts, such as flowers or oils, to a god or goddess.
Additionally, many Hindus take pilgrimages to temples and other sacred sites in India.
Hinduism Sects
Hinduism has many sects, and the following are often considered the four major denominations.
Shaivism is one of the largest denominations of Hinduism, and its followers worship Shiva, sometimes known as “The Destroyer,” as their supreme deity.
Shaivism spread from southern India into Southeast Asia and is practiced in Vietnam, Cambodia and Indonesia as well as India. Like the other major sects of Hinduism, Shaivism considers the Vedas and the Upanishads to be sacred texts.
Vaishnavism is considered the largest Hindu sect, with an estimated 640 million followers, and is practiced worldwide. It includes sub-sects that are familiar to many non-Hindus, including Ramaism and Krishnaism.
Vaishnavism recognizes many deities, including Vishnu, Lakshmi, Krishna and Rama, and the religious practices of Vaishnavism vary from region to region across the Indian subcontinent.
Shaktism is somewhat unique among the four major traditions of Hinduism in that its followers worship a female deity, the goddess Shakti (also known as Devi).
Shaktism is sometimes practiced as a monotheistic religion, while other followers of this tradition worship a number of goddesses. This female-centered denomination is sometimes considered complementary to Shaivism, which recognizes a male deity as supreme.
The Smarta or Smartism tradition of Hinduism is somewhat more orthodox and restrictive than the other three mainstream denominations. It tends to draw its followers from the Brahman upper caste of Indian society.
Smartism followers worship five deities: Vishnu, Shiva, Devi, Ganesh and Surya. Their temple at Sringeri is generally recognized as the center of worship for the denomination.
Some Hindus elevate the Hindu trinity, which consists of Brahma, Vishnu and Shiva. Others believe that all the deities are a manifestation of one.
Hindu Caste System
The caste system is a social hierarchy in India that divides Hindus based on their karma and dharma. Many scholars believe the system dates back more than 3,000 years.
The four main castes (in order of prominence) include:
Brahmin: the intellectual and spiritual leaders
Kshatriyas: the protectors and public servants of society
Vaisyas: the skillful producers
Shudras: the unskilled laborers
Many subcategories also exist within each caste. The “Untouchables” are a class of citizens that are outside the caste system and considered to be in the lowest level of the social hierarchy.
For centuries, the caste system determined every aspect of a person’s social, professional and religious status in India.
|
For this reason, it’s sometimes referred to as a “way of life” or a “family of religions,” as opposed to a single, organized religion.
Most forms of Hinduism are henotheistic, which means they worship a single deity, known as “Brahman,” but still recognize other gods and goddesses. Followers believe there are multiple paths to reaching their god.
Hindus believe in the doctrines of samsara (the continuous cycle of life, death, and reincarnation) and karma (the universal law of cause and effect).
One of the key thoughts of Hinduism is “atman,” or the belief in soul. This philosophy holds that living creatures have a soul, and they’re all part of the supreme soul. The goal is to achieve “moksha,” or salvation, which ends the cycle of rebirths to become part of the absolute soul.
One fundamental principle of the religion is the idea that people’s actions and thoughts directly determine their current life and future lives.
Hindus strive to achieve dharma, which is a code of living that emphasizes good conduct and morality.
Hindus revere all living creatures and consider the cow a sacred animal.
Food is an important part of life for Hindus. Most don’t eat beef or pork, and many are vegetarians.
Hinduism is closely related to other Indian religions, including Buddhism, Sikhism and Jainism.
A swastika symbol featured on a tile at Hindu temple on Diu Island, India. The symbol is one of good luck and good fortune.
There are two primary symbols associated with Hinduism, the om and the swastika. The word swastika means "good fortune" or "being happy" in Sanskrit, and the symbol represents good luck. (A hooked, diagonal variation of the swastika later became associated with Germany’s Nazi Party when they made it their symbol in 1920.)
The om symbol is composed of three Sanskrit letters and represents three sounds (a, u and m), which when combined are considered a sacred sound.
|
yes
|
World Religions
|
Do Hindus believe in a single god?
|
yes_statement
|
"hindus" "believe" in a "single" "god".. hinduism teaches the belief in a "single" "god".
|
https://en.wikipedia.org/wiki/God_in_Hinduism
|
God in Hinduism - Wikipedia
|
Henotheism was the term used by scholars such as Max Müller to describe the theology of Vedic religion.[32][33] Müller noted that the hymns of the Rigveda, the oldest scripture of Hinduism, mention many deities, but praises them successively as the "one ultimate, supreme God" (called saccidānanda in some traditions), alternatively as "one supreme Goddess",[34] thereby asserting that the essence of the deities was unitary (ekam), and the deities were nothing but pluralistic manifestations of the same concept of the divine (God).[33][35][36]
The idea that there can be and are plural perspectives for the same divine or spiritual principle repeats in the Vedic texts. For example, other than hymn 1.164 with this teaching,[30] the more ancient hymn 5.3 of the Rigveda states:
You at your birth are Varuna, O Agni.
When you are kindled, you are Mitra.
In you, O son of strength, all gods are centered.
You are Indra to the mortal who brings oblation.
You are Aryaman, when you are regarded as having
the mysterious names of maidens, O Self-sustainer.
Related terms to henotheism are monolatrism and kathenotheism.[39] The latter term is an extension of "henotheism", from καθ' ἕνα θεόν (kath' hena theon) — "one god at a time".[40] Henotheism refers to a pluralistic theology wherein different deities are viewed to be of a unitary, equivalent divine essence.[33] Some scholars prefer the term monolatry to henotheism, to discuss religions where a single god is central, but the existence or the position of other gods is not denied.[39][36] Another term related to henotheism is "equitheism", referring to the belief that all gods are equal.[41]
The Vedic era conceptualization of the divine or the One, states Jeaneane Fowler, is more abstract than a monotheistic God, it is the Reality behind and of the phenomenal universe.[45] The Vedic hymns treat it as "limitless, indescribable, absolute principle", thus the Vedic divine is something of a panentheism rather than simple henotheism.[45]
In late Vedic era, around the start of Upanishadic age (c. 800 BCE), theosophical speculations emerge that develop concepts which scholars variously call nondualism or monism, as well as forms of non-theism and pantheism.[45][46][47] An example of the questioning of the concept of God, in addition to henotheistic hymns found therein, are in later portions of the Rigveda, such as the Nasadiya Sukta.[48]
Hinduism calls the metaphysical absolute concept as Brahman, incorporating within it the transcendent and immanent reality.[49][50][51] Different schools of thought interpret Brahman as either personal, impersonal or transpersonal. Ishwar Chandra Sharma describes it as "Absolute Reality, beyond all dualities of existence and non-existence, light and darkness, and of time, space and cause".[52]
Influential ancient and medieval Hindu philosophers, states philosophy professor Roy Perrett, teach their spiritual ideas without a world created ex nihilo and "effectively manage without God altogether".[53] In Hindu philosophy, there are many different schools.[54] Its non-theist traditions such as Samkhya, early Nyaya, Mimamsa and many within Vedanta such as Advaita do not posit the existence of an almighty, omnipotent, omniscient, omnibenevolent God (monotheistic God), while its theistic traditions posit a personal God left to the choice of the Hindu. The major schools of Hindu philosophy explain morality and the nature of existence through the karma and samsara doctrines, as in other Indian religions.[55][56][57]
Monotheism is the belief in a single creator God and the lack of belief in any other Creator.[58][59] Hinduism is not a monolithic faith and different sects may or may not posit or require such a belief. Religion is considered a personal belief in Hinduism and followers are free to choose the different interpretations within the framework of karma and samsara. Many forms of Hinduism believe in a monotheistic God, such as Krishnaism, some schools of Vedanta, and Arya Samaj.[60][61][62]
Madhvacharya was misperceived and misrepresented by both Christian missionaries and Hindu writers during the colonial era scholarship.[66][67] The similarities in the primacy of one God, dualism and distinction between man and God, devotion to God, the son of God as the intermediary, predestination, the role of grace in salvation, as well as the similarities in the legends of miracles in Christianity and Madhvacharya's Dvaita tradition fed these stories.[66][67] Among Christian writers, G. A. Grierson creatively asserted that Madhva's ideas evidently were "borrowed from Christianity, quite possibly promulgated as a rival to the central doctrine of that faith".[68] Among Hindu writers, according to Sarma, S. C. Vasu creatively translated Madhvacharya's works to identify Madhvacharya with Christ, rather than compare their ideas.[69]
Modern scholarship rules out the influence of Christianity on Madhvacharya,[65][70] as there is no evidence that there ever was a Christian settlement where Madhvacharya grew up and lived, or that there was a sharing or discussion of ideas between someone with knowledge of the Bible and Christian narratives, and him.[67] Furthermore, many adherents consider the similarities to be superficial and insubstantial; for example, Madhvacharya postulates three co-eternal fundamental realities, consisting of Supreme Being (Vishnu or paramatman), individual Self (jīvātman), and inanimate matter.[71]
Many traditions within Hinduism share the Vedic idea of a metaphysical ultimate reality and truth called Brahman. According to Jan Gonda, Brahman denoted the "power immanent in the sound, words, verses and formulas of Vedas" in the earliest Vedic texts. The early Vedic religious understanding of Brahman underwent a series of abstractions in the Hindu scriptures that followed the Vedic scriptures. These scriptures would reveal a vast body of insights into the nature of Brahman as originally revealed in the Vedas. These Hindu traditions that emerged from or identified with the Vedic scriptures and that maintained the notion of a metaphysical ultimate reality would identify that ultimate reality as Brahman. Hindu adherents to these traditions within Hinduism revere Hindu deities and, indeed, all of existence, as aspects of the Brahman.[72][73] The deities in Hinduism are not considered to be almighty, omnipotent, omniscient and omnibenevolent, and spirituality is considered to be seeking the ultimate truth that is possible by a number of paths.[74][75][76] Like other Indian religions, in Hinduism, deities are born, they live and they die in every kalpa (eon, cycle of existence).[77]
In Hinduism, Brahman connotes the highest Universal Principle, the Ultimate Reality in the universe.[78][79][80] In major schools of Hindu philosophy, it is the material, efficient, formal and final cause of all that exists.[79][81][82] It is the pervasive, genderless, infinite, eternal truth and bliss which does not change, yet is the cause of all changes.[78][83][84] Brahman as a metaphysical concept is the single binding unity behind the diversity in all that exists in the universe.[78][85]
While Hinduism sub-schools such as Advaita Vedanta emphasize the complete equivalence of Brahman and Atman, they also expound on Brahman as saguna Brahman—the Brahman with attributes—and nirguna Brahman—the Brahman without attributes.[107] The nirguna Brahman is the Brahman as it really is, whereas the saguna Brahman is posited as a means to realizing nirguna Brahman; the Hinduism schools declare saguna Brahman to be ultimately illusory.[108] The concept of the saguna Brahman, such as in the form of avatars, is considered in these schools of Hinduism to be a useful symbolism, path and tool for those who are still on their spiritual journey, but the concept is finally cast aside by the fully enlightened.[108]
The Bhakti movement of Hinduism built its theosophy around two concepts of Brahman—Nirguna and Saguna.[109] Nirguna Brahman was the concept of the Ultimate Reality as formless, without attributes or quality.[110] Saguna Brahman, in contrast, was envisioned and developed as with form, attributes and quality.[110] The two had parallels in the ancient pantheistic unmanifest and theistic manifest traditions, respectively, and traceable to the Arjuna-Krishna dialogue in the Bhagavad Gita.[109][111] It is the same Brahman, but viewed from two perspectives: one from Nirguni knowledge-focus and the other from Saguni love-focus, united as Krishna in the Gita.[111] Nirguna bhakta's poetry were Jnana-shrayi, or had roots in knowledge.[109] Saguna bhakta's poetry were Prema-shrayi, or with roots in love.[109] In Bhakti, the emphasis is reciprocal love and devotion, where the devotee loves God, and God loves the devotee.[111]
The Nirguna and Saguna Brahman concepts of the Bhakti movement have baffled scholars, particularly the Nirguni tradition, because it offers, states David Lorenzen, "heart-felt devotion to a God without attributes, without even any definable personality".[112] Yet given the "mountains of Nirguni bhakti literature", adds Lorenzen, bhakti for Nirguna Brahman has been a part of the reality of the Hindu tradition along with the bhakti for Saguna Brahman.[112] These were two alternate ways of imagining God during the bhakti movement.[109]
The Yogasutras of Patanjali use the term Ishvara in 11 verses: I.23 through I.29, II.1, II.2, II.32 and II.45. Ever since the Sutras appeared, Hindu scholars have debated who or what Isvara is. These commentaries range from defining Isvara as a "personal god" to a "special self" to "anything that has spiritual significance to the individual".[113][114] Whicher explains that while Patanjali's terse verses can be interpreted as either theistic or non-theistic, Patanjali's concept of Isvara in Yoga philosophy functions as a "transformative catalyst or guide for aiding the yogin on the path to spiritual emancipation".[115]
This sutra of Yoga philosophy of Hinduism adds the characteristics of Isvara as that special Self which is unaffected (अपरामृष्ट, aparamrsta) by one's obstacles/hardships (क्लेश, klesha), one's circumstances created by past or one's current actions (कर्म, karma), one's life fruits (विपाक, vipâka), and one's psychological dispositions/intentions (आशय, ashaya).[117][118]
Among the various Bhakti-path sects of Hinduism, which built upon the Yoga school of Hinduism, Isvara means only a specific deity such as Shiva.
Svayam Bhagavan, a Sanskrit theological term, is the concept of the absolute representation of the monotheistic God as Bhagavan himself within Hinduism. Earlier commentators such as Madhvacharya translated the term Svayam Bhagavan as "he who has bhagavatta", meaning "he who has the quality of possessing all good qualities".[120] The term is seldom used to refer to other forms of Krishna and Vishnu within the context of certain religious texts such as the Bhagavata Purana, and also within other sects of Vaishnavism.
The theological interpretation of Svayam Bhagavān differs with each tradition, and the literal translation of the term has been understood in several distinct ways. Translated from the Sanskrit language, the term literally means "Bhagavan Himself" or "directly Bhagavan".[119] Others have translated it simply as "the Lord Himself".[121]
The Gaudiya Vaishnava tradition often translates it within its perspective as primeval Lord or original Personality of Godhead, but also considers the terms Supreme Personality of Godhead and Supreme God as equivalents of the term Svayam Bhagavan, and may also choose to apply these terms to Vishnu, Narayana and many of their associated Avatars.[122][123] Note, however, that although it is usual to speak of Vishnu as the source of the avatars, this is only one of the names of the god of Vaishnavism, who is also known as Narayana, Vasudeva and Krishna, and behind each of those names there is a divine figure with attributed supremacy in Vaishnavism.[124]
In other sub-traditions of Vaishnavism, Krishna is one of many aspects and avatars of Vishnu (Rama is another, for example), recognized and understood from an eclectic assortment of perspectives and viewpoints.[125] Vaishnavism is one of the earliest single-God-focused traditions that derives its heritage from the Vedas.[129][130][136]
When followers of Vishnu-centered sampradayas of Vaishnavism describe Krishna as "Svayam Bhagavan" it refers to their belief that Krishna is among the highest and fullest of all avatars and is considered to be the "paripurna Avatara", complete in all respects and the same as the original.[137] According to them Krishna is described in the Bhagavata Purana as the Purnavatara (or complete manifestation) of the Bhagavan, while other incarnations are called partial.
^Chakravarti, Sitansu S. (1991). "The Hindu Perspective". Hinduism, a Way of Life. Delhi: Motilal Banarsidass. pp. 70–71. ISBN978-81-208-0899-7. OCLC925707936. According to Hinduism, different religions are but alternate ways toward the same spiritual goal. Thus, although spirituality is a necessary quest for human beings, the religion one follows does not have to be the same for everyone. [...] The first Hindu scripture, the Rigveda, dating back to at least 4,000 years, says: "Truth is one, though the wise call it by different names." The Mahabharata, which includes the Gita, is replete with sayings meaning that religious streams, though separate, head toward the same ocean of divinity.
^Eric Ackroyd (2009). Divinity in Things: Religion Without Myth. Sussex Academic Press. p. 78. ISBN978-1-84519-333-1., Quote: "The jealous God who says, "Thou shalt have no other gods but me" belongs to the Jewish-Christian-Muslim tradition, but not to the Hindu tradition, which tolerates all gods but is not a monotheism, monism, yes, but not monotheism."
^Guy Beck (2005), Alternative Krishnas: Regional and Vernacular Variations on a Hindu Deity, State University of New York Press, ISBN978-0791464151, page 169 note 11
^Bruce Trigger (2003), Understanding Early Civilizations: A Comparative Study, Cambridge University Press, ISBN978-0521822459, pages 441-442, Quote: [Historically...] people perceived far fewer differences between themselves and the gods than the adherents of modern monotheistic religions. Deities were not thought to be omniscient or omnipotent and were rarely believed to be changeless or eternal."
^Knapp, S. (2005). The Heart of Hinduism: The Eastern Path to Freedom, Empowerment and Illumination -. iUniverse. "Krishna is the primeval Lord, the original Personality of Godhead, so He can expand Himself into unlimited forms with all potencies." page 161
^Bhagawan Swaminarayan bicentenary commemoration volume, 1781-1981. p. 154: ...Shri Vallabhacharya [and] Shri Swaminarayan... Both of them designate the highest reality as Krishna, who is both the highest avatara and also the source of other avataras. To quote R. Kaladhar Bhatt in this context: "In this transcendental devotion (Nirguna Bhakti), the sole Deity and only" is Krishna. New Dimensions in Vedanta Philosophy - Page 154, Sahajānanda, Vedanta. 1981
^Flood, Gavin D. (1996). An introduction to Hinduism. Cambridge, UK: Cambridge University Press. p. 341. ISBN978-0-521-43878-0. Retrieved 21 April 2008. "Early Vaishnava worship focuses on three deities who become fused together, namely Vasudeva-Krishna, Krishna-Gopala and Narayana, who in turn all become identified with Vishnu. Put simply, Vasudeva-Krishna and Krishna-Gopala were worshiped by groups generally referred to as Bhagavatas, while Narayana was worshipped by the Pancaratra sect."
^"Sapthagiri". tirumala.org. Archived from the original on 21 November 2008. Retrieved 3 May 2008.
Parashara Maharishi, Vyasa's father had devoted the largest Amsa (part) in Vishnu Purana to the description of Sri Krishna Avatara the Paripoorna Avatara. And according to Lord Krishna's own (instructions) upadesha, "he who knows (the secrets of) His (Krishna's) Janma (birth) and Karma (actions) will not remain in samsara (punar janma naiti- maam eti) and attain Him after leaving the mortal coil." (BG 4.9). Parasara Maharishi ends up Amsa 5 with a phalashruti in an identical vein (Vishnu Purana .5.38.94)
Matchett, Freda (2000), Krsna, Lord or Avatara? the relationship between Krsna and Visnu: in the context of the Avatara myth as presented by the Harivamsa, the Visnupurana and the Bhagavatapurana, Surrey: Routledge, ISBN978-0-7007-1281-6
|
|
no
|
World Religions
|
Do Hindus believe in a single god?
|
yes_statement
|
"hindus" "believe" in a "single" "god".. hinduism teaches the belief in a "single" "god".
|
https://www.qcc.cuny.edu/socialsciences/ppecorino/phil_of_religion_text/chapter_2_religions/hinduism.htm
|
Hinduism
|
You should read
enough of the materials presented in this section concerning the tradition
of Hinduism in order to understand how this tradition displays the
characteristics or elements that make a tradition one that would be termed a
religion. The tradition presented in the materials below is one of the
world's living religions. Your reading should indicate why this is so.
·THE ABSOLUTE: what do the
believers hold as most important? What is the ultimate source of value and
significance? For many, but not all religions, this is given some form of
agency and portrayed as a deity (deities). It might be a concept or ideal
as well as a figure.
·THE WORLD: What does the belief
system say about the world? Its origin? its relation to the Absolute? Its
future?
·HUMANS: Where do they come
from? How do they fit into the general scheme of things? What is their
destiny or future?
·THE PROBLEM FOR HUMANS: What is
the principle problem for humans that they must learn to deal with and
solve?
·THE SOLUTION FOR HUMANS: How
are humans to solve or overcome the fundamental problems ?
·COMMUNITY AND ETHICS: What is
the moral code as promulgated by the religion? What is the idea of
community and how humans are to live with one another?
·AN INTERPRETATION OF HISTORY:
Does the religion offer an explanation for events occurring in time? Is
there a single linear history with time coming to an end or does time
recycle? Is there a plan working itself out in time and detectable in the
events of history?
·RITUALS AND SYMBOLS: What are
the major rituals, holy days, garments, ceremonies and symbols?
·LIFE AFTER DEATH: What is the
explanation given for what occurs after death? Does the religion support a
belief in souls or spirits which survive the death of the body? What is the
belief in what occurs afterwards? Is there a resurrection of the body?
Reincarnation? Dissolution? Extinction?
·RELATIONSHIP TO OTHER
RELIGIONS: What is the prescribed manner in which believers are to regard
other religions and the followers of other religions?
**********************************************************
For those who wish to listen to information on the world's
religions here is a listing of PODCASTS on RELIGIONS by Cynthia
Eller.
Hinduism is a religion with various Gods and
Goddesses. According to Hinduism, three Gods rule the world. Brahma: the
creator; Vishnu: the preserver and Shiva: the destroyer. Lord Vishnu did
his job of preserving the world by incarnating himself in different forms
at times of crisis.
The three Lords that rule the world have consorts and they are
goddesses too. Consort of Brahma is Sarasvati; goddess of learning.
Vishnu's consort is Lakshmi; goddess of wealth and prosperity. Shiva's
consort is Parvati who is worshipped as Kali or Durga.
Besides these Gods and Goddesses there are a number of other Gods and
Goddesses. To name a few of them, there is Ganesh; who has an elephant's
head and he is also a son of Shiva and Parvati, Hanuman; who is an ape,
Surya; Lord of sun, Ganga Ma; Goddess of river Ganges; Samundra; Lord of
the sea, Indra; king of the Gods (but he isn't an important God), Prithvi;
Goddess of earth, Shakti; Goddess of strength. The Hindus call their
Goddesses 'Ma' meaning mother.
Some gods have more than one name. Shiva is also known as Shankar,
Mahadev, Natraj, Mahesh and many other names. Ganesh is also called
Ganpati. God Vishnu incarnated
9 times to do his job, and in each of his appearances he took a different
form, which is also worshipped as a God. Among his appearances, he appeared
as Rama, Krishna, Narsimha, Parsuram and Buddha. Krishna also has
different names, Gopal; Kishan; Shyam and other names. He also has other
titles with meanings like 'Basuri Wala' which means the flute musician and
'Makhan Chor' which means the butter stealer. There are also Gods who can
change their forms, for example: Parvati can change into Kali or Durga.
Not all of these Gods are worshiped by all Hindus. Some Hindus worship
only Vishnu. Others worship only Shiva. Others worship only the Goddesses
and call these Goddesses collectively as Shakti meaning strength. Many of
these Goddess worshipers worship Parvati in her images as Kali or Durga.
People who worship Shiva or Vishnu also worship characters and images
connected with these Gods. Vishnu worshipers (Vaishnaites) also worship
his appearances. Shiva's worshipers (Shaivites) also worship images of
bull called Nandi, who was Shiva's carrier and a unique stone design
connected to Shiva. There are also Hindus who worship all the Gods. There
are some Gods who are worshiped all over India like Rama and Krishna and
other Gods who are worshiped more in one region than the other like Ganesh
who is worshiped mainly in west India. Hindus also worship Gods according
to their personal needs. People who engage in wrestling, body building and
other physical sports worship Hanuman, who in Hindu legends was an ape
with lot of physical strength. Businessmen worship Lakshmi, Goddess of
wealth.
Though these Hindus worship different idols, there are many Hindus who
believe in one God and perceive these different Gods and Goddesses as
different images of the same one God. According to their beliefs idolatry
is the wrong interpretation of Hinduism.
Hindus believe in reincarnation. The basic belief is that a person's
fate is determined according to his deeds. These deeds in Hinduism are
called 'Karma'. A soul who does good Karma in this life will be awarded
with a better life in the next incarnation. Souls who do bad Karma will be
punished for their sins, if not in this incarnation then in the next
incarnation and will continue to be born in this world again and again.
The good souls will be liberated from the circle of rebirth and get
redemption which is called 'Moksha' meaning freedom. Hindus normally
cremate their dead ones, so that the soul of the dead would go to heaven,
except in a few cases of Hindu saints, who are believed to have attained 'Moksha'.
The main Hindu books are the four Vedas. They are Rig Veda, Sama Veda,
Yajur Veda and Atharva Veda. The concluding portions of the Vedas are
called Upanisads. There are also other holy books like Puranas, Ramayana,
Mahabharta etc. The different Gods and Goddesses in the Hindu mythology
are derived from these books. Ramayana and Mahabharta are the most popular
Hindu books.
The main story of Ramayana
is the story of Lord Rama. Rama was born in a royal family and was supposed
to be the king, but because of his step-mother, he was forced into exile
from his kingdom for fourteen years. During this period his consort Sita
was kidnapped by a demon called Ravan, who was king of Lanka. Rama with
the help of his brother, Lakshman, and an army of monkeys under the
leadership of Hanuman, rescued Sita. Many Indians believe that the present
day Sri Lanka was then the kingdom of Lanka.
Mahabharta is a family epic. In this epic the Pandva family and the
Kaurav family who are cousins fight with each other for the control over a
kingdom. The Kaurav family, which consists of 100 brothers, rules an empire.
The five Pandva brothers ask for a small kingdom which belongs to them.
The Kauravs refuse to give the Pandvas the kingdom so there is a war
between the Pandvas and the Kauravs in which it is believed that all the
kingdoms of that period in India took part. In this war the Pandvas, with
the help of Lord Krishna win the war. Before the commencement of the war,
while the two armies are facing each other, one of the Pandva brothers
Arjun gets depressed. Arjun is depressed because he has to fight against
people whom he knows, loves and respects. At this point Krishna, (who was
also a king of a kingdom, and participated in this war only as the chariot
driver for Arjun) convinces Arjun to fight. Krishna lectures Arjun about
life, human beings and their religious duties. He explains to Arjun that
he belongs to a warrior caste and has to fight, because that is his
destiny in this incarnation. Those chapters in the Mahabharta which
are Krishna's discourses on religious philosophy are called Bhagvad Gita.
Because of its importance the Bhagvad Gita is considered a separate
holy book. Another Hindu holy book that deals with religious duties is
'Law of Manu' or the 'Dharma Shastra'.
In the wars that occur in the holy books, as in Mahabharta, the
different sides had different war weapons which had characters similar to
modern day war weapons. In some stories the traveling vehicles were
normally birds and animals. But these animals and birds had features
similar to modern day aircraft. There were even aircraft said to fly
faster than light. The main war weapons were bows and arrows. But these
arrows were more like modern missiles than simple arrows. These arrows
were capable of carrying bombs with destructive power similar to modern
day chemical, biological or even atom bombs. Other arrows could be
targeted on specific human beings. There were even arrows capable of
neutralizing other arrows, similar to modern day anti-missiles.
Hindus have many holy places. Badrinath, Puri, Dwarkha and Rameshwaram
are four holiest places for the Hindus. Other holy places are Varanasi,
Rishikesh, Nasik, Pushkar, Ujjain and other places. Some rivers are also
holy to them. Among them are Godavri, Yamuna and above all Ganges which
the Indians call Ganga. Another holy river is Sarasvati and it is
invisible. Hindus also worship and respect some animals and birds like
the cobra, apes, peacocks and the cow. Hindus also respect some trees and
bushes. The most famous and most respected bush is the Tulsi.
Some of the Hindu customs, which exist or existed, have no basis
in Hindu scriptures but became part of Hinduism in different ways
and fashions. For example, the Hindus see the cow as a sacred animal.
Religiously there is no reason to see the cow as sacred and it is believed
that cows were made 'sacred' to prevent their slaughter during periods of
droughts and hunger. Cobra worship also is not found in Hindu scriptures.
This custom became part of Hinduism when some Indian tribes who used to
worship the cobra adopted Hinduism. Burning
of the widow on the dead husband's pyre also has no religious
justification. This custom, outlawed in 1829, was probably brought to
India by the Scythians invaders of India. Among the Scythians it was a
custom to bury the dead king with his mistresses or wives, servants and
other things so that they could continue to serve him in the next world.
When these Scythians arrived in India, they adopted the Indian system of
funeral, which was cremating the dead. And so instead of burying their
kings with their servants, they started cremating their dead kings with
their surviving wives. The Scythians were warrior tribes and they were given a status of
warrior castes in Hindu religious hierarchy. The different castes who
claimed warrior status or higher also adopted this custom.
There are four castes
in Hindu religion arranged in a hierarchy. The highest caste is Brahman,
and they are the priest caste of Hinduism. After them are the Kshatria,
who are the warrior caste. After them is the Vaishya caste, who are
business people. And after them are the Sudra, who are the common peasants
and workers. Below these four castes there are the casteless, the
untouchables. The four castes were not allowed to have any physical
contact with the untouchables.
Each caste is divided into many sub-castes. The religious word for
caste is Varna and for sub-caste Jat or Jati. But sometimes in English the
term caste is used in both cases. Religiously, people are born in a caste
and it cannot be changed. Each caste has some compulsory duties, which its
members must do. Each caste has professional limits which decide what
professions its members can follow. Caste members can have social
relations only with its caste members. Religiously this includes marriage
and even eating only with caste members. Please note that socially the
caste system is different from the religious form of caste system.
How Hinduism originated is a difficult question. The accepted
theory is that Hinduism was evolved after the historical meeting between
the Aryans and Dravidians.
Some claim that Hinduism is mainly an Aryan culture whereas the others
claim that it is mainly a Dravidian culture. Religiously the Vedas were
given by Brahma.
I. Introduction

Hinduism,
religion that originated in India and is still practiced by most of its
inhabitants, as well as by those whose families have migrated from India
to other parts of the world (chiefly East Africa, South Africa, Southeast
Asia, the East Indies, and England). The word Hindu is derived from
the Sanskrit word sindhu (river; more specifically, the
Indus); the Persians in the 5th century BC called the
Hindus by that name, identifying them as the people of the land of the
Indus. The Hindus define their community as those who believe in the
Vedas (see Veda)
or those who follow the way (dharma) of the four classes (varnas)
and stages of life (ashramas).
Hinduism
is a major world religion, not merely by virtue of its many followers
(estimated at more than 700 million) but also because of its profound
influence on many other religions during its long, unbroken history, which
dates from about 1500 BC. The corresponding influence of
these various religions on Hinduism (it has an extraordinary tendency to
absorb foreign elements) has greatly contributed to the religion's
syncretism: the wide variety of beliefs and practices that it
encompasses. Moreover, the geographic, rather than ideological, basis of
the religion (the fact that it comprises whatever all the people of India
have believed and done) has given Hinduism the character of a social and
doctrinal system that extends to every aspect of human life.

II. Fundamental Principles

The canon of Hinduism is basically defined by what people do rather than what
they think. Consequently, far more uniformity of behavior than of belief
is found among Hindus, although very few practices or beliefs are shared
by all. A few usages are observed by almost all Hindus: reverence for Brahmans
and cows; abstention from meat (especially beef); and marriage within the
caste (jati), in the hope of producing male heirs. Most Hindus
chant the gayatri hymn to the sun at dawn, but little agreement exists as
to what other prayers should be chanted. Most Hindus worship Shiva,
Vishnu,
or the Goddess (Devi), but they also worship hundreds of additional minor
deities peculiar to a particular village or even to a particular family.
Although Hindus believe and do many apparently contradictory
things (contradictory not merely from one Hindu to the next, but also
within the daily religious life of a single Hindu), each individual
perceives an orderly pattern that gives form and meaning to his or her own
life. No doctrinal or ecclesiastical hierarchy exists in Hinduism, but the
intricate hierarchy of the social system (which is inseparable from the
religion) gives each person a sense of place within the whole.
A. Texts

The
ultimate canonical authority for all Hindus is the Vedas. The oldest of
the four Vedas is the Rig-Veda,
which was composed in an ancient form of the Sanskrit
language in northwest India. This text, probably composed between
about 1500 and 1000 BC and consisting of 1028 hymns to a
pantheon of gods, has been memorized syllable by syllable and preserved
orally to the present day. The Rig-Veda was supplemented by two
other Vedas, the Yajur-Veda (the textbook for sacrifice) and the Sama-Veda
(the hymnal). A fourth book, the Atharva-Veda (a collection of
magic spells), was probably added about 900 BC. At this
time, too, the Brahmanas (lengthy Sanskrit texts expounding priestly
ritual and the myths behind it) were composed. Between the 8th century BC
and the 5th century BC, the Upanishads
were composed; these are mystical-philosophical meditations on the meaning
of existence and the nature of the universe.
The
Vedas, including the Brahmanas and the Upanishads, are regarded as
revealed canon (shruti, "what has been heard [from the gods]"),
and no syllable can be changed. The actual content of this canon, however,
is unknown to most Hindus. The practical compendium of Hinduism is
contained in the Smriti, or "what is remembered," which is also
orally preserved. No prohibition is made against improvising variations
on, rewording, or challenging the Smriti. The Smriti
includes the two great Sanskrit epics, the Mahabharata
and the Ramayana;
the many Sanskrit Puranas,
including 18 great Puranas and several dozen more subordinate Puranas; and
the many Dharmashastras and Dharmasutras (textbooks on
sacred law), of which the one attributed to the sage Manu is the most
frequently cited.
The
two epics are built around central narratives. The Mahabharata
tells of the war between the Pandava brothers, led by their cousin Krishna,
and their cousins the Kauravas. The Ramayana tells of the journey
of Rama
to recover his wife Sita after she is stolen by the demon Ravana. But
these stories are embedded in a rich corpus of other tales and discourses
on philosophy, law, geography, political science, and astronomy, so that
the Mahabharata (about 200,000 lines long) constitutes a kind of
encyclopedia or even a literature, and the Ramayana (more than
50,000 lines long) is comparable. Although it is therefore impossible to
fix their dates, the main bodies of the Mahabharata and the Ramayana
were probably composed between 400 BC and AD
400. Both, however, continued to grow even after they were translated into
the vernacular languages of India (such as Tamil and Hindi) in the
succeeding centuries.
The
Puranas were composed after the epics, and several of them develop themes
found in the epics (for instance, the Bhagavata-Purana describes
the childhood of Krishna, a topic not elaborated in the Mahabharata).
The Puranas also include subsidiary myths, hymns of praise, philosophies,
iconography, and rituals. Most of the Puranas are predominantly sectarian
in nature; the great Puranas (and some subordinate Puranas) are dedicated
to the worship of Shiva or Vishnu or the Goddess, and several subordinate
Puranas are devoted to Ganesha or Skanda or the sun. In addition, they all
contain a great deal of nonsectarian material, probably of earlier origin,
such as the five marks, or topics (panchalakshana), of the
Puranas: the creation of the universe, the destruction and re-creation of
the universe, the dynasties of the solar and lunar gods, the genealogy of
the gods and holy sages, and the ages of the founding fathers of humankind
(the Manus).

B. Philosophy

Incorporated
in this rich literature is a complex cosmology. Hindus believe that the
universe is a great, enclosed sphere, a cosmic egg, within which are
numerous concentric heavens, hells, oceans, and continents, with India at
the center. They believe that time is both degenerative (going from the
golden age, or Krita Yuga, through two intermediate periods of decreasing
goodness, to the present age, or Kali Yuga) and cyclic: At the end of
each Kali Yuga, the universe is destroyed by fire and flood, and a new
golden age begins. Human life, too, is cyclic: After death, the soul
leaves the body and is reborn in the body of another person, animal,
vegetable, or mineral. This condition of endless entanglement in activity
and rebirth is called samsara (see Transmigration).
The precise quality of the new birth is determined by the accumulated
merit and demerit that result from all the actions, or karma,
that the soul has committed in its past life or lives. All Hindus believe
that karma accrues in this way; they also believe, however, that it can be
counteracted by expiations and rituals, by "working out" through
punishment or reward, and by achieving release (moksha) from the
entire process of samsara through the renunciation of all worldly
desires.
Hindus
may thus be divided into two groups: those who seek the sacred and profane
rewards of this world (health, wealth, children, and a good rebirth), and
those who seek release from the world. The principles of the first way of
life were drawn from the Vedas and are represented today in temple
Hinduism and in the religion of Brahmans and the caste system. The second
way, which is prescribed in the Upanishads, is represented not only in the
cults of renunciation (sannyasa) but also in the ideological ideals
of most Hindus.
The
worldly aspect of Hinduism originally had three Vedas, three classes of
society (varnas), three stages of life (ashramas), and three
goals of a man (purusharthas), the goals or needs of women
being seldom discussed in the ancient texts. To the first three Vedas was
added the Atharva-Veda. The first three classes (Brahman, or
priestly; Kshatriya, or warrior; and Vaisya, or general populace) were
derived from the tripartite division of ancient Indo-European society,
traces of which can be detected in certain social and religious
institutions of ancient Greece and Rome. To the three classes were added
the Shudras, or servants, after the Indo-Aryans settled into the
Punjab and began to move down into the Ganges Valley. The three original ashramas
were the chaste student (brahmachari), the householder (grihastha),
and the forest-dweller (vanaprastha). They were said to owe three
debts: study of the Vedas (owed to the sages); a son (to the ancestors);
and sacrifice (to the gods). The three goals were artha (material
success), dharma (righteous social behavior), and kama
(sensual pleasures). Shortly after the composition of the first
Upanishads, during the rise of Buddhism (6th century BC), a
fourth ashrama and a corresponding fourth goal were added: the
renouncer (sannyasi), whose goal is release (moksha) from
the other stages, goals, and debts.
Each
of these two ways of being Hindu developed its own complementary
metaphysical and social systems. The caste system and its supporting
philosophy of svadharma (one's own dharma) developed within
the worldly way. Svadharma comprises the beliefs that each person
is born to perform a specific job, marry a specific person, eat certain
food, and beget children to do likewise and that it is better to fulfill
one's own dharma than that of anyone else (even if one's own is low or
reprehensible, such as that of the Harijan caste, the Untouchables, whose
mere presence was once considered polluting to other castes). The primary
goal of the worldly Hindu is to produce and raise a son who will make
offerings to the ancestors (the shraddha ceremony). The second,
renunciatory way of Hinduism, on the other hand, is based on the
Upanishadic philosophy of the unity of the individual soul, or atman,
with Brahman, the universal world soul, or godhead. The full realization
of this is believed to be sufficient to release the worshiper from
rebirth; in this view, nothing could be more detrimental to salvation than
the birth of a child. Many of the goals and ideals of renunciatory
Hinduism have been incorporated into worldly Hinduism, particularly the
eternal dharma (sanatana dharma), an absolute and general ethical
code that purports to transcend and embrace all subsidiary, relative,
specific dharmas. The most important tenet of sanatana dharma for
all Hindus is ahimsa, the absence of a desire to injure, which is
used to justify vegetarianism (although it does not preclude physical
violence toward animals or humans, or blood sacrifices in temples).
In
addition to sanatana dharma, numerous attempts have been made to
reconcile the two Hinduisms. The Bhagavad-Gita
describes three paths to religious realization. To the path of works, or
karma (here designating sacrificial and ritual acts), and the path of
knowledge, or jnana (the Upanishadic meditation on the godhead),
was added a mediating third path, the passionate devotion to God, or bhakti,
a religious ideal that came to combine and transcend the other two paths. Bhakti
in a general form can be traced in the epics and even in some of the
Upanishads, but its fullest statement appears only after the Bhagavad-Gita.
It gained momentum from the vernacular poems and songs to local deities,
particularly those of the Alvars, Nayanars, and Virashaivas of southern
India and the Bengali worshipers of Krishna (see below).
In
this way Hindus have been able to reconcile their Vedantic monism (see Vedanta)
with their Vedic polytheism: All the individual Hindu gods (who are said
to be saguna, "with attributes") are subsumed under the godhead
(nirguna, "without attributes"), from which they all emanate.
Therefore, most Hindus are devoted (through bhakti) to gods whom
they worship in rituals (through karma) and whom they understand (through jnana)
as aspects of ultimate reality, the material reflection of which is all an
illusion (maya) wrought by God in a spirit of play (lila).

C. Gods

Although
all Hindus acknowledge the existence and importance of a number of gods
and demigods, most individual worshipers are primarily devoted to a single
god or goddess, of whom Shiva, Vishnu, and the Goddess are the most
popular.
Shiva
embodies the apparently contradictory aspects of a god of ascetics and a
god of the phallus. He is the deity of renouncers, particularly of the
many Shaiva sects that imitate him: Kapalikas, who carry skulls to reenact
the myth in which Shiva beheaded his father, the incestuous Brahma, and
was condemned to carry the skull until he found release in Benares;
Pashupatas, worshipers of Shiva Pashupati, Lord of Beasts; and
Aghoris, to whom nothing is horrible, yogis who eat ordure or flesh
in order to demonstrate their complete indifference to pleasure or pain.
Shiva is also the deity whose phallus (linga) is the central shrine
of all Shaiva temples and the personal shrine of all Shaiva householders;
his priapism is said to have resulted in his castration and the subsequent
worship of his severed member. In addition, Shiva is said to have appeared
on earth in various human, animal, and vegetable forms, establishing his
many local shrines.
To
his worshipers, Vishnu is all-pervasive and supreme; he is the god from
whose navel a lotus sprang, giving birth to the creator (Brahma). Vishnu
created the universe by separating heaven and earth, and he rescued it on
a number of subsequent occasions. He is also worshiped in the form of a
number of descents, or avatars (see Avatar), roughly,
incarnations. Several of these are animals that recur in
iconography: the fish, the tortoise, and the boar. Others are the dwarf (Vamana,
who became a giant in order to trick the demon Bali out of the entire
universe); the man-lion (Narasimha, who disemboweled the demon
Hiranyakashipu); the Buddha (who became incarnate in order to teach a
false doctrine to the pious demons); Rama-with-an-Axe (Parashurama, who
beheaded his unchaste mother and destroyed the entire class of Kshatriyas
to avenge his father); and Kalki (the rider on the white horse, who will
come to destroy the universe at the end of the age of Kali). Most popular
by far are Rama (hero of the Ramayana) and Krishna (hero of the Mahabharata
and the Bhagavata-Purana), both of whom are said to be avatars of
Vishnu, although they were originally human heroes.
Along
with these two great male gods, several goddesses are the object of
primary devotion. They are sometimes said to be various aspects of the
Goddess, Devi. In some myths Devi is the prime mover, who commands the
male gods to do the work of creation and destruction. As Durga, the
Unapproachable, she kills the buffalo demon Mahisha in a great battle; as
Kali, the Black, she dances in a mad frenzy on the corpses of those she
has slain and eaten, adorned with the still-dripping skulls and severed
hands of her victims. The Goddess is also worshiped by the Shaktas,
devotees of Shakti, the female power. This sect arose in the medieval
period along with the Tantrists, whose esoteric ceremonies involved a
black mass in which such forbidden substances as meat, fish, and wine were
eaten and forbidden sexual acts were performed ritually. In many Tantric
cults the Goddess is identified as Krishna's consort Radha.
More
peaceful manifestations of the Goddess are seen in wives of the great
gods: Lakshmi, the meek, docile wife of Vishnu and a fertility goddess in
her own right; and Parvati, the wife of Shiva and the daughter of the
Himalayas. The great river goddess Ganga (the Ganges), also worshiped
alone, is said to be a wife of Shiva; a goddess of music and literature,
Sarasvati, associated with the Saraswati River, is the wife of Brahma.
Many of the local goddesses of India (Manasha, the goddess of snakes, in
Bengal, and Minakshi in Madurai) are married to Hindu gods, while others,
such as Shitala, goddess of smallpox, are worshiped alone. These unmarried
goddesses are feared for their untamed powers and angry, unpredictable
outbursts.
Many
minor gods are assimilated into the central pantheon by being identified
with the great gods or with their children and friends. Hanuman, the
monkey god, appears in the Ramayana as the cunning assistant of
Rama in the siege of Lanka. Skanda, the general of the army of the gods,
is the son of Shiva and Parvati, as is Ganesha, the elephant-headed god of
scribes and merchants, the remover of obstacles, and the object of worship
at the beginning of any important enterprise.

D. Worship and Ritual

The
great and lesser Hindu gods are worshiped in a number of concentric
circles of public and private devotion. Because of the social basis of
Hinduism, the most fundamental ceremonies for every Hindu are those that
involve the rites of passage (samskaras). These begin with birth
and the first time the child eats solid food (rice). Later rites include
the first haircutting (for a young boy) and the purification after the
first menstruation (for a girl); marriage; and the blessings upon a
pregnancy, to produce a male child and to ensure a successful delivery and
the child's survival of the first six dangerous days after birth (the
concern of Shashti, goddess of Six). Last are the funeral ceremonies
(cremation and, if possible, the sprinkling of ashes in a holy river such
as the Ganges) and the yearly offerings to dead ancestors. The most
notable of the latter is the pinda, a ball of rice and sesame seeds
given by the eldest male child so that the ghost of his father may pass
from limbo into rebirth. In daily ritual, a Hindu (generally the wife, who
is thought to have more power to intercede with the gods) makes offerings
(puja) of fruit or flowers before a small shrine in the house. She
also makes offerings to local snakes or trees or obscure spirits
(benevolent and malevolent) dwelling in her own garden or at crossroads or
other magical places in the village.
Many
villages, and all sizable towns, have temples where priests perform
ceremonies throughout the day: sunrise prayers and noises to awaken the
god within the holy of holies (the garbagriha, or
womb-house); bathing, clothing, and fanning the god; feeding the god
and distributing the remains of the food (prasada) to worshipers.
The temple is also a cultural center where songs are sung, holy texts read
aloud (in Sanskrit and vernaculars), and sunset rituals performed; devout
laity may be present at most of these ceremonies. In many temples,
particularly those sacred to goddesses (such as the Kalighat temple to
Kali, in Kolkata), goats are sacrificed on special occasions. The
sacrifice is often carried out by a special low-caste priest outside the
bounds of the temple itself. Thousands of simple local temples exist; each
may be nothing more than a small stone box enclosing a formless effigy
swathed in cloth, or a slightly more imposing edifice with a small tank in
which to bathe. In addition, India has many temples of great size as well
as complex temple cities, some hewn out of caves (such as Elephanta and
Ellora), some formed of great monolithic slabs (such as those at
Mahabalipuram), and some built of imported and elaborately carved stone
slabs (such as the temples at Khajuraho, Bhubaneshwar, Madurai, and
Kanjeevaram). On special days, usually once a year, the image of the god
is taken from its central shrine and paraded around the temple complex on
a magnificently carved wooden chariot (ratha).
Many
holy places or shrines (tirthas, literally fords), such as
Rishikesh in the Himalayas or Benares on the Ganges, are the objects of
pilgrimages from all over India; others are essentially local shrines.
Certain shrines are most frequently visited at special yearly festivals.
For example, Prayaga, where the Ganges and Yamuna rivers join at Allahabad,
is always sacred, but it is crowded with pilgrims during the Kumbha Mela
festival each January and overwhelmed by the millions who come to the
special ceremony held every 12 years. In Bengal, the goddess Durga's visit
to her family and return to her husband Shiva are celebrated every year at
Durgapuja, when images of the goddess are created out of papier-mâché,
worshiped for ten days, and then cast into the Ganges in a dramatic
midnight ceremony ringing with drums and glowing with candles. Some
festivals are celebrated throughout India: Diwali, the festival of lights
in early winter; and Holi, the spring carnival, when members of all castes
mingle and let down their hair, sprinkling one another with cascades of
red powder and liquid, symbolic of the blood that was probably used in
past centuries.
III. History

The
basic beliefs and practices of Hinduism cannot be understood outside their
historical context. Although the early texts and events are impossible to
date with precision, the general chronological development is clear.

A. Vedic Civilization

About
2000 BC, a highly developed civilization flourished in the
Indus Valley, around the sites of Harappa and Mohenjo-Daro. By about 1500 BC,
when the Indo-Aryan tribes invaded India, this civilization was in a
serious decline. It is therefore impossible to know, on present evidence,
whether or not the two civilizations had any significant contact. Many
elements of Hinduism that were not present in Vedic civilization (such as
worship of the phallus and of goddesses, bathing in temple tanks, and the
postures of yoga) may have been derived from the Indus civilization,
however. See Indus
Valley Civilization.
By
about 1500 BC, the Indo-Aryans had settled in the Punjab,
bringing with them their predominantly male Indo-European pantheon of gods
and a simple warrior ethic that was vigorous and worldly, yet also
profoundly religious. Gods of the Vedic pantheon survive in later
Hinduism, but no longer as objects of worship: Indra, king of the gods and
god of the storm and of fertility; Agni, god of fire; and Soma, god of the
sacred, intoxicating Soma plant and the drink made from it. By 900 BC
the use of iron allowed the Indo-Aryans to move down into the lush Ganges
Valley, where they developed a far more elaborate civilization and social
system. By the 6th century BC, Buddhism
had begun to make its mark on India and what was to be more than a
millennium of fruitful interaction with Hinduism.

B. Classical Hindu Civilization

From
about 200 BC to AD 500 India was invaded by
many northern powers, of which the Shakas (Scythians) and Kushanas had the
greatest impact. This was a time of great flux, growth, syncretism, and
definition for Hinduism and is the period in which the epics, the Dharmashastras,
and the Dharmasutras took final form. Under the Gupta Empire
(320-550?), when most of northern India was under a single power,
classical Hinduism found its most consistent expression: the sacred laws
were codified, the great temples began to be built, and myths and rituals
were preserved in the Puranas.
C. Rise of Devotional Movements

In
the post-Gupta period, a less rigid and more eclectic form of Hinduism
emerged, with more dissident sects and vernacular movements. At this time,
too, the great devotional movements arose. Many of the sects that emerged
during the period from 800 to 1800 are still active in India today.
Most
of the bhakti movements are said to have been founded by
saintsthe gurus by whom the tradition has been handed down in unbroken
lineage, from guru to disciple (chela). This lineage, in addition
to a written canon, is the basis for the authority of the bhakti
sect. Other traditions are based on the teachings of such philosophers as
Shankara and Ramanuja. Shankara was the exponent of pure monism, or
nondualism (Advaita Vedanta), and of the doctrine that all that appears to
be real is merely illusion. Ramanuja espoused the philosophy of qualified
nondualism (Vishishta-Advaita), an attempt to reconcile belief in a
godhead without attributes (nirguna) with devotion to a god with
attributes (saguna), and to solve the paradox of loving a god with
whom one is identical.
The
philosophies of Shankara and Ramanuja were developed in the context of the
six great classical philosophies (darshanas) of India: the Karma
Mimamsa (action investigation); the Vedanta (end of the
Vedas), in which tradition the work of Shankara and Ramanuja should be
placed; the Sankhya system, which describes the opposition between an
inert male spiritual principle (purusha) and an active female
principle of matter or nature (prakriti), subdivided into the three
qualities (gunas) of goodness (sattva), passion (rajas),
and darkness (tamas); the Yoga system; and the highly metaphysical
systems of Vaisheshika (a kind of atomic realism) and Nyaya (logic, but of
an extremely theistic nature).
D. Medieval Hinduism

Parallel
with these complex Sanskrit philosophical investigations, vernacular songs
were composed, transmitted orally, and preserved locally throughout India.
They were composed during the 7th, 8th, and 9th centuries in Tamil and
Kannada by the Alvars, Nayanars, and Virashaivas and during the 15th
century by the Rajasthani poet Mira Bai, in the Braj dialect. In the 16th
century in Bengal, Chaitanya founded a sect of erotic mysticism,
celebrating the union of Krishna and Radha in a Tantric theology heavily
influenced by Tantric Buddhism. Chaitanya believed that both Krishna and
Radha were incarnate within him, and he believed that the village of Vrindaban,
where Krishna grew up, had become manifest once again in Bengal. The
school of the Gosvamins, who were disciples of Chaitanya, developed an
elegant theology of aesthetic participation in the ritual enactment of
Krishna's life.
These
ritual dramas also developed around the village of Vrindaban itself during
the 16th century, and they were celebrated by Hindi poets. The first great
Hindi mystic poet was Kabir, who was said to be the child of a Muslim and
was strongly influenced by Islam, particularly by Sufism.
His poems challenge the canonical dogmas of both Hinduism and Islam,
praising Rama and promising salvation by the chanting of the holy name of
Rama. He was followed by Tulsidas, who wrote a beloved Hindi version of
the Ramayana. A contemporary of Tulsidas was Surdas, whose poems on
Krishna's life in Vrindaban formed the basis of the ras lilas,
local dramatizations of myths of the childhood of Krishna, which still
play an important part in the worship of Krishna in northern India.

E. 19th and 20th Centuries

In
the 19th century, important reforms took place under the auspices of
Ramakrishna, Vivekananda, and the sects of the Arya
Samaj and the Brahmo
Samaj. These movements attempted to reconcile traditional Hinduism
with the social reforms and political ideals of the day. So, too, the
nationalist leaders Sri Aurobindo Ghose and Mohandas Gandhi attempted to
draw from Hinduism those elements that would best serve their political
and social aims. Gandhi, for example, used his own brand of ahimsa,
transformed into passive resistance, to obtain reforms for the
Untouchables and to remove the British from India. Similarly, Bhimrao
Ramji Ambedkar revived the myth of the Brahmans who fell from their caste
and the tradition that Buddhism and Hinduism were once one, in order to
enable Untouchables to gain self-respect by reconverting to
Buddhism.
In
more recent times, numerous self-proclaimed Indian religious teachers have
migrated to Europe and the United States, where they have inspired large
followings. Some, such as the Hare Krishna sect founded by Bhaktivedanta,
claim to base themselves on classical Hindu practices. In India, Hinduism
thrives despite numerous reforms and shortcuts necessitated by the gradual
modernization and urbanization of Indian life. The myths endure in the
Hindi cinema, and the rituals survive not only in the temples but also in
the rites of passage. Thus, Hinduism, which sustained India through
centuries of foreign occupation and internal disruption, continues to
serve a vital function by giving passionate meaning and supportive form to
the lives of Hindus today. For information on religious violence in India,
see India.
The information above came from Microsoft Encarta (http://www.encarta.msn.com).
Spirituality
Home Page
"It deals with matters connected with Science, Spirituality,
Hinduism, Vedanta, Religion...This is a revised and enlarged version of my
monograph entitled SCIENCE AND SPIRITUALITY."
By Professor V. Krishnamurthy
|
There are also Hindus who worship all the Gods. There
are some Gods who are worshiped all over India like Rama and Krishna and
other Gods who are worshiped more in one region than the other like Ganesh
who is worshiped mainly in west India. Hindus also worship Gods according
to their personal needs. People who engage in wrestling, bodybuilding, and
other physical sports worship Hanuman, who in Hindu legend was an ape
with a lot of physical strength. Businessmen worship Lakshmi, Goddess of
wealth.
Though these Hindus worship different idols, many Hindus believe in one
God and perceive these different Gods and Goddesses as different images of
the same one God. According to their beliefs, idolatry is a
misinterpretation of Hinduism.
Hindus believe in reincarnation. The basic belief is that a person's
fate is determined according to his deeds. These deeds in Hinduism are
called 'Karma'. A soul who does good Karma in this life will be rewarded
with a better life in the next incarnation. Souls who do bad Karma will be
punished for their sins, if not in this incarnation then in the next, and
will continue to be born into this world again and again.
Good souls will be liberated from the circle of rebirth and attain
redemption, which is called 'Moksha', meaning freedom. Hindus normally
cremate their dead so that the soul of the deceased may go to heaven,
except in a few cases of Hindu saints, who are believed to have attained 'Moksha'.
The main Hindu books are the four Vedas. They are Rig Veda, Sama Veda,
Yajur Veda and Atharva Veda. The concluding portions of the Vedas are
called Upanishads. There are also other holy books like the Puranas, the
Ramayana, the Mahabharata, etc. The different Gods and Goddesses in Hindu mythology
are derived from these books.
|
yes
|
World Religions
|
Do Hindus believe in a single god?
|
yes_statement
|
"hindus" "believe" in a "single" "god".. hinduism teaches the belief in a "single" "god".
|
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7166242/
|
Perspectives of Hinduism and Zoroastrianism on abortion: a ...
|
Perspectives of Hinduism and Zoroastrianism on abortion: a comparative study between two pro-life ancient sisters
Assistant Professor, The James F. Drane Bioethics Institute, Edinboro University of Pennsylvania, Edinboro, Pennsylvania, USA; Department of Biology and Health Sciences, College of Science and Health Professions, Edinboro University of Pennsylvania, Edinboro, Pennsylvania, USA.
Copyright 2019 Medical Ethics and History of Medicine Research Center, Tehran University of Medical Sciences. All rights reserved.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, (http://creativecommons.org/licenses/by/3.0/) which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Hinduism and Zoroastrianism have strong historical bonds and share similar value-systems. As an instance, both of these religions are pro-life. Abortion has been explicitly mentioned in Zoroastrian Holy Scriptures including Avesta, Shayast-Nashayast and Arda Viraf Nameh. According to Zoroastrian moral teachings, abortion is evil for two reasons: killing an innocent and intrinsically good person, and the contamination caused by the dead body (Nashu). In Hinduism, the key concepts involving moral deliberations on abortion are Ahimsa, Karma and reincarnation. Accordingly, abortion deliberately disrupts the process of reincarnation, and killing an innocent human being is not only in contrast with the concept of Ahimsa, but also places a serious karmic burden on its agent. The most noteworthy similarity between Zoroastrianism and Hinduism is their pro-life approach. The concept of Asha in Zoroastrianism is like the concept of Dharma in Hinduism, referring to a superior law of the universe and the bright path of life for the believers. In terms of differences, Zoroastrianism is a religion boasting a God, a prophet, and a Holy book, while Hinduism lacks all these features. Instead of reincarnation and rebirth, Zoroastrianism, like Abrahamic religions, believes in the afterlife. Also, in contrast with the concept of Karma, in Zoroastrianism, Ahura Mazda can either punish or forgive sins.
Introduction
In the history of human civilization, religions have always been major sources of values with huge impacts on the life decisions of their followers. Originating in the dawn of human civilization, Zoroastrianism and Hinduism are two ancient traditions/religions that have adopted a pro-life approach with an emphasis on reverence for life. Although these two sister religions are not comparable in terms of the number of followers (see below), their approaches and perspectives are important and influential in the life decisions of countless people and families around the world.
Abortion is one of the first topics that appeared in the texts and scriptures related to medical ethics from the early days of this field in ancient times, and still is one of the most debated and divisive issues in the field of bioethics. Followers of religions always try to resolve issues such as abortion according to their religion and make their own and their families’ life decisions based on their religious normative approaches.
Zoroastrianism and Hinduism are two ancient inter-related traditions/religions with strong historical bonds that have developed and taken shape in neighboring countries and societies. Studying the similarities and differences between these two religious traditions with regard to an important life-related issue shows the divergent paths of traditions and religions that have the same (or very similar) origins, but have developed in different societies and locations (1).
This paper is the result of a library-based comparative study that has assessed the perspectives of these two religious traditions toward abortion.
The aim of this paper is to sketch and compare the perspectives of Zoroastrianism and Hinduism on abortion in the light of the unique specifics and characteristics of these two religious traditions, their moral teachings, and their bioethical approaches. For this purpose, these perspectives must be explained by exploring the main sources of Zoroastrian and Hindu bioethics. These sources may either pertain to the theoretical/conceptual teachings of these two religious traditions, or their practical approaches in the real world. By paying attention to the very pro-life nature of these two religious traditions one can clearly see that despite some major differences in the bases of their moral thoughts, both oppose abortion except for certain cases under very distinct conditions.
Two pro-life traditions and a life issue
Zoroastrianism and Hinduism both originated among Aryans after their migration to the Middle East and South Asia. Although the theory of the Indo-Aryan migration has also been the subject of scholarly criticism, the similarities and the existence of many common features between the Vedic and Avestan texts indicate a strong ancient interconnection (2). While these two religious traditions had been interconnected before and at the time of the Great Migration, they took separate paths after the settlement of their followers in different geographic areas. Regardless of the causes of this divergence, nowadays there are a lot of differences between these two religious traditions in addition to their original similarities.
1.Zoroastrianism
Zoroastrianism is an ancient Persian religion that was the official religion of the Persian Empire from 600 BCE to 650 CE (3). Estimations on the lifetime of the prophet of this religion, Zoroaster or Zarathustra (Zartosht in current Persian), vary between 8000 and 700 BCE. However, Moubed Dr. Jahangir Ashidari argues that according to historical facts and events, the most realistic estimate of the year of his birth may be 1768 BCE (4).
Zoroaster was born in the present-day Azerbaijan Province in Iran. He moved to Khorasan and the city of Balkh, where he declared his prophethood and was successful in establishing a new religion. The king of Balkh was among his followers at that time (4).
The most prominent source of Zoroastrian moral thoughts is the religion’s holy book named Avesta (5). Only a small part of the current Avesta is attributed to Zoroaster himself, as a scripture he brought and left among his people. This part is named Gatha and consists of mystical hymns and no concrete jurisprudential or ethical debates (6:155-205). The other parts of Avesta are as follow:
Yasna: This is the oldest and most important part of Avesta, and includes Gatha. It has been argued that this part of Avesta has been compiled at the same time as RigVeda (see the section on Hinduism below) and there are linguistic similarities between the two (5).
Yashtha: This part of Avesta is mostly poetic and includes verses of worship to Ahura Mazda and Amshaspandan (see below). Yashtha consists of poems and epics, and does not include moral or jurisprudential elements or teachings (5).
Visparad: Visparad means lords and leaders. This part of Avesta includes cosmological and ontological teachings. It also contains general moral wisdom for people, describing the best behavioral models for men and women (6).
Vandidad: This is the jurisprudential part of Avesta. It was compiled centuries after the death of Zoroaster and mostly explains how Zoroastrian clergy thought or acted in issuing jurisprudential decrees. Vandidad is partly related to medical issues such as abortion (5) (see below).
Khordeh Avesta: In 400 CE, Moubed Azarbad MehrAspand compiled this part of Avesta to teach Zoroastrian rituals to people. At that time, Zoroastrianism was the official religion of the Sassanids, who were the last dynasty before Islam and ruled over the Persian Empire for more than 200 years (5).
In addition to the Vandidad part of Avesta, there are other holy scriptures like Arda Viraf Nameh and Shayast-Nashayast that are rich in ethical and jurisprudential teachings. These have been compiled in the centuries after the lifetime of Zoroaster, mostly during the dominance and prevalence of Zoroastrianism in the Persian Empire, from about the 5th century BCE to the 7th century CE (7).
Through the seventh and eighth centuries CE, Persia gradually joined the Muslim world and the dominance of Zoroastrianism ended. Nevertheless, the cultural influence of this religion has persisted until contemporary times (8). Nowadays, the followers of Zoroastrianism mostly live in Iran, India (the Parsis) and Western countries. Estimations of the present population of Zoroastrians worldwide differ between 145,000 and 2.6 million (9). Beyond the community of its formal believers, the current and historical influences of Zoroastrianism on the Iranian culture and even the Iranian version of Shiite Islam have been significant. It has been argued that the Iranian/Persian culture is a mixture of three different heritages: The Islamic/Shiite religion/culture, the ancient Persian/Zoroastrian culture, and the impact of the Western/modern culture in recent centuries (10).
Some foundational features of Zoroastrianism that are very important in understanding the spirit of this religion and its bioethical perspectives are as follows:
Monism vs. Dualism
Zoroastrianism is a monotheistic religion, yet the dualism of Ahura Mazda and Ahriman in Zoroastrian cosmology has translated into a dualistic theological and moral outlook (4). Therefore, Zoroastrian morality is largely based on a type of dualism that believes in the timeless and everlasting combat between good (Ahura Mazda/Sepand Minu/Ashuns) and evil (Ahriman/Angra Minu/Doruj). It is noteworthy that Zoroastrianism, in its dualistic moral view, is more similar to Abrahamic religions than to Hinduism and other Asian religions (4).
According to the Zoroastrian dualistic view, Ahura Mazda created all the good in the universe, and Ahriman created all the evil (8). Human beings are also the creation of Ahura Mazda, and are therefore considered intrinsically good. However, they have the ability and autonomy to choose between good, which is in concordance with their nature, and evil, which is suggested and encouraged by Ahriman. Those who choose good follow Asha as the divine rule of existence and are called the Ashuns, while those who choose evil (Doruj) are named the Dorvands (followers of Doruj/evil/lie) (11).
According to the aforementioned beliefs and perspective, which consider every unborn human being as a creature of and a future soldier for Ahura Mazda, Zoroastrianism is a pro-life religion. Some of the newer parts of Avesta explain punishments and difficult steps for purgation of a person who has committed abortion (7).
Amshaspandan and Asha
Before the time of Zoroaster, the Aryans, including the group that moved to India and are called Hindus, used to worship multiple gods and goddesses. Zoroaster introduced a single God named Ahura Mazda, and the previous Aryan gods were then revived as the various reflections or faculties of that single God; these were named the Amshaspandan, and were inseparable from Ahura Mazda. Amshaspand means “the immortal pure” and Amshaspandan is the plural form of Amshaspand. This word is constituted of two parts: Amesha and Sepanta. Amesha means immortal and indestructible, and it also specifies everlasting and beneficent entities such as the four elements, the sun, and Houm (healing plant). Sepanta means generous, merciful, creator and pure (4).
According to Zoroastrian teachings, the Amshaspandan are as follows:
Asha: This is a very important concept in Zoroastrianism and is rather similar to the concept of Dharma in Hinduism (4). Asha means the eternal law, righteousness, and the unchanging rules of the universe and humanity. People who follow Asha and believe in it as the divine rule of existence are the Ashuns, while others who choose evil (Doruj) are the Dorvands (11).
Nashu: Being clean and pure is very important in Zoroastrian teachings and rituals (3). Nashu is uncleanliness or a demon, mainly attributed to dead bodies (3). Any person contaminated with Nashu should be cleaned through a set of sophisticated rituals including being washed with a liquid prepared from cow’s urine (3). Zoroastrians do not bury the bodies of the dead because they believe that this practice contaminates the soil. Instead, they leave corpses in places named dakhma to be eaten by wild animals and degraded by natural forces (3). Since an aborted fetus is a dead body, abortion is considered to contaminate the mother’s body with Nashu, which is a great sin (see below for further discussion) (7).
2. Hinduism
Claimed to be the oldest living religion in the world, Hinduism is a huge network of concepts, beliefs and rituals initiated more than two thousand years ago in ancient India. Today, Hinduism has about 900 million followers around the world. Most Hindus live in India and Nepal, but they also form large populations in other Asian countries such as Cambodia, Thailand, Burma and Indonesia. In addition, Hindus constitute sizeable minorities in developed countries like the United States and the United Kingdom.
The spiritual teachings of Hinduism and its sages and spiritual masters have had a great influence on Western cultures over recent decades. Hindu spirituality, in many direct and indirect forms, has changed the culture, spirituality and lifestyle of Western societies. One example is Yoga, which originated in Hindu traditions and has become very popular in Western countries over the past century.
It is interesting to explore the origin of the word “Hindu”. As a huge cultural network, Hinduism was born in ancient India, but the name “Hindu” was acquired in the medieval centuries to differentiate the religion from others such as Islam (12). As a matter of fact, the word “Hindu” comes from Persian literature. Persian geographers coined the name “Hindu” for people who lived beyond the river Indus (Sindhu) (13). Addition of the suffix “-ism” is a legacy of British colonialism in the 19th century.
In ancient India, Hinduism was traditionally called “Sanatana Dharma”, which connotes the most central concept in this tradition, but cannot be fully translated into English. However, some have chosen “eternal law” as an equivalent.
It is difficult, if not impossible, to try to find a set of essentials for all the sects, groups and denominations within the circle of Hinduism. One cannot specify a concept, belief, ritual or other element as the common - or defining - feature of this religion. In fact, features like reverence for Vedas (the ancient Scripture of Hinduism), believing in a system of values named Dharma, and even belonging to the Indian nation have been mentioned as unifying features of Hinduism, but none is common among all Hindus.
Therefore, Hinduism can be understood as a network of inter-related ideas without a single unifying feature. In fact, instead of one or a few essential common and all-embracing features, one can speak about a wide network with a series of overlapping similarities reminiscent of “family resemblance” as explicated by Ludwig Wittgenstein for defining other phenomena such as art (14).
Some scholars argue, however, that the concept of family resemblance cannot solve the problem of lack of common features in the search for Hindu moral principles. Although the above-mentioned “family resemblance” means that no single unifying essential feature can be found for Hinduism, some major characteristics can be identified, which are 1) common among most sects and branches of Hinduism, and 2) essential and representative of the nature and main directions, teachings, key concepts, and values of this tradition. A non-inclusive list of these characteristics is presented below.
Unity in the Midst of Plurality
One characteristic of Hinduism is the existence of numerous forms of supreme beings, as can be seen in the enormous number of deities. Shiva, Shakti, Vishnu, Ganapati, Surya, and Subrahmanya are the deities worshiped by different sects of Hinduism, but can be considered as different manifestations of a single supreme being. This interpretation of the Hindu tradition, which makes it similar to monotheistic religions, is compatible with a famous verse of Rigveda: “Reality is one; sages call it by different names”; or this verse of Bhagvad Gita: “Even those who are devoted to other gods and worship them in full faith, even they, O Kaunteya, worship none but Me” .
This plurality is not confined to the deities. For instance, Hinduism does not have a single founder, but seems to have been created and formed by accumulation of teachings and revelations of numerous sages, gurus and spiritual masters in ancient India (15).
This characteristic provides Hinduism with an inimitable flexibility and respect for plurality and diversity, which (alongside other qualities like the central concept of non-violence, Ahimsa) were very important in the history of this religion and that of India. For example, one can mention the historical acceptance of Jewish and Zoroastrian immigrants whose lands had been invaded by Romans and Muslim Arabs respectively. Another case in point is the specifics of the democracy founded by Mahatma Gandhi in this huge subcontinent with such a unique variety in cultures, religions, and ways of life.
The Concept of Dharma
Dharma holds the human community and the entire world together. As explained above, this concept is very similar to the concept of Asha in Zoroastrianism. In Hinduism, Dharma illuminates humans’ responsibilities and way of life. As mentioned above, in the ancient Indian subcontinent, the followers of Hinduism called their religion/tradition Sanatana Dharma in which the word Sanatana means eternal (15). Also, in Zoroastrianism, the people who are true followers of Zoroaster are called Ashun. Therefore, it seems that attributing followers to the eternal law is a common concept in both Zoroastrianism and Hinduism.
Concepts of Karma, Samsara, and Reincarnation
Karma is one of the most important concepts in Hindu ethics and morality. This concept denotes that a law of cause and effect rules the world of human deeds, both mentally and physically. Each action produces its own reaction in the world. Accordingly, a good action has a good reaction for the human agent in his/her current life or next lives, while a bad action will certainly bring about bad consequences, which, again, can take place in the current or subsequent lives of the human agent. This continuous cycle of action, reaction, birth, death and rebirth is called Samsara. This cycle is not endless. One can break the cycle of Samsara by good deeds that lead to salvation and getting out of the cycle. This salvation, called Muksha (or Nirvana in Buddhism and Jainism), is the ultimate goal of life. Therefore, the final purpose of Hindu ethics is salvation that is manifested in breaking the cycle of Samsara and entering the eternal salvation, sometimes named Muksha (15).
There are serious controversies among scholars on the existence of a Hindu Bioethics. Like other ancient civilizations, the Indian subcontinent had its own medicine and healing tradition called Ayurveda (the science of life), which was a sort of humoral medicine (16). The existence of this medicine and its rich literature, mixed with Hindu teachings and thoughts about humanity and morality, led some scholars to try to derive from it a kind of Hindu biomedical ethics. For example, the ancient Hindu stories about gods with human bodies and animal heads were used to conclude the permissibility of Xenotransplantation in Hindu bioethics (13).
Some scholars, however, do not agree with this method of constructing Hindu bioethics (17). They argue that the mere existence of these traditional schools of medicine in the mostly Hindu ancient Indian subcontinent does not imply that their literature mirrors Hindu bioethics (13).
The key point in this regard is that there is no consensus among Hindus on all of the concepts and principles attributed to this religion. This vast diversity, as mentioned above, is one of the most important characteristics of Hinduism. This characteristic reflects itself in Hindu ethics, applied ethics, and bioethics (13).
The main question is, how can all these sects and branches of Hinduism agree upon a set of principles for applied ethics, since they cover such a diverse variety of beliefs but have no common feature (such as a prophet or a holy book, as is the case with Christianity, Islam, or Buddhism)? Therefore, the existence of a Hindu bioethics with a distinct set of principles has been a subject of controversy and debate. Two kinds of efforts, however, have been made to solve this problem:
1. Some scholars have pointed out common concepts, like Karma, as the core and unifying concept of Hinduism and Hindu ethics. By doing so, however, they have broadened the scope of Hinduism in a way that even Buddhism and Jainism can be considered some sort of Hinduism. It is obvious that this is too wide-ranging to serve the purpose (17).
2. Some other scholars have tried to choose just one sect or group within the wide spectrum of Hinduism, and described Hindu ethics based only on the values and beliefs of that sect or group. They have been successful in finding a set of principles, but the results cannot be called “Hindu Bioethics” as they are too narrow in range (13).
The aforementioned endeavors, however, reveal a historically obvious fact: the impossibility of attributing a set of common and all-encompassing principles and values to Hindu morality and applied ethics does not mean it is impossible to speak about Hindu bioethics. Three main categories of sources can be used to delineate the content of Hindu bioethics, including its values, principles, teachings, and judgments. These categories are as follows:
1- Every system or set of values, moral principles and ethical deliberations that finds its roots in the Hindu religion/tradition can be considered and named Hindu ethics, regardless of how many Hindu sects and groups it is shared among. When it comes to value-judgments about medicine, healthcare and life sciences, these principles definitely shape Hindu bioethics. By the same token, we can reach a set of principles, concepts and values that are not all-encompassing and unifying, but still characterize this very brand of religious bioethics.
2- Ayurveda and other branches of Indian traditional medicine have been used as a rich source of Hindu reflections on Human life, death, suffering and so on. Ayurvedic classical texts like Caraka Samhita and Sustuta Samhita are among the sources of Hindu reflections about human body and self that have major implications for bioethics (16).
3- Deliberations and reflections of Hindu scholars on various bioethical issues provide another main source for delineating Hindu bioethics. Hindu scholars, sages and spiritual masters have discussed issues like abortion, futile treatment, organ transplantation, contraception and mercy killing. What they have written, taught or said is a rich source for studying Hindu bioethics. One can also infer the methods of Hindu bioethics by observing the ways in which Hindus have approached the above issues and reached judgments and conclusions about them.
In their bioethical deliberations, Hindu scholars appeal to Hindu concepts like Karma, Dharma (as described above), Ahimsa (non-violence) and respect for life and nature. They also appeal to classic texts and scriptures of the religion/tradition from the oldest existing ones, namely Vedas, to other essential ones like Upanishadha or Bhagvad Gita. One example of such references to classical scripture is described above on the issue of Xenotransplantation (17).
Hindu Bioethics should be seen as a lived experience. From ancient “Vedic healers” to modern healthcare professionals, numerous generations of physicians and clinical practitioners in the Indian subcontinent have sought the values and principles governing their practice in one of the oldest and richest religions and traditions in the world, that is, Hinduism. The spirit of the subcontinent shaped and determined the nature of this value system throughout its long history. This Indian spirit is what gives the Hindu bioethics a sort of unity in the midst of such vast and wide diversity.
Hinduism has its own perspective on fundamental aspects of human life. According to this perspective, the moral energy is preserved in the form of Karma, and death is not the opposite of life, but is the opposite of birth. This characteristic makes Hinduism different from Abrahamic religions in which the will of God determines the consequences of good or bad deeds, rather than a natural rule like Karma (14). In Hinduism, the ultimate purpose of human beings is liberation from the circle of birth, death and rebirth, instead of entering heaven as is the case in Abrahamic religions (15).
Obviously, none of the aforementioned features is unique to and common among all the sects of Hinduism. Altogether, however, these features are the different surfaces of an underlying spirit: the spirit of Hinduism, which is the spirit of the Indian subcontinent. This spirit has been the source of inspiration for successive generations of sages, gurus and spiritual masters.
The reverence for life and a strong tradition of non-violence (Ahimsa) has shaped the perspectives of Hindu bioethicists towards key bioethical issues like abortion, euthanasia and brain death (15).
Virtue ethics also exists in some Hindu ethical teachings. This approach to ethics focuses mainly on the moral agent instead of the act itself or its consequences. Accordingly, going through a process of self-purification results in achieving a moral character that always chooses to perform the ethically right deeds (18).
In the end, the practical results of this type of virtue ethics differ somewhat from those of its counterparts in the West or the Middle East. This difference is rooted in the spirit of Hinduism and the Indian subcontinent, and has a great impact on the moral character of the virtuous person.
In sum, one can conclude that despite the diversity, which is one of the main characteristics of Hinduism, it is possible to delineate some major concepts that shape the infrastructures of morality in this religion/tradition. In the same way, one can sketch the principal values and directions of Hindu bioethics. In addition, the present study has pointed out three main sources for bioethical endeavors within the Hindu tradition/religion:
Value-judgments and moral deliberations rooted in and performed within the Hindu tradition
Textbooks and the heritage of ancient Hindu medicine, including Ayurveda
Reflections and deliberations made by Hindu scholars on bioethical issues that have accumulated throughout a long history, including the modern era
Hindu bioethics can be sought and learned as the collective lived experiences of Hindus on traditional and modern issues that are of biomedical nature. These experiences, which have been accumulated collectively throughout the Indian subcontinent and have produced a huge body of literature, are the very nature and unifying umbrella that cover a long history of ethical and moral endeavors of a vast array of sects, branches and groups within the old religion/tradition of Hinduism.
The importance of the issue of abortion
Abortion is the intentional termination of the life of an unborn human embryo or fetus. This act is forbidden and considered inherently evil in all major religious traditions of the world. In the modern era, however, the situation has changed. Many factors have brought abortion to the top tier of the most heated ethical debates among the general public and scholars, and have prompted some moral and religious thinkers and authorities to reconsider the absolute evilness of abortion, at least in its indirect forms. The issue of population growth in a number of societies has caused some policy-makers to see abortion as a means of population control and prevention of unwanted and unplanned births.
The largest Hindu population in the world lives in the Indian subcontinent, the birthplace of Hinduism (12). In addition, Hinduism reflects the very spirit of the subcontinent. Therefore, when speaking about abortion in Hinduism, it is important to take a look at the realities of its geographical setting. Under Indian law, abortion is permitted until the twentieth week of pregnancy, and only for medical and a very limited number of social reasons.
One of the social reasons for a massive number of abortions in India is the gender of the fetus. When prenatal sex determination by ultrasound became available, many families aborted their unborn daughters to avoid the social and economic burdens of having a daughter, sometimes hoping to have baby boys in subsequent pregnancies.
The selective abortion of female fetuses has increased in India over the past few decades. The 2011 census showed 7.1 million fewer girls than boys aged younger than seven, up from 6 million in 2001 and 4.2 million in 1991. The sex ratio in this age group is now 915 girls to 1,000 boys, the lowest since such records began in India in 1961. Parents have little problem with their first child being a girl, but want their second to be a boy. In these families, the gender ratio for second births fell from 906 girls per 1,000 boys in 1990 to 836 in 2005, implying that an estimated 3.1 to 6 million female fetuses were aborted in the past decade. It has even been claimed that approximately eight million female fetuses may have been aborted in the past decade, which has been called a “national shame”.
Similarities
Abortion is explicitly mentioned in the Zoroastrian Holy Scriptures, including Avesta, Shayast-Nashayast and Arda Viraf Nameh. In addition to regarding abortion as evil and forbidding it, these books prescribe brutal punishments in the afterlife for women who commit abortion (7).
In addition to condemning abortion in the Holy Scriptures, Zoroastrianism provides moral reasoning, according to its own system of beliefs, for regarding abortion as evil. According to the Zoroastrian moral teachings, abortion is evil for two reasons: killing an innocent and intrinsically good person, and the contamination caused by the dead body (Nashu) (7).
On the other hand, as described above, the main sources of Hindu bioethics, which are its concepts and traditions, shape its approaches to ethical issues at the margins of life, including abortion. When it comes to the abortion debate, the principal concepts involving moral deliberations are Ahimsa, Karma, and reincarnation. Accordingly, abortion deliberately disrupts the process of reincarnation and kills an innocent human being; therefore, it is in contrast with the concept of Ahimsa and imposes serious karmic burdens on its agent. In addition, in major resources of Hinduism, abortion has been strongly condemned, which confirms the pro-life approach of this religion/tradition towards abortion. According to Hindu bioethics, abortion is allowed only in cases where it is necessary for saving the life of the mother. The perspective of Hinduism is a very pro-life one, emphasizing Ahimsa and its intrinsic reverence for life.
It should be mentioned that in addition to the similarities explained below, there are others in minor aspects such as rituals. For example, considering the cow as a sacred animal and using its urine for cleaning the body after abortion is common practice in both traditions/religions.
Dharma vs. Asha
The concept of Asha in Zoroastrianism is similar to the concept of Dharma in Hinduism. Both Asha and Dharma refer to a superior law of the universe and the bright path of life, which should be adopted by the believers.
In the Indian subcontinent, before their historical encounter with other religions and traditions, the followers of Hinduism called their religion/tradition Sanatana Dharma. The word Sanatana means eternal (15). Also, in Zoroastrianism, the true followers of Zoroaster are called Ashun. Therefore, it seems that attributing followers to the eternal law is a common concept between Zoroastrianism and Hinduism.
The approaches of these two religions to moral issues like abortion are consistent with this ontological view of the universe. The entire universe is created and ruled in accordance with Dharma/Asha, and all the people should follow these eternal rules. Morality ultimately means consistency and accordance with these higher entities. In both religions, abortion is a violation of the higher and sacred law of the Universe and existence. Therefore, abortion, like murder, robbery and other kinds of immoral behaviors, is wrong and unacceptable.
Reverence for life
The most noteworthy similarity between Zoroastrianism and Hinduism is their pro-life approaches. In both religions/traditions, abortion is considered murder and is forbidden.
Ayurveda and other branches of Indian traditional medicine have been used as a rich source of Hindu reflections on Human life, death, suffering and so on (15). Deliberations and reflections of Hindu scholars on different sorts of bioethical issues provide another main source for delineating Hindu bioethics.
In their bioethical deliberations, Hindu scholars appeal to Hindu concepts like Karma, Dharma (as described above), Ahimsa (non-violence) and respect for life and nature. They also appeal to the classic texts and scriptures of the religion/tradition from the oldest existing ones, namely Vedas, to other essential ones such as Upanishadha or Bhagvad Gita. (19)
Abortion is mentioned in early Vedic scriptures. For example, in the Brahmanas, the second major body of Vedic literature, abortion is considered a crime (19: 22-23), and the same approach is adopted by the Upanishads (19). Other classical scriptures of Hinduism have also expressed their opposition to abortion in several ways, for instance by comparing abortion with killing a priest, considering abortion a sin worse than killing one’s parents, and threatening the mother with the loss of her caste.
In the modern world, Hindu sages and scholars have continued to condemn abortion. As Mahatma Gandhi once wrote, “It seems to me clear as daylight that abortion is a crime.” It can be argued that the traditional concepts of reverence for life and non-violence (Ahimsa) have been most influential on the perspectives of Hindu bioethicists towards key bioethical issues such as abortion, euthanasia and brain death (15). As explained above, Ahimsa is a core concept in the approach of Hinduism to the issue of abortion, and it is based on the sacredness of all creatures as manifestations of the Supreme Being.
The reverence and love granted to all manifestations of life result from the very concept of Ahimsa, which has made the Hindu religion/tradition a strongly pro-life one. This pro-life attitude has found its way from Hinduism to other Asian religious traditions (18, 20).
In Zoroastrianism, abortion is regarded as killing an innocent and intrinsically good person. Concepts like Ahimsa do not exist in Zoroastrianism, but reverence for human life does. As explained above, morality in Zoroastrianism is based on a polarized account of the Universe as the everlasting battleground of good and evil, that is, Ahura Mazda and Ahriman (4). Since the human being is intrinsically good and has been created by Ahura Mazda, killing an unborn embryo or fetus is a violation against the forces of Ahura Mazda and a contribution to the forces of Ahriman. Therefore, abortion is considered a major sin. Accordingly, it is not surprising that the Holy Scripture of Zoroastrianism equates abortion with murder and prescribes punishments for persons who commit it. Also, in other parts of Avesta, there are revelations describing brutal punishments for such people in the afterlife (7).
Exceptions for the ban
When it comes to abortion, in addition to adopting a pro-life approach, both religions recognize some exceptions for their ban on abortion. In both traditions/religions abortion is permitted when the life of the mother is in danger. Therefore, both give priority to the mother’s life over the life of her unborn child.
As a matter of fact, although both Zoroastrianism and Hinduism ban abortion except for cases in which mothers’ lives are endangered, the bioethical bases of this ban in these two religions are different from each other. In Zoroastrianism, the ban is based on abortion being the same as killing an innocent person, and the contamination caused by the dead body. But in Hinduism, it is based on the law of Karma and depriving a person from one cycle of his or her rebirth. However, regardless of the theoretical bases and theological justifications, both religions give priority to the lives of the mothers over the lives of their unborn children.
The recognized exceptions raise a question about the moral status and personhood of the embryo. Although not mentioned directly in the original manuscripts, it seems that both religions grant a moral status to the human embryo from the very first stages of life. This attitude is similar to the perspective of the Catholic Church, which recognizes personhood from the time of conception. However, a minority of Hindus believe that incarnation takes place in the 7th month of pregnancy (21). Also, it has been shown that the majority of Zoroastrians are not against sperm and egg donation, which necessitates in vitro fertilization (22). This position makes Zoroastrianism different from classical Catholicism or other recent pro-life movements (23).
Differences
A comparative study is not complete without describing the differences between the subjects of comparison. Although Zoroastrianism and Hinduism are ancient sister religions that originated among the same group of people (the Aryans) after the Great Migration, their followers settled in two neighboring countries: Persia and India. Living in separate contexts and conditions naturally had its consequences. As mentioned above, Zoroastrianism is in many ways more similar to Abrahamic religions than to Dharmic ones. The main differences between these two religious traditions in terms of their perspectives on abortion are described below.
Unity vs. diversity
One of the main differences between Zoroastrianism and Hinduism lies in the very fact that Zoroastrianism is a religion with a God, a prophet, a Holy book, and, over long periods of its history, a single hierarchical order of clergy. Hinduism, however, lacks all these features. There is no single god, prophet, holy book or system of clergy shared among all the groups, sects and communities who call themselves Hindu. Therefore, in order to find the normative positions of Zoroastrianism, for example its perspective on abortion, one can rely on a single defined set of resources. In Hinduism, however, each expressed viewpoint belongs only to a number of believers and does not reflect the viewpoint of the religion/tradition as a whole. Considering this difference between the two religions is important when reading and understanding the scholarly works published in this regard.
In other words, Zoroastrianism is a typical religion, while Hinduism is a mixture of similar and interrelated traditions/religions. However, considering the family resemblance that ties the members of this group to each other, one can regard Hinduism as a unique, vast tradition reflecting the spirit of the Indian subcontinent.
Afterlife vs. reincarnation
One of the most important differences pertains to the concepts of rebirth and reincarnation. Unlike Hinduism, Zoroastrianism does not believe in reincarnation and rebirth, but believes in the afterlife, like Abrahamic religions.
Therefore, in Zoroastrianism, abortion is not considered as depriving a person of a cycle of human life, but as denying him or her the only chance of birth and enjoying life on earth.
Karma vs. omnipotent God
In Hinduism, killing a living creature, including a fetus, is regarded as interfering in its spiritual evolution. Such interference places karmic burdens on its agent. Therefore, according to the natural law of Karma, the agent(s) of such a crime will inevitably face its just punishment/retaliation in their current or next lives.
As an example of how the concept of Karma works with regard to abortion, it has been said that abortion is a kind of punishment for meat-eaters. The fetus was a meat-eater in his or her previous life, while the mother was a cow in her previous life and is now taking revenge according to the rules of nature. According to this belief, meat-eaters and other people who kill living beings cannot escape the retribution set by the laws of Karma; thus, in their next lives, they will have to undergo this misfortune, and may be recurrently aborted.
It is obvious that the karmic maleficence of abortion is closely related to reincarnation. The very belief that a human embryo is essentially a human person underlies the karmic effect attributed to abortion in Hinduism (19). The concept of Caraka (Caraka’s theory of causality) shows how the karmic burden/heritage of past lives is transferred to the unborn fetus (19). Therefore, killing the unborn child disrupts this process of transferring the Karma and imposes karmic burdens on its agent, who, by his/her act of abortion, has deprived the unborn baby of one of his or her chances to pursue salvation in a human life.
Cleanliness, on the other hand, is a central concept and an emphasized duty for believers in Zoroastrianism. One of the most offensive contaminants that can affect the cleanliness of the human body is a corpse. Accordingly, there are specific burial rituals in Zoroastrianism to prevent contamination of the soil, fire and living bodies by a corpse. According to Zoroastrian teachings, abortion exposes the body of the mother to contamination caused by the dead body of the aborted fetus. Therefore, in addition to abortion being forbidden, there is a multi-step ritual for purgation of the mother’s body, including washing her womb with a liquid made from cow urine (7).
The concept of Karma, as it exists in Hinduism, has no place in Zoroastrianism. Based on Zoroastrian teachings, Ahura Mazda can punish or forgive sins. Therefore, the punishment or forgiveness of bad deeds does not occur as the result of a natural law, but is attributed to Ahura Mazda, who can either punish or forgive the sinner (4). As a matter of fact, belief in an omnipotent God is not consistent with the concept of Karma, because accepting the inviolability of this concept as a natural law would tie the hands of God.
In Zoroastrianism, like Abrahamic religions, the omnipotent God defines what is good and what is evil, and punishes or forgives anyone He wants. Therefore, He is the one who can establish the immorality of abortion and offer punishment or forgiveness.
Conclusion
Zoroastrianism and Hinduism are similar to each other in adopting strong pro-life approaches to issues like abortion. Both these religious traditions ban abortion and allow it only if the life of the mother is threatened by continuation of the pregnancy. At the same time, they are fundamentally different in the conceptual and theological bases of their moral approaches.
Zoroastrianism provides moral reasoning for regarding abortion as evil according to its own system of beliefs. In Hindu bioethics, on the other hand, the principal concepts involved in moral deliberations on abortion are Ahimsa, Karma, and reincarnation. Accordingly, abortion, as deliberately disrupting the process of reincarnation and killing an innocent human being, conflicts with the concept of Ahimsa and brings serious karmic consequences for its agent. Hindu bioethics condemns abortion and allows it only in cases where it is necessary for saving the life of the mother.
The concept of Asha in Zoroastrianism is similar to the concept of Dharma in Hinduism. Both these concepts refer to a superior law of the universe and the bright path of life. The most noteworthy similarity between Zoroastrianism and Hinduism, however, is their pro-life approach.
The perspectives of Hindu bioethicists on key bioethical issues such as abortion have been shaped by a strong tradition of non-violence (Ahimsa) and an immense reverence for life. Ahimsa is a core concept in the approach of Hinduism to the issue of abortion and is based on the sacredness of all creatures as manifestations of the Supreme Being. In Zoroastrianism, abortion is regarded as killing an innocent and intrinsically good person. Since the human being is essentially good and has been created by Ahura Mazda, killing an unborn child is a violation against the forces of Ahura Mazda and an aid to the forces of Ahriman, and hence a major sin. Therefore, both these religions have adopted a pro-life approach toward the abortion debate.
In both traditions/religions abortion is permitted when the life of the mother is in danger. Therefore, both give priority to the mother’s life over the life of her unborn child.
One of the main differences between Zoroastrianism and Hinduism is the fact that Zoroastrianism is a religion with a God, a prophet, a Holy book, and, for long periods of its history, a single hierarchical order of clergy. Hinduism, however, lacks all these features.
Another important difference between Zoroastrianism and Hinduism is related to the concepts of rebirth and reincarnation. Like Abrahamic religions, Zoroastrianism believes in the afterlife. Therefore, in Zoroastrianism, abortion is not considered as depriving a person of a cycle of human life, but as depriving him or her of the only chance to be born and enjoy life on earth.
In Hinduism, killing a living creature, including a fetus, is regarded as interfering in its spiritual evolution, and places Karmic burdens on its agent. Therefore, according to the natural law of Karma, the agent(s) of such a crime will definitely encounter just punishment/retaliation in their current or next lives.
The concept of Karma, as advocated by Hinduism, has no place in Zoroastrianism. In Zoroastrianism, punishment and forgiveness of bad deeds are not the result of a natural law, but are administered by Ahura Mazda, who can either punish or forgive the sinner.
In sum, one can conclude that Zoroastrianism is similar to Abrahamic religions in its approach to abortion, and this is what makes it different from its Dharmic sister, Hinduism. Although both these ancient sister religions have adopted pro-life approaches, they are very different in many aspects and features. Analyzing the historical course and reasons for the emergence of these differences can be a subject for further studies in the future.
Acknowledgements
The author of this article would like to extend his appreciation to Dr. Joris Gielen for his invaluable guiding comments. Also, special thanks go to the reviewers of the Journal of Medical Ethics and History of Medicine for their helpful insights.
|
Although the above-mentioned “family resemblance” means that no single unifying essential feature can be found for Hinduism, some major characteristics can be identified, which are 1) common among most sects and branches of Hinduism, and 2) essential and representative of the nature and main directions, teachings, key concepts, and values of this tradition. A non-exhaustive list of these characteristics is presented below.
Unity in the Midst of Plurality
One characteristic of Hinduism is the existence of numerous forms of supreme beings, as can be seen in the enormous number of deities. Shiva, Shakti, Vishnu, Ganapati, Surya, and Subrahmanya are the deities worshiped by different sects of Hinduism, but they can be considered different manifestations of a single supreme being. This interpretation of the Hindu tradition, which makes it similar to monotheistic religions, is compatible with a famous verse of the Rigveda: “Reality is one; sages call it by different names”; or this verse of the Bhagavad Gita: “Even those who are devoted to other gods and worship them in full faith, even they, O Kaunteya, worship none but Me”.
This plurality is not confined to the deities. For instance, Hinduism does not have a single founder, but seems to have been created and formed by accumulation of teachings and revelations of numerous sages, gurus and spiritual masters in ancient India (15).
This characteristic provides Hinduism with an inimitable flexibility and respect for plurality and diversity, which (alongside other qualities like the central concept of non-violence, Ahimsa) have been very important in the history of this religion and that of India. For example, one can mention the historical acceptance of Jewish and Zoroastrian immigrants whose lands had been invaded by Romans and Muslim Arabs respectively. Another case in point is the democracy founded by Mahatma Gandhi in this huge subcontinent, with its unique variety of cultures, religions, and ways of life.
|
yes
|
World Religions
|
Do Hindus believe in a single god?
|
yes_statement
|
"hindus" "believe" in a "single" "god".. hinduism teaches the belief in a "single" "god".
|
https://www.gettysburg.edu/offices/religious-spiritual-life/world-religions-101/what-is-hinduism
|
What is Hinduism? - Center for Religious & Spiritual Life ...
|
What is Hinduism?
How and when did Hinduism begin? While there is no shortage of historical scholars, sages, and teachers in Hinduism, there is no historical founder of the religion as a whole, no figure comparable to Jesus, the Buddha, Abraham, or Muhammad. As a consequence, there is no firm date of origin for Hinduism, either. The earliest known sacred texts of Hinduism, the Vedas, date back to at least 3000 BCE, but some date them back even further, to 8000-6000 BCE; and some Hindus themselves believe these texts to be of divine origin, and therefore timeless.
Related to this, it is worth mentioning here that there is no designated religious hierarchy that determines official Hindu doctrine or practice. Thus, there is no one who can speak for Hindus as a whole, and no single authority regarding what is “truly” Hindu or not. Nevertheless, below is a list of principles that, by practitioner consensus, characterize one as “Hindu.”
Sacred Texts of Hinduism
There is no single, authoritative text in Hinduism that functions like the Bible for Christians, or the Qur’an for Muslims. Instead, there are several different collections of texts. The Vedas are the oldest Hindu sacred texts, and have the most wide-ranging authority. They are believed to have been written anywhere from 1800 to 1200 BCE. The Upanishads describe a more philosophical and theoretical approach to the practice of Hinduism and were written roughly between 800 and 400 BCE, around the same time that the Buddha lived and taught. The Mahabharata is the longest epic poem in the world, the most well-known portion of which is the Bhagavad-Gita, which is perhaps the best-known and widely cited book in all of Hinduism; the Ramayana is the other most important epic poem in Hinduism.
Gods in Hinduism
Hinduism encompasses a lush, expansive understanding of the divine accommodating a vast assortment of dynamic and multifaceted concepts. Hinduism sees the divine as not either one or many, but both; not male or female, but both; not formless or embodied, but both. Some of the most important deities in Hinduism are Vishnu, Shiva, Ganesha, Krishna, Sarasvati, Durga, and Kali.
As a result, there are dozens upon dozens of Hindu festivals honoring and celebrating these multitudinous divinities. Some are celebrated throughout India, and many more are primarily regional. They mark specific seasons, specific events in the lives of the different gods and goddesses, and specific concerns of life—wealth, health, fertility, etc. Two of the most well-known in the United States are Divali and Holi.
Divali, the festival of lights that falls somewhere in October or November, honors Lakshmi, the goddess of wealth and good fortune, and lasts roughly four to five days. Families often visit the temple during this time and make offerings to Lakshmi there, but they also worship at home, perhaps even arranging a special place on their home altar for Lakshmi. Doors are left open to welcome her into the house, and the whole period of celebration is a time of great joy, in which Hindus fill their houses with light.
Holi is celebrated with great abandon and gusto all over India. It inaugurates the coming of spring and is celebrated primarily by throwing colored paste and water on anyone who happens to be out walking around. It, too, is celebrated over a period of days.
Hindu Worship
For Hindus, there is no weekly worship service, no set day or time in which a community is called to gather publicly. Although most Hindus do visit temples regularly, or at least occasionally, to pray and make offerings, a “good” Hindu need never worship in public. Instead, all worship can be performed to icons in the home shrine, which is why the home is a very important place of worship in India.
The best word that describes and summarizes Hindu worship is puja, which means respect, homage, or worship. Most—if not all—Hindus have small altars at home on which they place pictures and/or statues representing different deities, including those to whom the family is particularly devoted. Each morning, one member of the family, usually the father or the mother, will perform a short puja at the altar. This may include saying prayers, lighting a lamp, burning incense, making offerings of fruit and flowers, and ringing a bell. The goal in this worship is to please the gods through all five senses.
Much the same thing happens in temple worship, though the rituals are much more elaborate there, since deities are believed to inhabit the temple images at all times, rather than just when invited, as in a home puja. In temple worship, the priest performs the puja, then on behalf of the god he returns to the people some of what they first brought as offerings—food, flowers, etc. This is called prasad, which means grace, goodwill, or blessing. In this way, the offerings are then received back by the devotees as a blessing. So, for example, small morsels of food are eaten, flowers are worn in the hair, incense is wafted around one’s body, holy water sipped, and colored powders are mixed with water and used to make a tilak, a mark in the center of the forehead above the eyes.
|
Some of the most important deities in Hinduism are Vishnu, Shiva, Ganesha, Krishna, Sarasvati, Durga, and Kali.
As a result, there are dozens upon dozens of Hindu festivals honoring and celebrating these multitudinous divinities. Some are celebrated throughout India, and many more are primarily regional. They mark specific seasons, specific events in the lives of the different gods and goddesses, and specific concerns of life—wealth, health, fertility, etc. Two of the most well-known in the United States are Divali and Holi.
Divali, the festival of lights that falls somewhere in October or November, honors Lakshmi, the goddess of wealth and good fortune, and lasts roughly four to five days. Families often visit the temple during this time and make offerings to Lakshmi there, but they also worship at home, perhaps even arranging a special place on their home altar for Lakshmi. Doors are left open to welcome her into the house, and the whole period of celebration is a time of great joy, in which Hindus fill their houses with light.
Holi is celebrated with great abandon and gusto all over India. It inaugurates the coming of spring and is celebrated primarily by throwing colored paste and water on anyone who happens to be out walking around. It, too, is celebrated over a period of days.
Hindu Worship
For Hindus, there is no weekly worship service, no set day or time in which a community is called to gather publicly. Although most Hindus do visit temples regularly, or at least occasionally, to pray and make offerings, a “good” Hindu need never worship in public. Instead, all worship can be performed to icons in the home shrine, which is why the home is a very important place of worship in India.
The best word that describes and summarizes Hindu worship is puja, which means respect, homage, or worship. Most—if not all—Hindus have small altars at home on which they place pictures and/or statues representing different deities, including those to whom the family is particularly devoted.
|
no
|
World Religions
|
Do Hindus believe in a single god?
|
yes_statement
|
"hindus" "believe" in a "single" "god".. hinduism teaches the belief in a "single" "god".
|
https://daytontemple.com/hinduism/
|
Hinduism – Hindu Temple of Dayton
|
The following key beliefs, though not exhaustive, offer a simple summary of Hindu spirituality.
Hindus believe in a one, all-pervasive Supreme Being, though they call it by many names.
Hindus believe in the divinity of the four Vedas, the world's most ancient scripture, and venerate the Agamas as equally revealed. These primordial hymns are God's word and the bedrock of Hindu Dharma, the eternal religion.
Hindus believe that the universe undergoes endless cycles of creation, preservation and dissolution. There is no eternal hell, no damnation, in Hinduism, and no intrinsic evil--no satanic force that opposes the will of God.
Hindus believe in karma, the law of cause and effect by which each individual creates his own destiny by his thoughts, words and deeds.
Hindus believe that the soul reincarnates, evolving through many births until all karmas have been resolved, and moksha, liberation from the cycle of rebirth, is attained. Not a single soul will be deprived of this destiny.
Hindus believe that all life is sacred, to be loved and revered, and therefore practice ahimsa, noninjury, in thought, word and deed.
Hindus believe that no religion teaches the only way to salvation above all others, but that all genuine paths are facets of God's Light, deserving tolerance and understanding.
Hinduism is not just a faith. It is the union of reason and intuition that cannot be defined but is only to be experienced. ― Dr. S. Radhakrishnan (A prominent Hindu scholar and second President of India).
History of Hinduism
Hinduism has no date of origin. The authors and dates of most Hindu sacred texts are unknown, although the oldest texts (the Vedas) are estimated to date from as early as 1500 BCE. Scholars describe Hinduism as the product of religious development in India that spans nearly 4,000 years, making it perhaps the oldest surviving world religion. The word “Hindu” essentially comes from the word Sindhu. Anyone who is born in the land of Sindhu is a Hindu.
Hinduism has no human founder. It is a mystical religion, leading the devotee to personally experience and seek the Truth within, finally reaching the pinnacle of consciousness where man and God are one.
There are an estimated 1 billion Hindus worldwide, making Hinduism the third largest religion after Christianity and Islam. About 80 percent of India's population regard themselves as Hindus and 30 million more Hindus live outside of India.
Hindu's view of One God
Hinduism is both monotheistic and henotheistic. Hindus were never polytheistic, in the sense that there are many equal Gods. Henotheism (literally "one God") better defines the Hindu view. It means the worship of one God without denying the existence of other Gods.
We Hindus believe in the one all-pervasive God who energizes the entire universe. This view of God as existing in and giving life to all things is called panentheism.
Hindus also believe in many Gods who perform various functions, like executives in a large corporation. They should not be confused with the Supreme God.
|
Hindus believe that all life is sacred, to be loved and revered, and therefore practice ahimsa, noninjury, in thought, word and deed.
Hindus believe that no religion teaches the only way to salvation above all others, but that all genuine paths are facets of God's Light, deserving tolerance and understanding.
Hinduism is not just a faith. It is the union of reason and intuition that cannot be defined but is only to be experienced. ― Dr. S. Radhakrishnan (A prominent Hindu scholar and second President of India).
History of Hinduism
Hinduism has no date of origin. The authors and dates of most Hindu sacred texts are unknown, although the oldest texts (the Vedas) are estimated to date from as early as 1500 BCE. Scholars describe Hinduism as the product of religious development in India that spans nearly 4,000 years, making it perhaps the oldest surviving world religion. The word “Hindu” essentially comes from the word Sindhu. Anyone who is born in the land of Sindhu is a Hindu.
Hinduism has no human founder. It is a mystical religion, leading the devotee to personally experience and seek the Truth within, finally reaching the pinnacle of consciousness where man and God are one.
There are an estimated 1 billion Hindus worldwide, making Hinduism the third largest religion after Christianity and Islam. About 80 percent of India's population regard themselves as Hindus and 30 million more Hindus live outside of India.
Hindu's view of One God
Hinduism is both monotheistic and henotheistic. Hindus were never polytheistic, in the sense that there are many equal Gods. Henotheism (literally "one God") better defines the Hindu view. It means the worship of one God without denying the existence of other Gods.
We Hindus believe in the one all-pervasive God who energizes the entire universe. This view of God as existing in and giving life to all things is called panentheism.
|
yes
|
World Religions
|
Do Hindus believe in a single god?
|
yes_statement
|
"hindus" "believe" in a "single" "god".. hinduism teaches the belief in a "single" "god".
|
https://www.namb.net/apologetics/resource/hinduism/
|
Hinduism - Apologetics
|
Hinduism
Adherents: Worldwide: 820 million; India: 784 million; Bangladesh: 13.4 million; Nepal: 20 million; Indonesia: 4.3 million; Sri Lanka: 2.8 million; Pakistan: 2.6 million. In Fiji, Guyana, Mauritius, Surinam, and Trinidad and Tobago, over 20 percent of their people practice Hinduism. A considerable number of Hindus live in the African Continent, Myanmar, and the United Kingdom. United States and Canada: Estimated 1.2 to 1.5 million.
Scriptures:Vedas, Upanishads, The Epics, Puranas, and The Bhagavad Gita explain the essence of Hinduism.
Hinduism is the world’s oldest living organized religion. It is a complex family of sects whose copious scriptures, written over a period of almost 2,000 years (1500 BC-AD 250), allow a diverse belief system. Hinduism has no single creed and recognizes no final truth. At its core, Hinduism has a pagan background in which the forces of nature and human heroes are personified as gods and goddesses. They are worshiped with prayers and offerings.
Hindu worship has an almost endless variety with color symbolism, offerings, fasting, and dance as integral parts. Most Hindus daily worship an image of their chosen deity, with chants (mantras), flowers, and incense. Worship, whether in a home or temple, is primarily individualistic rather than congregational.
Hinduism can be divided into Popular Hinduism, characterized by the worship of gods, through offerings, rituals, and prayers; and Philosophical Hinduism, the complex belief system understood by those who can study ancient texts, meditate, and practice yoga.
God
God (Brahman) is the one impersonal, ultimate, but unknowable, spiritual reality. Sectarian Hinduism personalizes Brahman as Brahma (creator, with four heads symbolizing creative energy), Vishnu (preserver, the god of stability and control), and Shiva (destroyer, god of endings). Most Hindus worship two of Vishnu’s 10 mythical incarnations: Krishna and Rama. On special occasions, Hindus may worship other gods, as well as family and individual deities. Hindus claim that there are 330 million gods. In Hinduism, belief in astrology, evil spirits, and curses also prevails.
Christian Response: If God (ultimate reality) is impersonal, then the impersonal must be greater than the personal. Our life experiences reveal that the personal is of more value than the impersonal. Even Hindus treat their children as having more value than a rock in a field.
The Bible teaches that God is personal and describes Him as having personal attributes. The Bible regularly describes God in ways used to describe human personality. God talks, rebukes, feels, becomes angry, is jealous, laughs, loves, and even has a personal name (Gen. 1:3; 6:6, 12; Ex. 3:15; 16:12; 20:5; Lev. 20:23; Deut. 5:9; 1 Sam. 26:19; Pss. 2:4; 59:9; Hos. 1:8-9; Amos 9:4; Zeph. 3:17). The Bible also warns Christians to avoid all forms of idolatry (Gen. 35:2; Ex. 23:13; Josh. 23:7; Ezek. 20:7; 1 Cor. 10:20). No idol or pagan deity is a representation of the true God. They are all false deities and must be rejected.
Creation
Hindus accept various forms of pantheism and reject the Christian doctrine of creation. According to Hinduism, Brahman alone exists; everything is ultimately an illusion (maya). God emanated itself to cause the illusion of creation. There is no beginning or conclusion to creation, only endless repetitions or cycles of creation and destruction. History has little value since it is based on an illusion.
Christian Response: Christianity affirms the reality of the material world and the genuineness of God’s creation. The Bible declares that all is not God. God is present in His creation but He is not to be confused with it. The Bible teaches that in the beginning God created that which was not God (Gen. 1:1-3; Heb 11:3). The Bible contradicts pantheism by teaching creation rather than pantheistic emanation. The Bible issues strong warnings to those who confuse God with His creation (Rom. 1:22-23). God created the world at a definite time and will consummate His creation (2 Pet. 3:12-13). Christianity is founded upon the historical event of God’s incarnation in Jesus Christ (John 1:1-14).
Man
The eternal soul (atman) of man is a manifestation or “spark” of Brahman mysteriously trapped in the physical body. Samsara, repeated lives or reincarnations, are required before the soul can be liberated (moksha) from the body. An individual’s present life is determined by the law of karma (actions, words, and thoughts in previous lifetimes). The physical body is ultimately an illusion (maya) with little inherent or permanent worth. Bodies generally are cremated, and the eternal soul goes to an intermediate state of punishment or reward before rebirth in another body. Rebirths are experienced until karma has been removed to allow the soul’s re-absorption into Brahman.
Christian Response: People are created in God’s image (Gen. 1:27). The body’s physical resurrection and eternal worth are emphasized in John 2:18-22 and 1 Corinthians 15. The Bible declares, “And as it is appointed unto men once to die, but after this the judgment: so Christ was once offered to bear the sins of many” (Heb. 9:27-28, KJV). Since we only die once, reincarnation cannot be true. Instead of reincarnation, the Bible teaches resurrection (John 5:25). At death, Christians enjoy a state of conscious fellowship with Christ (Matt. 22:32; 2 Cor. 5:8; Phil. 1:23) to await the resurrection and heavenly reward. A person’s eternal destiny is determined by his or her acceptance or rejection of Jesus Christ as Savior and Lord (John 3:36; Rom. 10:9-10).
Sin
Hindus have no concept of rebellion against a Holy God. Ignorance of unity with Brahman, desire, and violation of dharma (one’s social duty) are humanity’s problems.
Christian Response: Sin is not ignorance of unity with Brahman, but is rather a willful act of rebellion against God and His commandments (Eccl. 7:20; Rom. 1:28-32; 2:1-16; 3:9,19; 11:32; Gal. 3:22; 1 John 1:8-10). The Bible declares, “All have sinned and fall short of the glory of God” (Rom. 3:23, NIV).
Salvation
There is no clear concept of salvation in Hinduism. Moksha (freedom from infinite being and selfhood and final self-realization of the truth), is the goal of existence. Yoga and meditation (especially raja-yoga) taught by a guru (religious teacher) is one way to attain moksha. The other valid paths for moksha are: the way of works (karma marga), the way of knowledge (jnana marga), or the way of love and devotion (bhakti marga). Hindus hope to eventually get off the cycle of reincarnation. They believe the illusion of personal existence will end and they will become one with the impersonal God.
Christian Response: Salvation is a gift from God through faith in Jesus Christ (Eph. 2:8-10). Belief in reincarnation opposes the teaching of the Bible (Heb. 9:27). The Christian hope of eternal life means that all true believers in Christ will not only have personal existence but personal fellowship with God. It is impossible to earn one’s salvation by good works (Titus 3:3-7). Religious deeds and exercises cannot save (Matt. 7:22-23; Rom 9:32; Gal. 2:16; Eph 2:8-9).
Stress the necessity of following Jesus to the exclusion of all other deities.
Keep the gospel presentation Christ-centered.
Share the assurance of salvation that God’s grace gives you and about your hope in the resurrection. Make sure you communicate that your assurance is derived from God’s grace and not from your good works or your ability to be spiritual (1 John 5:13).
Give a copy of the New Testament. If a Hindu desires to study the Bible, begin with the Gospel of John. Point out passages that explain salvation.
|
At its core, Hinduism has a pagan background in which the forces of nature and human heroes are personified as gods and goddesses. They are worshiped with prayers and offerings.
Hindu worship has an almost endless variety with color symbolism, offerings, fasting, and dance as integral parts. Most Hindus daily worship an image of their chosen deity, with chants (mantras), flowers, and incense. Worship, whether in a home or temple, is primarily individualistic rather than congregational.
Hinduism can be divided into Popular Hinduism, characterized by the worship of gods, through offerings, rituals, and prayers; and Philosophical Hinduism, the complex belief system understood by those who can study ancient texts, meditate, and practice yoga.
God
God (Brahman) is the one impersonal, ultimate, but unknowable, spiritual reality. Sectarian Hinduism personalizes Brahman as Brahma (creator, with four heads symbolizing creative energy), Vishnu (preserver, the god of stability and control), and Shiva (destroyer, god of endings). Most Hindus worship two of Vishnu’s 10 mythical incarnations: Krishna and Rama. On special occasions, Hindus may worship other gods, as well as family and individual deities. Hindus claim that there are 330 million gods. In Hinduism, belief in astrology, evil spirits, and curses also prevails.
Christian Response: If God (ultimate reality) is impersonal, then the impersonal must be greater than the personal. Our life experiences reveal that the personal is of more value than the impersonal. Even Hindus treat their children as having more value than a rock in a field.
The Bible teaches that God is personal and describes Him as having personal attributes. The Bible regularly describes God in ways used to describe human personality.
|
no
|
World Religions
|
Do Hindus believe in a single god?
|
yes_statement
|
"hindus" "believe" in a "single" "god".. hinduism teaches the belief in a "single" "god".
|
https://www.ligonier.org/learn/articles/field-guide-on-false-teaching-hinduism
|
What Is Hinduism?
|
What Is Hinduism?
What is Hinduism?
The religion known as Hinduism is actually a collection of several associated religious traditions that originated in ancient India. The third-largest religion in the world, Hinduism today has more than nine hundred million adherents. Like Buddhism, Hinduism is a monistic religion, which means that it sees all reality as ultimately one. Hindus seek oneness with the Ultimate Reality or Spirit (Brahman). Unlike Buddhism, modern Hinduism tends toward henotheism. Henotheism is the worship of one supreme god, together with manifestations (i.e., avatars) of that god in a plurality of gods and goddesses.1 In Hinduism, religion and society are inseparably connected in a caste system—a fixed social hierarchy. There are four main branches of Hinduism: Vaishnavism, Shaivism, Shaktism, and Smartism. However, Hinduism is an incredibly large and diverse religion, and there is much variety of belief and practice within each of its main branches.
When did it begin?
The word Hindu refers to the land and inhabitants surrounding the Indus River. References to this region in Hindu scriptures have led scholars to conclude that northern India was the birthplace of Hinduism. The absence of a single founding figure distinguishes Hinduism from almost every other world religion. While Hinduism has a set of sacred writings, they are not viewed as divine revelation in the same way that Christians view the Bible as divine revelation or in the way Muslims affirm that the Qur’an is divine revelation. Hinduism originated between 2000 and 1500 BC, making it one of the world’s oldest religions. Hindu beliefs and practices originally spread and were passed down via oral tradition. The earliest body of Hindu sacred writings is the Vedas—from a Sanskrit word meaning “knowledge” or “wisdom”—which take the form of ancient hymns. The Vedas comprise four books—the Rig-Veda, Sama-Veda, Yajur-Veda, and Atharva-Veda. The Rig-Veda is the most ancient of the Vedas. The concluding portions of the Vedas, known as the Upanishads, cover philosophical topics and are the foundational texts for most Hindu spiritual study. The most well-known Hindu text is the Bhagavad Gita, which is part of the ancient Hindu epic Mahabharata. The Bhagavad Gita contains the essence of Hindu devotional teaching.
Who are the key figures?
The eighth-century philosopher Adi Shankara unified Hinduism through a careful study of the Vedas and Upanishads. He is author of the Hindu saying “Atman is Brahman,” which encapsulates the idea that each individual soul (atman) is finally one with the Ultimate Spirit (Brahman).
The nineteenth-century monk Swami Vivekananda represented Hinduism at the World Parliament of Religions in Chicago in 1893. He brought about significant reform in the caste system.
Mohandas Gandhi is arguably the most well-known Hindu to modern people. He is renowned for his teaching on nonviolent civil disobedience to achieve social and political reform in India in the early to mid-twentieth century.
Among popular figures, the Beatles’ George Harrison was a Hindu convert, as are actress Julia Roberts and actor Russell Brand.
What are the main beliefs?
One and many gods. Hindus believe in one impersonal god or Ultimate Reality—Brahman—while affirming the existence of a plurality of gods and goddesses. There are three chief manifestations of Brahman—Brahma, Vishnu, and Shiva—from whom all other gods and goddesses are incarnate manifestations. Brahma, the creator god, is largely ignored in modern Hinduism, while Vishnu, the preserving god, and Shiva, the destroying god, have many worshipers. Many Hindus also render their primary devotion not to Vishnu or Shiva but to Shakti, a feminine representative of Brahman that manifests herself as many different goddesses. For all practical purposes, popular Hindu devotion identifies Vishnu, Shiva, or Shakti as Brahman depending on the Hindu tradition followed. All Hindus believe that Brahman manifests itself in a multitude of avatars—earthly incarnations of gods and goddesses. It has often been said that there are 330 million gods and goddesses (avatars) in Hinduism. This number should not be taken literally but “is an exaggeration meant to emphasize the multitude of the gods.”2
Dharma. The concept of dharma is central to Hinduism. Although it is difficult to translate, dharma represents Hindu duty, conduct, law, order, religion, virtue, justice, and morality. It plays a significant role in the Indian caste system. Each caste has its own rules and regulations by which members must abide. Dharma is related to karma and the cycle of rebirth or reincarnation, as faithful observance of particular duties is necessary for moving into a higher caste in the next life. A person may not move out of the caste, essentially one’s social class, into which he was born during his lifetime.
Karma. The doctrine of karma is the backbone of the religious and social system of Hinduism.3 Karma says that whatever someone has—whether physical appearance, financial status, personality, health, or sorrow—is a result of his past life. One goes through the cycle of reincarnation based on his dharma in a previous life. If someone gives himself to vice and moral degeneration, he will not be destroyed or cease to exist. Rather, he will continue in the cycle of reincarnation—as long as necessary—until his soul reaches nirvana and he becomes one with the Ultimate Reality. If someone lives a life of bad dharma, he will be reborn in a lower caste or as a lower life form in the next cycle.
Why do people believe this form of false teaching?
The spread of Hinduism is due in large part to its antiquity and to its comprehensiveness. Its ideology encompasses the totality of an individual’s familial, social, and religious life, making departure difficult and costly. The Brahmins (priests and teachers of the highest caste) exercise power over the lives of those in lower castes, confining them in the system. In the Western world, elements of Hinduism have spread through the popularity of yoga in gyms and exercise programs. Western popular culture has also long been fascinated by Eastern religions such as Hinduism. For instance, the Beatles popularized Hindu ideas through their travels to India and their advocacy of Hindu-influenced Transcendental Meditation during the 1960s.
How does it hold up against biblical Christianity?
Only one God. Contrary to Hinduism, the Bible reveals that there is only one true and living God. This true God is a personal being. He does not change (Mal. 3:6). The one God subsists in three persons—the Father, Son, and Holy Spirit—who are each fully divine and yet distinct from one another according to each one’s unique personal property. The Son is not an avatar of the Father, and the Father did not become the incarnate Son. Rather, the person of the Son of God united a sinless human nature to His eternal divine nature, thereby becoming the God-man. The Father, Son, and Spirit eternally exist as the one true God. When the New Testament speaks of the members of the Godhead, it places them side by side, distinguishing them according to their personal properties while maintaining that they are identical in terms of the one divine essence (1 Cor. 8:6; 12:4–6; 2 Cor. 13:14; 2 Thess. 2:13–14; 1 Peter 1:2; 1 John 5:4–6; Rev. 1:4–6).
Law and grace. The Bible contains prescriptive duties, laws, rituals, and principles of virtue, justice, and morality. In His law, God reveals His will for the conduct of His people. However, no one is saved by attempting to keep the law. All people, except Christ, are fallen and unable to please God by nature (Rom. 3:10–20; 5:12–21) and are under God’s wrath and curse (Gal. 3:13). In Adam, we are dead in sin and depravity and need a salvation from outside ourselves. God initiates, procures, and provides salvation entirely by His grace. There is no grace in Hindu teaching. People are rewarded or punished exclusively on the basis of good or bad dharma. According to Scripture, God redeems a people for Himself based on the merit of Jesus Christ, the eternal Son of God, who—as our representative—kept the law perfectly and took the punishment we deserve. In Christ, God forgives, accepts, and reconciles believers to Himself (1 Cor. 1:30).
Death, judgment, and salvation. Death is a result of the sin of Adam. God will judge men for what they have done in this life. Apart from grace, we are subject to the eternal wrath of God because of sin (Rom. 1:18; Eph. 5:6; Col. 3:5–6; Rev. 19:15). Only those who trust in Christ will gain eternal life (John 3:16–18). As the writer of Hebrews explains, “It is appointed for man to die once, and after that comes judgment, so Christ, having been offered once to bear the sins of many, will appear a second time, not to deal with sin but to save those who are eagerly waiting for him” (9:27–28).
How can I share the gospel with those who hold to this false teaching?
Focus on sin and judgment. When witnessing to Hindus, explain that sin is not, first and foremost, a violation of social norms or an offense against one’s caste. Sin is primarily an offense against God (Gen. 39:9; Ps. 51:4). Since Hindus typically think of punishment for sin in terms of social degradation and not as justice incurred for a personal offense against the Creator, it is vital to help them think properly about the eternal ramifications of sinning against the eternal God. Scripture is full of references to eternal death and judgment on sin (Gen. 2:17; Ps. 5:5; 11:5; 50:21; 94:10; Rom. 1:18; 2:3; 6:21, 23; Gal. 3:10; Eph. 2:3).
Focus on forgiveness of sins in Christ. Hindus—especially those in lower castes—spend their lives seeking to work their way out of the caste system. Many are burdened with the weight of their failings. Hindus need to hear about the forgiveness that God freely gives in Christ. Jesus said, “Come to me, all who labor and are heavy laden, and I will give you rest” (Matt. 11:28). Explain that God took the punishment for our sin in the person of Jesus Christ (2 Cor. 5:21). Share God’s promises of forgiveness to all who trust in Jesus alone for salvation (Ex. 34:6–7; Ps. 130:4; Jer. 31:34; Dan. 9:9; Acts 5:31; 13:38; 26:18; Rom. 4:7; Eph. 1:7; Col. 1:14).
Focus on Jesus as Mediator. Man’s great need is to be reconciled to God. The Bible teaches that reconciliation happens only through the mediatorial work of Jesus Christ (2 Cor. 5:19). As God and man, Jesus bridges the gap between the infinitely holy God and sinners. Jesus died on the cross to bring us to God (1 Peter 3:18). Jesus is the Great High Priest of believers. He “always lives to make intercession for them” (Heb. 7:25). Jesus is the only Mediator. He said: “I am the way, and the truth, and the life. No one comes to the Father except through me” (John 14:6). Paul also explained, “There is one God, and there is one mediator between God and men, the man Christ Jesus” (1 Tim. 2:5).
12 Things You Need to Know About Hinduism
1) Hinduism is at least 5000 years old
Hinduism is one of a few ancient religions to survive into modern times. The collection of traditions that compose modern-day Hinduism have developed over at least the past 5000 years, beginning in the Indus Valley region (in the nations of modern India and Pakistan), in what was the largest civilization of the ancient world. There is no ‘founder’ of Hinduism, nor single prophet or initial teacher. Hindus believe their religion has no identifiable beginning or end and, as such, often refer to it as Sanatana Dharma (the ‘Eternal Way’). As for the name itself, ‘Hindu’ is a word first used by Persians, dating back to the 6th century BCE, to describe the people living beyond the Indus River. Initially it did not have a specific religious connotation. The religious meaning of the term did not develop for roughly another 1000 years.
2) The Vedas are one of Hinduism’s many primary religious texts
Hinduism does not have a single holy book that guides religious practice. Instead, Hinduism has a large body of spiritual texts that guide devotees. First among these are the Vedas (“knowledge” in Sanskrit), a collection of hymns on the divine forces of nature presenting key Hindu teachings. The Vedas, considered to be realized (revealed) eternal truths, were passed down via an oral tradition for thousands of years before being written down. Hindu philosophy was further developed in the Upanishads. This philosophy was restated in the Puranas, the Ramayana, and the Mahabharata (the world’s longest epic poem), as well as the Bhagavad Gita. Countless life stories, devotional poetry, and commentaries by sages and scholars have also contributed to the spiritual understanding and practice of Hindus.
3) Hinduism is one of four ‘Dharmic’ or ‘Indic’ traditions
Hinduism, Buddhism, Jainism, and Sikhism can be referred to as the “Dharmic” or “Indic” traditions. The Dharma traditions share a broadly similar worldview, and share many spiritual concepts, such as dharma, karma, samsara, and moksha—though each religion understands and interprets them differently.
4) Hinduism sees the Divine present in all existence
The deepest single spiritual truth presented through the Vedas is that Brahman (roughly understood in English as ‘the Absolute’ or ‘the Divine’) pervades the entire universe. This divine reality, or its essential nature, is present in all living beings, eternal, and full of bliss. Brahman is understood as the cause of creation, as well as its preservation, and dissolution and transformation, all done in a constant, repeating cycle.
5) The nature of the Divine is understood in different ways in different lineages
Within Hinduism there is a broad spectrum of understandings about the nature of Brahman. Some Hindus believe that Brahman is infinite and formless, and can be worshipped as such, or in different forms. Other Hindus believe that the Divine is infinite and has a transcendental form. For example, some Vaishnavas believe that the one supreme form is Krishna, while Shaivites call this form Shiva.
6) Hinduism worships the Divine in male, female, and animal forms
Because Hindus believe that Brahman can take form, they accept that there are a variety of ways in which all human beings can connect with the Divine. This universal Divinity is worshipped in both male and female forms. The female form is known as devi, which is a manifestation of shakti (energy or creative force). Other forms combine male and female aspects together and some resemble animals, such as Ganesh or Hanuman. Each of these forms has a symbolic meaning. Hindus have long told stories about these various forms of the Divine to inspire devotion and instill ethical values.
7) Hindus pray to different aspects of the Divine
Hindus pray to different forms of Brahman as manifestations of particular divine qualities or powers. For example: Ganesh is honored by Hindus (as well as sometimes by followers of other Indian religions) as the remover of obstacles and honored for his great wisdom, and is often invoked before beginning any important task or project; Saraswati is the Goddess associated with learning and wisdom; Lakshmi is worshipped as the Goddess of Prosperity. God is believed to have taken the human form of Rama to show people how to live the path of Dharma. Krishna is said to have come to eradicate evil and protect good. Shiva is worshipped as the lord of time and change. Furthermore, the prominence of each of the aspects of the Divine varies depending on the lineage of the individual Hindu.
8) Hindus use images in worship to make the infinite comprehensible to the human mind
Hindus represent the various forms of God in consecrated images called murti. A murti can be made of wood, stone, or metals (and sometimes can be naturally occurring, rather than fashioned by human hands). Murti offer a way to visualize and meditate upon Brahman, which due to its infinite nature is believed to be beyond the grasp of the human mind. Murti is often inaccurately translated as ‘idol’ but a more accurate translation is ‘embodiment’. Hindu families conduct their daily worship at home altars and also at temples on special occasions. Many Hindus consult gurus (recognized spiritual teachers and guides) for advice or answers to spiritual questions.
9) Hindus believe the soul is eternal and is reborn in different forms
Hindus believe that the soul, atman, is eternal. When the physical body dies the soul is reborn in another body. This continuous cycle of life, death, and rebirth is called samsara. Rebirth is governed by karma: the principle that every action (be it physical or mental) has a result, like cause and effect. What an individual experiences in this life is the result of their past actions, either actions they have already taken in this life or actions from a past life. How an individual acts today impacts the future, both in terms of effects felt later on in this life or in a future birth. Though the effects of karma make certain actions easier or more difficult to take, just as our personal habits influence our lives, this is not a deterministic or fatalistic system. Rather, we all have the ability to freely choose how to act in any situation.
10) Hindus believe we each have four goals in life
Hindus believe we have four goals in life: Dharma (conducting ourselves in a way conducive to spiritual advancement), Artha (the pursuit of material prosperity), Kama (enjoyment of the material world), and Moksha (liberation from the attachments caused by dependence on the material world and from the cycle of birth and rebirth).
11) There are four paths to Moksha
Hindu scriptures outline four primary paths to experience God's presence and ultimately obtain the fourth goal, moksha. These paths are not mutually exclusive and can be pursued simultaneously depending on an individual's inclination. These paths are: Karma Yoga (performing one's duties selflessly), Bhakti Yoga (loving God through devotion and service), Jnana Yoga (studying and contemplating sacred texts), and Raja Yoga (physically preparing the body and mind to allow deep meditation and introspection, so as to overcome suffering caused by material attachments).
12) Hinduism acknowledges the potential for truth in other religions
Hinduism is a deeply pluralistic tradition, promoting respect for other religions and acknowledges the potential for truth in them. Hindus see the varieties of religions and philosophies as different ways to understand and relate to God. This philosophy leads to pluralism within Hinduism and outside of it. The core philosophy of Hinduism is the search for truth, not the specific path taken. A quote from the Vedas that summarizes the Hindu perspective is, “Truth is one; the wise call it by various names.”
Related posts:
Hinduism is often referred to as Sanatana Dharma (the ‘eternal way’), indicating the religion’s emphasis on eternal truths that are applicable to all of humanity. Thus, it makes sense that a medley of mainstream movies could convey Hindu ideals that resonate strongly with audiences, while not actually talking directly about anything understood by the public as Hindu.
In Groundhog Day, for example, when cynical TV weatherman Phil Connors discovers he is trapped in a time loop, living the same day over and over, only to be released after transforming his character from an egocentric narcissist to a thoughtful and kindhearted philanthropist, it's hard not to be reminded of the Hindu notion of samsara, a cycle of reincarnation from which a soul attains liberation by realizing its divine nature after lifetimes of spiritual practice.
Or in The Matrix when Neo chooses the red pill of knowledge over the blue pill of ignorance, and is subsequently unplugged from an illusory world and cast into the truth of reality, the film seems to be conveying a foundational Vedic teaching: that we must transcend our own ignorance — a product of maya, literally meaning “illusion” in Sanskrit — to uncover our true nature. Hindu concepts appear to be further exhibited in Neo’s relationship with Morpheus, which starkly reflects that of a disciple and guru, as the latter reveals to the former the knowledge he needs in order to understand this “true nature.” As Neo’s faith in Morpheus’ words develops, so does his capacity to see past the illusion of the matrix, garnering him the ability to manipulate the laws of this false reality, similar to the Jedi and yogis described earlier.
Hindu Americans and the Vedanta philosophy have significantly influenced notable intellectuals such as Henry David Thoreau, Ralph Waldo Emerson, Walt Whitman, J.D. Salinger, Christopher Isherwood, Aldous Huxley, Huston Smith, and Joseph Campbell, to name a few. Some feel that it started back in 1812, when Thomas Jefferson recommended to John Adams the writings of Joseph Priestley, a Unitarian minister who had published works comparing Christianity to other religions, Hinduism in particular, and Adams's interest was piqued.
Going through Priestley’s writings, Adams became riveted by Hindu thought, as he launched into a five-year exploration of Eastern philosophy. As his knowledge of Hinduism and ancient Indian civilization grew, so did his respect for it. This legacy took shape in the 1830s as Transcendentalism, a philosophical, social, and literary movement that emphasized the spiritual goodness inherent in all people despite the corruption imposed on an individual by society and its institutions. Espousing that divinity pervades all of nature and humanity, Transcendentalists believed divine experience existed in the everyday, and held progressive views on women’s rights, abolition, and education. At the heart of this movement were three of America’s most influential authors: Ralph Waldo Emerson, Walt Whitman, and Henry David Thoreau.
Before becoming an Islamic state, Afghanistan was once home to a medley of religious practices, the oldest being Hinduism. A long time ago, much of Afghanistan was part of an ancient kingdom known as Gandhara, which also covered parts of northern Pakistan. Today, many of Afghanistan's province names, though slightly altered, are clearly Sanskrit in origin, hinting at the region's ancient past. To cite a few examples, Balkh comes from the Sanskrit Bhalika, Nangarhar from Nagarahara, and Kabul from Kubha. Though Gandhara's earliest mention can be found in the Vedas, it is better known for its connections to the Hindu epics the Mahabharata and Ramayana. There is also the historic Asamai temple in Kabul, located on a hill named after the Hindu Goddess of hope, Asha. The temple has survived numerous conflicts and attacks and still stands; it is a remnant of the Hindu Shahi kings, who ruled from the Kabul Valley as far back as 850 CE. However, Hindus are an indigenous but endangered minority in Afghanistan, numbering approximately 700 out of a community that recently included over 8,000 members. Many have left for new homes, including in New York, which is home to a large Afghan Hindu population.
According to the 2021-2022 National Pet Owners Survey, 70% of U.S. households (90.5 million homes) owned a pet as of 2022, with 69 million U.S. households having a pet dog. Recognized for their loyalty, service, companionship, and the special relationship they have with humans, Hinduism’s reverence for dogs is expansive, as they are worshiped in festivals and appreciated in connection to a number of Hindu gods and stories. Observed in Nepal, Bhutan, and the Indian states of Sikkim and West Bengal, Kukar Tihar (the 2nd day of Tihar) honors dogs as messengers that help guide spirits of the deceased across the River of Death. In the Mahabharata, Yudhisthira, his brothers, and the queen Draupadi renounced their kingdom to ascend to the heavens. However, Yudhisthira was the only one that survived along with a dog that had joined them. Yudhisthira refused to go to heaven without the dog, who turned out to be Yamaraj, the God of Death. Sarama, the “female dog of the gods,” was famously asked by Indra to retrieve a herd of cows that were stolen. When the thieves were caught, they tried to bribe Sarama but she refused and now represents those who do not wish to possess but instead find what has been lost. The symbolic import of dogs is further driven in connection with Dattatreya, as he is commonly depicted with four of them to represent the Vedas, the Yugas, the stages of sound, and the inner forces of a human being (will, faculty, hope, and desire).
In 2018, the long-running Marvel comic series Black Panther was brought to the big screen. One of the more prominent scenes is when M'baku, a character vying for the throne of the fictional country of Wakanda, challenges T'Challa/Black Panther and yells, "Glory to Hanuman." However, despite dharma being an unsaid aspect of the characters' interactions, Black Panther relies slightly more on Hindu symbolism than philosophy. But the significance of Hanuman as a transcendent deity cannot be overlooked, especially at a time when dialogues about global migration, the right to worship, and access to natural resources are becoming more overtly racialized. The film provides more than just an entertainment escape: it reimagines a world in which the current racial and theological paradigms are challenged forcefully. With the film expected to have at least several sequels, there will be more opportunities to reference Hinduism and Hindu iconography.
One of the most celebrated Hindu festivals, Diwali (dee-VAH-lee) or Deepavali (dee-PAH-va-lee) commemorates the victory of good over evil during the course of five days. The word refers to rows of diyas — or clay lamps — which are put all around homes and places of worship. The light from these lamps symbolizes the illumination within all of us, which can overcome ignorance, represented by darkness. Devotees gather in local temples, homes, or community centers, to spend time with loved ones, make positive goals, and appreciate life.
On this day, because Diwali is a time for dana (charitable giving) and seva (selfless service), Hindus traditionally perform a deep cleaning of their homes and surroundings, as cleanliness is believed to invoke the presence and blessings of Goddess Lakshmi who, as mentioned earlier, is the Goddess of wealth and prosperity. Many will also make rangoli or kolum (colored patterns of flowers, powder, rice, or sand made on the floor), which are also said to invite auspiciousness. Observers thus begin Diwali by cultivating a spirit of generosity, doing things like giving money to charities, feeding the hungry, and endeavoring to help those in need.
The spread of Hinduism to Southeast Asia established powerful Hindu kingdoms in the region, most notably the Khmer Empire that encompassed modern Cambodia and Thailand, and influential kingdoms in the Indonesia archipelago. Though Buddhism and Hinduism co-existed in the region for several centuries, Buddhism (and Islam in Indonesia) eventually replaced Hinduism as a primary religion. Today, there are approximately five million Hindus in Indonesia, primarily in Bali. As Bali is roughly 90 percent Hindu, this makes it a religious enclave in a country that contains the world’s largest Muslim population. There are also roughly 60,000 Cham Hindus in Vietnam, and smaller numbers in Thailand. Hinduism in Fiji, Malaysia, and Singapore is a much more recent phenomenon, with Hindus arriving in the 19th and early 20th centuries as indentured laborers. Today, Hindus are prominent in politics and business in all three countries, though they continue to experience discrimination as religious minorities.
Smithsonian/American History Exhibit - American Indian experience
In 2014, the first Smithsonian exhibition chronicling the experiences of Indian Americans, many of whom are Hindus, in the US was unveiled at their National Museum of Natural History in Washington, DC. This exhibit was one of the largest ever produced by the Smithsonian Asian Pacific American Center, occupying 5,000 square feet and reaching millions of visitors. The message behind “Beyond Bollywood: Indian Americans Shape the Nation,” aimed to dispel stereotypes and myths that have followed Indian immigrants since they first arrived in the U.S. in 1790. The exhibit explored the heritage, daily experiences, and the many diverse contributions that immigrants and Indian Americans have made to the United States. The exhibition at the Museum of Natural History includes historical and contemporary images and artifacts, including those that document histories of discrimination and resistance, convey daily experiences, and symbolize achievements across the professions. Music and visual artworks provide commentary on the Indian American experience and form an important component of the exhibition. In 2017, this exhibit went on the road, traveling from city to city so that all could see the impact of Indians on American culture.
Paramahansa Yogananda was a Hindu monk and yogi who came to the United States in 1920 and lived here for the last 32 years of his life. He is considered to be the first major Hindu Guru to settle in the United States. When Swami Yogananda arrived in the US, he made his first speech, made to the International Congress of Religious Liberals, on “The Science of Religion,” and was enthusiastically received. It was soon after that he founded the Self-Realization Fellowship (also known as Yogoda Satsanga Society (YSS) of India) and introduced millions of Americans to the ancient science and philosophy of meditation and Kriya yoga (path of attainment). In 1927, he was invited to the White House by President Calvin Coolidge, making Swami Yogananda the first prominent Indian and Hindu to be hosted in the White House.
For those of us who are Hindu, we have noticed that some of the biggest Hollywood films produced in the last several decades have mirrored many of Hinduism's most fundamental philosophical ideas. One example is Avatar, a film named for the Sanskrit word avatāra (‘descent’), in which the protagonist, Jake Sully, enters and explores an alien world called Pandora by inhabiting the body of an indigenous 10-foot, blue-skinned being, an idea taken from Hinduism’s depictions of the various avatars of the blue god Vishnu, who are said to descend into our world for upholding dharma. Instead of aligning with the interests of the humans, who merely want to mine Pandora for the valuable mineral unobtanium, Sully fights alongside the alien humanoids native to the world, called Na’vi, who live in harmony with nature, believe all life is sacred, and that all life is connected by a divine force — teachings synonymous with Hinduism. Thus, similar to the avatars of Vishnu, Sully defends and preserves a spiritual culture by defeating those who would destroy it for materialistic pursuit. While this film doesn’t indicate in any direct way that it has anything to do with Hinduism, it’s clear it is communicating Hindu ideas that everyone relates to and understands on a profound level.
The International Society for Krishna Consciousness (ISKCON), also known as the Hare Krishna movement, was founded in 1966 by A.C. Bhaktivedanta Swami Prabhupada, a highly respected scholar and monk in the Vaishnava tradition (devotion to the god Vishnu and his incarnations, or avatars). At the age of 70, Swami Prabhupada traveled from India to New York City to bring the Bhakti tradition, or Krishna Consciousness, to the west. In the 11 years before his passing in 1977, Srila Prabhupada translated, with elaborate commentaries, 60 volumes of Vaishnava literature; established more than 100 temples on six continents; and initiated 5,000 disciples. Today, his writings are studied in universities around the globe and are translated into nearly 100 languages. To date, ISKCON has over 400 temples, dozens of rural communities and eco-sustainable projects, and nearly 100 vegetarian restaurants worldwide, 56 of them in the US.
Hinduism came to Africa in waves: Southern Africa received Hindu workers during the early years of British colonization, while East and West Africa experienced Hindu migration during the 20th century. Hinduism's roughly 0.2% presence in Africa is seen as so inconsequential that most data organizations don't even explicitly mention it in their census reports. Yet Hinduism is Ghana's fastest growing religion, and steady Hindu populations exist in both Northern and Southern African states. Durban is now home to most of South Africa's 1.3 million Indians, making it, according to some sources, the largest Indian city outside of India, and thus a most powerful hub of Hindu practice. In the US, there are communities of African Hindus who have migrated, as well as Black Hindus, who according to the 2019 Pew Survey make up 2% of the Hindu population in the US.
George Lucas, the creator of Star Wars, drew much of the inspiration for this major cultural phenomenon from the teachings of his mentor, a lifelong student of Vedanta. In these films, many aspects of Hinduism are interwoven with the story. Some include Hanuman (Chewbacca and the Ewoks), Shakti (force, energy), Yodha (Yoda), and Brahman (infinite being). Besides the many philosophical parallels that can be highlighted between Star Wars and Hinduism, Star Wars also exhibits similarities in story structure and character roles to one of India's famous epics, the Ramayana. Never seen the films? Now might be the time to see how universally relatable Hindu thought can truly be.
The term Ayurveda is derived from the Sanskrit words ayur (life) and veda (science or knowledge), translating to "the knowledge of life." Ayurveda is considered to be the oldest healing science, originating around 1000 BCE. It is based on the five elements that comprise the universe (space, air, fire, water, and earth), which combine and permutate to create three health principles that govern the functioning and interplay of a person's body, mind, and consciousness. These energies are referred to as doshas in Sanskrit. Ayurveda can be used in conjunction with Western medicine, and Ayurvedic schools have gained approval as educational institutions in several states.
While it's often treated as synonymous with meditation, and seen simply as a doorway to tranquility for yogic practitioners, the true meaning of Om is deeply embedded in Hindu philosophy.
The word Om is defined by Hindu scripture as the original vibration of the universe, from which all other vibrations are able to manifest. Within Hinduism, the meaning and connotations of Om are perceived in a variety of ways. Though heard and often written as "om," due to the way it sounds when it is repeatedly chanted, the sacred syllable is originally and more accurately spelled as "aum." Broken down, the three letters of A – U – M represent a number of sacred trinities, such as different conditions of consciousness (waking state, dreaming state, and deep sleep state), the deities in charge of the creation, preservation, and destruction of the universe (Brahma, Vishnu, and Shiva), and aspects of time (past, present, and future), among many others.
Dr. Anandi Gopal Joshi is credited with being the first woman from India to study medicine in the United States. Born in Bombay in 1865, she was married at the age of ten to an older man who had been her teacher. Dr. Joshi had a child at the age of 13, but the child died when only 10 days old. She believed that with better medical care, the child would have lived, and she frequently cited this as motivation for her desire to attend medical school. Her husband encouraged her in her academic pursuits, and in 1883 Joshi joined the Woman's Medical College of Pennsylvania, now known as the Drexel University College of Medicine in Philadelphia. She graduated in 1886 with her degree in medicine; her M.D. thesis focused on Hindu obstetrics. Unfortunately, Dr. Joshi was only able to practice medicine for a few months before passing away from tuberculosis.
Hinduism is the religion of almost 25% of Guyana's population, making it the country with the highest percentage of Hindus in the Western Hemisphere. But between British recruiting agents targeting rural and uneducated Indians and aggressive Christian proselytization promising Hindus a better life, Hinduism there has been in steady decline for many decades, with many Guyanese Hindus leaving for the United States for better opportunities and to practice their religion freely. Today, over 80% of Guyanese Americans live in the Northeastern United States, with heavy concentrations in New Jersey and in New York, where a "Little Guyana" helps these immigrants stay connected to their Guyanese roots.
Karwa Chauth or Karva Chauth (kuhr-vah-CHOATH) is a North Indian holiday in which wives fast for the longevity and health of their husbands; many unmarried women also celebrate in hopes of meeting their ideal life partner. Typically, wives spend the day preparing gifts to exchange and fasting until the moon is visible, as its light is believed to symbolize love and the blessings of a happy life. While there are varying legends behind this holiday's traditions and meaning, the message of honoring the relationships women form with their family and community prevails.
As sound vibration can affect the most subtle element of creation, Hindu scriptures hold that spiritual sound vibrations can affect the atman (soul) in a particularly potent way. Such spiritual sound vibrations are said to have the ability to awaken our original spiritual consciousness and help us remember that we are beyond the ambivalence of life, and actually originate from the Divine. As such, the main goal of many types of Hindu musical expression is to help stir us out of our spiritual slumber by evoking feelings of love and connection that help us to better perceive the presence of the Divine within all. Some of the more popular examples of musical expression within Hinduism include shlokas (verses, or poems), mantras (sacred syllables repeated in prayer), kirtans (congregational singing of mantras), and bhajans (devotional songs). You can find these musical spiritual expressions throughout the US in temples, mandirs, and community centers.
Yoga is considered Hinduism's gift to humanity. At its broadest, yoga, from the root word "yuj" in Sanskrit, means to unite. Most Hindu texts discuss yoga as a practice to control the senses and ultimately, the mind. The most famous is the Bhagavad Gita (dating back to 6th-3rd Century BCE), in which Krishna speaks of four types of yoga – bhakti, or devotion; jnana, or knowledge; karma, or action; and dhyana, or concentration (often referred to as raja yoga, though not all sources agree on the term) – as paths to achieve moksha (enlightenment), the ultimate goal according to Hindu understanding. According to a 2016 study, there are an estimated 36.7 million people currently practicing yoga in the United States.
Swami Vivekananda is often credited with bringing Hindu teachings and practices — such as yoga and transcendental meditation — to Western audiences. In 1893, he was officially introduced to the United States at the World's Parliament of Religions in Chicago, where in his speech he called for religious tolerance and described Hinduism as "a religion which has taught the world both tolerance and universal acceptance." The day that Swami Vivekananda delivered his speech at the Parliament of Religions is now known as 'World Brotherhood Day.' And his birthday, known as Swami Vivekananda Jayanti, is honored on January 12th each year. On this day he is commemorated and recognized for his contributions as a modern Hindu monk and respected guru of the Vedanta philosophy of Hinduism. In 1900, Swami Vivekananda founded the Vedanta Society in California, and to date there are 36 Vedanta Society Centers in the United States.
According to Vedic cosmology, 108 is the basis of creation, representing the universe and all our existence. As the soul is encased in two types of bodies, the physical body (made of earth, water, fire, air, and ether) and the subtle body (composed of intelligence, mind, and ego), 108 plays a significant role in keeping these two bodies healthily connected. Hindus believe the body holds seven chakras, or pools of energy, which begin at the bottom of the spine and rise to the top of the head, and it is believed there are 108 energy lines that converge to form the heart chakra. Ayurveda says there are 108 hidden spots in the body called marma points, where various tissues like muscles, veins, and ligaments meet. These are vital points of life force, and when they are out of balance, energy cannot properly flow throughout the body. Sun salutations, yogic asanas that honor the sun god Surya, are generally completed in nine rounds of 12 postures, totaling 108. Mantra meditation is usually chanted on a set of 108 beads. In Hinduism there are 108 Upanishads, the sacred texts of wisdom from ancient sages. Additionally, the Sanskrit alphabet contains 54 letters, each with a feminine (Shakti) and a masculine (Shiva) quality; 54 multiplied by 2 equals 108. Ultimately, breathwork, chanting, studying scripture, and asanas help harmonize one's energy with the energy of the supreme spiritual source, and these processes are believed to become especially effective when performed in connection with the number 108. Hindu scriptures strive to remind people of this divine commonality by continuously highlighting the innumerable threads connecting everything in existence. One of these threads is the number 108.
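The numerical correspondences above reduce to simple arithmetic, which can be summarized as:

```latex
\begin{align*}
9 \times 12 &= 108 && \text{(rounds of sun salutations $\times$ postures per round)}\\
54 \times 2 &= 108 && \text{(Sanskrit letters $\times$ Shakti/Shiva qualities per letter)}
\end{align*}
```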
A decade after slavery was abolished in 1834, the British government began importing indentured labor from India to work on estates in colonies such as Trinidad and Tobago. From 1845 to 1917, the ships continued to arrive, carrying over 140,000 Indians to the island and fueling the growth of Trinidad's Indian population. Today, there are roughly 240,000 declared Hindus in Trinidad and Tobago, comprising about 18% of the island's population. There are a total of about 300 temples on the island, welcoming all who wish to enter and hosting many beloved Hindu festivals. But for some, the migration journey doesn't end there, as New York and Florida have seen the development of large Indo-Caribbean communities.
From ancient tribes to present-day devotees, tattoos have held a special place in Hinduism for centuries. In the Indian states of Bihar and Madhya Pradesh, the Ramnaami community invoked Rama's protection with tattoos of the name "Rama" in Sanskrit on every inch of their skin, including the tongue and inside the lips. The Mahabharata tells the story of the Pandavas, who were exiled to the Kutch district of Gujarat. Today, their descendants - members of the Rabari tribe - live as their ancestors did, with women covered in tattoos that symbolize their people's strong spirit for survival. Some Hindus consider tattoos protective emblems; tattoos of Hanuman, for example, are often used to relieve physical or mental pain, and people will often get tattoos of other deities to invoke their blessings. Mehndi, a plant-based temporary tattoo, is commonly done at weddings and religious ceremonies as a celebration of love and spirituality. While tattoos have existed in Hindu communities for centuries, tattoos as symbols of honor, devotion, and even fashion are incredibly popular today. Hindus and non-Hindus alike adorn themselves with Hindu emblems and tattoos that reflect Hindu teachings.
Navaratri (nuhv-uh-RA-three) is a nine-night celebration of the feminine divine that occurs four times a year — the spring and fall celebrations being among the more widely observed. Some traditions honor the nine manifestations of Goddess Durga, while others celebrate the three goddesses (Durga, Lakshmi, and Saraswati) with three days dedicated to each. This is a time to recognize the role that the loving, compassionate, and gentle — yet sometimes powerful and fierce — feminine energy plays in our lives.
Dussehra (duh-sheh-RAH) or Vijayadashmi (vi-juhyuh-dushuh-mee) celebrates the victory of Lord Rama over the ten-headed demon King Ravana. This also marks the end of Ramlila — a brief retelling of the Ramayana and the story of Rama, Sita, and Lakshman in the form of dramatic reading or dance. It also signifies the end of negativity and evil within us (vices, biases, prejudices) for a fresh new beginning. Dussehra often coincides with the end of Navratri and Durga Puja, and celebrations can last ten days, with huge figures of Ravana set ablaze as a reminder that good always prevails over evil.
Many Hindus hold reverence for the cow as a representation of mother earth, fertility, and the Hindu values of selfless service, strength, dignity, and non-harming. Though not all Hindus are vegetarian, many traditionally abstain from eating beef for this reason. This reverence is often linked with the concept of ahimsa (non-violence), which can be applied to dietary choices and our interactions with the environment and, according to the doctrine of karma, may even influence our next birth. This is part of the reason some Hindus choose a vegetarian lifestyle as an expression of ahimsa, and it also explains the growing number of cow protection projects led by individuals who have felt compelled to put their Hindu values into practice. The US is home to several cow protection projects and sanctuaries.
Gandhi Jayanti marks the birthday of Mahatma Gandhi, the 'Father of the Nation' for India and the Indian Diaspora. To honor Gandhi's message of ahimsa (non-violence), volunteer events and commemorative ceremonies are conducted, and statues of Gandhi are decorated with flower garlands. Gandhi and his philosophy of satyagraha (truth force) have inspired many of America's most prominent civil rights and social impact movements and leaders, including Martin Luther King Jr. and Cesar Chavez. The United Nations declared October 2 the International Day of Non-Violence in honor of Gandhi, whose work continues to inspire civil rights movements across the world.
The Immigration and Nationality Act of 1965 facilitated the journey of many Indian immigrants to the United States. In this new land, many created home shrines and community temples to practice and hold pujas (services). As Hindu American populations grew in metropolitan and rural areas, so did the need for permanent temple sites for worship. In 1906, the Vedanta Society built the Old Temple in San Francisco, California, but since it was not considered a formal temple, many don't credit it as the first. Some believe the first is the Shiva Murugan Temple, built in 1957 in Concord, California, while others point to the Maha Vallabha Ganapati Devasthanam in New York. Today, there are nearly 1,000 temples in the United States. Regardless of where you live, you have the right to practice your faith.
|
This divine reality, or its essential nature, is present in all living beings, eternal, and full of bliss. Brahman is understood as the cause of creation, as well as its preservation, and dissolution and transformation, all done in a constant, repeating cycle.
5) The nature of the Divine is understood in different ways in different lineages
Within Hinduism there is a broad spectrum of understandings about the nature of Brahman. Some Hindus believe that Brahman is infinite and formless, and can be worshipped as such, or in different forms. Other Hindus believe that the Divine is infinite and has a transcendental form. For example, some Vaishnavas believe that the one supreme form is Krishna, while Shaivites call this form Shiva.
6) Hinduism worships the Divine in male, female, and animal forms
Because Hindus believe that Brahman can take form, they accept that there are a variety of ways in which all human beings can connect with the Divine. This universal Divinity is worshipped in both male and female forms. The female form is known as devi, which is a manifestation of shakti (energy or creative force). Other forms combine male and female aspects together and some resemble animals, such as Ganesh or Hanuman. Each of these forms has a symbolic meaning. Hindus have long told stories about these various forms of the Divine to inspire devotion and instill ethical values.
7) Hindus pray to different aspects of the Divine
Hindus pray to different forms of Brahman as manifestations of particular divine qualities or powers.
|
no
|
World Religions
|
Do Hindus believe in a single god?
|
no_statement
|
"hindus" do not "believe" in a "single" "god".. hinduism does not promote the belief in a "single" "god".
|
https://en.wikipedia.org/wiki/Monotheism
|
Monotheism - Wikipedia
|
Monotheism is the belief that there is only one deity, an all-supreme being that is universally referred to as God.[1][2][3][4][5][6][7] A distinction may be made between exclusive monotheism, in which the one God is a singular existence, and both inclusive and pluriform monotheism, in which multiple gods or godly forms are recognized, but each are postulated as extensions of the same God.[1]
Monotheism is distinguished from henotheism, a religious system in which the believer worships one God without denying that others may worship different gods with equal validity, and monolatrism, the recognition of the existence of many gods but with the consistent worship of only one deity.[8] The term monolatry was perhaps first used by Julius Wellhausen.[9]
In the Iron-Age South Asian Vedic period,[18] a possible inclination towards monotheism emerged. The Rigveda exhibits notions of monism of the Brahman, particularly in the comparatively late tenth book,[19] which is dated to the early Iron Age, e.g. in the Nasadiya Sukta. Later, ancient Hindu theology was monist, but was not strictly monotheistic in worship because it still maintained the existence of many gods, who were envisioned as aspects of one supreme God, Brahman.[20]
In China, the orthodox faith system held by most dynasties since at least the Shang Dynasty (1766 BCE) until the modern period centered on the worship of Shangdi (literally "Above Sovereign", generally translated as "God") or Heaven as an omnipotent force.[21] However, this faith system was not truly monotheistic, since other lesser gods and spirits, which varied with locality, were also worshiped along with Shangdi. Still, later variants such as Mohism (470 BCE–c.391 BCE) approached true monotheism, teaching that the function of lesser gods and ancestral spirits is merely to carry out the will of Shangdi, akin to the angels in Abrahamic religions.
Since the sixth century BCE, Zoroastrians have believed in the supremacy of one God above all: Ahura Mazda as the "Maker of All"[22] and the first being before all others.[23][24][25][26] Nonetheless, Zoroastrianism is not considered monotheistic, as it has a dualistic cosmology with a pantheon of lesser "gods", or Yazats, such as Mithra, who are worshipped as lesser divinities alongside Ahura Mazda. Moreover, Ahura Mazda is not fully omnipotent, being engaged in a constant struggle with Angra Mainyu, the force of evil, although good will ultimately overcome evil.[27]
Post-exilic[28] Judaism, after the late 6th century BCE, was the first religion to conceive the notion of a personal monotheistic God within a monist context.[20] The concept of ethical monotheism, which holds that morality stems from God alone and that its laws are unchanging,[29] first occurred in Judaism,[30] but is now a core tenet of most modern monotheistic religions, including Christianity, Islam, Sikhism, and Baháʼí Faith.[31]
The Himba people of Namibia practice a form of monotheistic panentheism, and worship the god Mukuru. The deceased ancestors of the Himba and Herero are subservient to him, acting as intermediaries.[36]
The Igbo people practice a form of monotheism called Odinani.[37] Odinani has monotheistic and panentheistic attributes, having a single God as the source of all things. Although a pantheon of spirits exists, these are lesser spirits prevalent in Odinani expressly serving as elements of Chineke (or Chukwu), the supreme being or high god.
Amenhotep IV initially introduced Atenism in Year 5 of his reign (1348/1346 BCE) during the 18th dynasty of the New Kingdom. He raised Aten, once a relatively obscure Egyptian solar deity representing the disk of the sun, to the status of Supreme God in the Egyptian pantheon.[38] To emphasise the change, Aten's name was written in the cartouche form normally reserved for Pharaohs, an innovation of Atenism. This religious reformation appears to coincide with the proclamation of a Sed festival, a sort of royal jubilee intended to reinforce the Pharaoh's divine powers of kingship. Traditionally held in the thirtieth year of the Pharaoh's reign, this possibly was a festival in honour of Amenhotep III, who some Egyptologists think had a coregency with his son Amenhotep IV of two to twelve years.
Year 5 is believed to mark the beginning of Amenhotep IV's construction of a new capital, Akhetaten (Horizon of the Aten), at the site known today as Amarna.[39] Evidence of this appears on three of the boundary stelae used to mark the boundaries of this new capital. At this time, Amenhotep IV officially changed his name to Akhenaten (Agreeable to Aten) as evidence of his new worship.[39] The date given for the event has been estimated to fall around January 2 of that year. In Year 7 of his reign (1346/1344 BCE), the capital was moved from Thebes to Akhetaten (near modern Amarna), though construction of the city seems to have continued for two more years.[40] In shifting his court from the traditional ceremonial centres Akhenaten was signalling a dramatic transformation in the focus of religious and political power.
The move separated the Pharaoh and his court from the influence of the priesthood and from the traditional centres of worship, but his decree had deeper religious significance too—taken in conjunction with his name change, it is possible that the move to Amarna was also meant as a signal of Akhenaten's symbolic death and rebirth. It may also have coincided with the death of his father and the end of the coregency. In addition to constructing a new capital in honor of Aten, Akhenaten also oversaw the construction of some of the most massive temple complexes in ancient Egypt, including one at Karnak and one at Thebes, close to the old temple of Amun.
In Year 9 (1344/1342 BCE), Akhenaten declared a more radical version of his new religion, declaring Aten not merely the supreme god of the Egyptian pantheon, but the only God of Egypt, with himself as the sole intermediary between the Aten and the Egyptian people. Key features of Atenism included a ban on idols and other images of the Aten, with the exception of a rayed solar disc, in which the rays (commonly depicted ending in hands) appear to represent the unseen spirit of Aten. Akhenaten made it clear, however, that the image of the Aten only represented the god, and that the god transcended creation and so could not be fully understood or represented.[41] Aten was addressed by Akhenaten in prayers, such as the Great Hymn to the Aten: "O Sole God beside whom there is none".
The details of Atenist theology are still unclear. The exclusion of all but one god and the prohibition of idols was a radical departure from Egyptian tradition, but scholars see Akhenaten as a practitioner of monolatry rather than monotheism, as he did not actively deny the existence of other gods; he simply refrained from worshiping any but Aten. Akhenaten associated Aten with Ra and put forward the eminence of Aten as the renewal of the kingship of Ra.[42]
Under Akhenaten's successors, Egypt reverted to its traditional religion, and Akhenaten himself came to be reviled as a heretic.
Some researchers have interpreted Aztec philosophy as fundamentally monotheistic or panentheistic. While the populace at large believed in a polytheistic pantheon, Aztec priests and nobles might have come to an interpretation of Teotl as a single universal force with many facets.[47] There has been criticism of this idea, however, most notably that many assertions of this supposed monotheism might actually come from post-Conquistador bias, imposing a pagan model from Antiquity onto the Aztec.[48]
The orthodox faith system held by most dynasties of China since at least the Shang Dynasty (1766 BCE) until the modern period centered on the worship of Shangdi (literally "Above Sovereign", generally translated as "High-god") or Heaven as a supreme being, standing above other gods.[49] This faith system pre-dated the development of Confucianism and Taoism and the introduction of Buddhism and Christianity. It has some features of monotheism in that Heaven is seen as an omnipotent entity, a noncorporeal force with a personality transcending the world. However, this faith system was not truly monotheistic since other lesser gods and spirits, which varied with locality, were also worshiped along with Shangdi.[49] Still, later variants such as Mohism (470 BCE–c.391 BCE) approached true monotheism, teaching that the function of lesser gods and ancestral spirits is merely to carry out the will of Shangdi. In Mozi's Will of Heaven (天志), he writes:
I know Heaven loves men dearly not without reason. Heaven ordered the sun, the moon, and the stars to enlighten and guide them. Heaven ordained the four seasons, Spring, Autumn, Winter, and Summer, to regulate them. Heaven sent down snow, frost, rain, and dew to grow the five grains and flax and silk that so the people could use and enjoy them. Heaven established the hills and rivers, ravines and valleys, and arranged many things to minister to man's good or bring him evil. He appointed the dukes and lords to reward the virtuous and punish the wicked, and to gather metal and wood, birds and beasts, and to engage in cultivating the five grains and flax and silk to provide for the people's food and clothing. This has been so from antiquity to the present.
Worship of Shangdi and Heaven in ancient China includes the erection of shrines, the last and greatest being the Temple of Heaven in Beijing, and the offering of prayers. The ruler of China in every Chinese dynasty would perform annual sacrificial rituals to Shangdi, usually by slaughtering a completely healthy bull as sacrifice. Although its popularity gradually diminished after the advent of Taoism and Buddhism, among other religions, its concepts remained in use throughout the pre-modern period and have been incorporated in later religions in China, including terminology used by early Christians in China. Despite the rising of non-theistic and pantheistic spirituality contributed by Taoism and Buddhism, Shangdi was still praised up until the end of the Qing Dynasty as the last ruler of the Qing declared himself son of heaven.
In Chinese and Turco-Mongol traditions, the Supreme God is commonly referred to as the ruler of Heaven, or the Sky Lord, granted omnipotent powers, but this belief has largely diminished in those regions due to ancestor worship, Taoism's pantheistic views, and Buddhism's rejection of a creator God. On some occasions in the mythology, the Sky Lord, identified as male, is said to mate with an Earth Mother, while some traditions kept the omnipotence of the Sky Lord unshared.
In Eastern Europe, the ancient traditions of the Slavic religion contained elements of monotheism. In the sixth century AD, the Byzantine chronicler Procopius recorded that the Slavs "acknowledge that one god, creator of lightning, is the only lord of all: to him do they sacrifice an ox and all sacrificial animals."[60] The deity to whom Procopius is referring is the storm god Perún, whose name is derived from *Perkwunos, the Proto-Indo-European god of lightning. The ancient Slavs syncretized him with the Germanic god Thor and the Biblical prophet Elijah.[61]
The surviving fragments of the poems of the classical Greek philosopher Xenophanes of Colophon suggest that he held views very similar to those of modern monotheists.[62] His poems harshly criticize the traditional notion of anthropomorphic gods, commenting that "...if cattle and horses and lions had hands or could paint with their hands and create works such as men do,... [they] also would depict the gods' shapes and make their bodies of such a sort as the form they themselves have."[63] Instead, Xenophanes declares that there is "...one god, greatest among gods and humans, like mortals neither in form nor in thought."[64] Xenophanes's theology appears to have been monist, but not truly monotheistic in the strictest sense.[20] Although some later philosophers, such as Antisthenes, believed in doctrines similar to those expounded by Xenophanes, his ideas do not appear to have become widely popular.[20]
Although Plato himself was a polytheist, in his writings, he often presents Socrates as speaking of "the god" in the singular form. He does, however, often speak of the gods in the plural form as well. The Euthyphro dilemma, for example, is formulated as "Is that which is holy loved by the gods because it is holy, or is it holy because it is loved by the gods?"[65]
The development of pure (philosophical) monotheism is a product of the Late Antiquity. During the 2nd to 3rd centuries, early Christianity was just one of several competing religious movements advocating monotheism.
"The One" (Τὸ Ἕν) is a concept that is prominent in the writings of the Neoplatonists, especially those of the philosopher Plotinus.[66] In the writings of Plotinus, "The One" is described as an inconceivable, transcendent, all-embodying, permanent, eternal, causative entity that permeates throughout all of existence.[67]
A number of oracles of Apollo from Didyma and Clarus, the so-called "theological oracles", dated to the 2nd and 3rd century CE, proclaim that there is only one highest god, of whom the gods of polytheistic religions are mere manifestations or servants.[68] 4th century CE Cyprus had, besides Christianity, an apparently monotheistic cult of Dionysus.[69]
The Hypsistarians were a religious group who believed in a most high god, according to Greek documents. Later revisions of this Hellenic religion were adjusted towards monotheism as it gained consideration among a wider populace. The worship of Zeus as the head-god signaled a trend in the direction of monotheism, with less honour paid to the fragmented powers of the lesser gods.
The tetragrammaton in Paleo-Hebrew (10th century BCE to 135 CE), old Aramaic (10th century BCE to 4th century CE), and square Hebrew (3rd century BCE to present) scripts
Judaism is traditionally considered one of the oldest monotheistic religions in the world,[70] although in the 8th century BCE the Israelites were polytheistic, with their worship including the gods El, Baal, Asherah, and Astarte.[71][72] Yahweh was originally the national god of the Kingdom of Israel and the Kingdom of Judah.[73] During the 8th century BCE, the worship of Yahweh in Israel was in competition with many other cults, described by the Yahwist faction collectively as Baals. The oldest books of the Hebrew Bible reflect this competition, as in the books of Hosea and Nahum, whose authors lament the "apostasy" of the people of Israel, threatening them with the wrath of God if they do not give up their polytheistic cults.[74][75]
As time progressed, the henotheistic cult of Yahweh grew increasingly militant in its opposition to the worship of other gods.[71] Later, the reforms of King Josiah imposed a form of strict monolatrism. After the fall of Judah and the beginning of the Babylonian captivity, a small circle of priests and scribes gathered around the exiled royal court, where they first developed the concept of Yahweh as the sole God of the world.[20]
God, the Cause of all, is one. This does not mean one as in one of a pair, nor one like a species (which encompasses many individuals), nor one as in an object that is made up of many elements, nor as a single simple object that is infinitely divisible. Rather, God is a unity, unlike any other possible unity.[78]
Some in Judaism and Islam reject the Christian idea of monotheism.[79] Modern Judaism uses the term shituf to refer to the worship of God in a manner which Judaism deems to be neither purely monotheistic (though still permissible for non-Jews) nor polytheistic (which would be prohibited).[80]
Christians overwhelmingly assert that monotheism is central to the Christian faith, as the Nicene Creed (and others), which gives the orthodox Christian definition of the Trinity, begins: "I believe in one God". Even before the Nicene Creed of 325 CE, various Christian figures advocated[83] the triune mystery-nature of God as a normative profession of faith. According to Roger E. Olson and Christopher Hall, through prayer, meditation, study and practice, the Christian community concluded "that God must exist as both a unity and trinity", codifying this in ecumenical council at the end of the 4th century.[84]
Some Christian faiths, such as Mormonism, argue that the Godhead is in fact three separate individuals: God the Father, His Son Jesus Christ, and the Holy Ghost,[85] each with a distinct purpose in the grand existence of humankind.[86] Furthermore, Mormons believe that before the Council of Nicaea, the predominant belief among many early Christians was that the Godhead was three separate individuals. In support of this view, they cite early Christian examples of belief in subordinationism.[87]
Unitarianism is a theological movement, named for its understanding of God as one person, in direct contrast to Trinitarianism.[88]
Some in Judaism and some in Islam do not consider Trinitarian Christianity to be a pure form of monotheism due to the pluriform monotheistic Christian doctrine of the Trinity, classifying it as shituf in Judaism and as shirk in Islam.[89][80][90] Trinitarian Christians, on the other hand, argue that the doctrine of the Trinity is a valid expression of monotheism, citing that the Trinity does not consist of three separate deities, but rather the three persons, who exist consubstantially (as one substance) within a single Godhead.[91][92]
The Quran asserts the existence of a single and absolute truth that transcends the world; a unique and indivisible being who is independent of the creation.[113] The Quran rejects binary modes of thinking such as the idea of a duality of God by arguing that both good and evil generate from God's creative act. God is a universal god rather than a local, tribal or parochial one; an absolute who integrates all affirmative values and brooks no evil.[114] Ash'ari theology, which dominated Sunni Islam from the tenth to the nineteenth century, insists on ultimate divine transcendence and holds that divine unity is not accessible to human reason. Ash'arism teaches that human knowledge regarding it is limited to what has been revealed through the prophets, and on such paradoxes as God's creation of evil, revelation had to be accepted bila kayfa (without [asking] how).[115]
Tawhid constitutes the foremost article of the Muslim profession of faith, "There is no god but God, Muhammad is the messenger of God."[116] To attribute divinity to a created entity is the only unpardonable sin mentioned in the Quran.[114] The entirety of the Islamic teaching rests on the principle of tawhid.[117]
Medieval Islamic philosopher Al-Ghazali offered a proof of monotheism from omnipotence, asserting there can only be one omnipotent being. For if there were two omnipotent beings, the first would either have power over the second (meaning the second is not omnipotent) or not (meaning the first is not omnipotent); thus implying that there could only be one omnipotent being.[118]
Because they traditionally profess a concept of monotheism with a singular entity as God, Judaism[79] and Islam reject the Christian idea of monotheism. Judaism uses the term shituf to refer to non-monotheistic ways of worshiping God. Although Muslims venerate Jesus (Isa in Arabic) as a prophet, they do not accept the doctrine that he was a begotten son of God.
Mandaeism or Mandaeanism (Arabic: مندائية, Mandāʼīyah), sometimes also known as Sabianism, is a monotheistic, Gnostic, and ethnic religion.[119][120]: 1  Mandaeans consider Adam, Seth, Noah, Shem and John the Baptist to be prophets, with Adam being the founder of the religion and John being the greatest and final prophet.[121]: 45  The Mandaeans believe in one God, commonly named Hayyi Rabbi, meaning 'The Great Life' or 'The Great Living God'.[122] The Mandaeans speak a dialect of Eastern Aramaic known as Mandaic. The name 'Mandaean' comes from the Aramaic manda, meaning "knowledge", as does the Greek gnosis.[123][124] The term 'Sabianism' is derived from the Sabians (Arabic: الصابئة, al-Ṣābiʾa), a mysterious religious group mentioned three times in the Quran alongside the Jews, the Christians and the Zoroastrians as a 'people of the book', and whose name was historically claimed by the Mandaeans as well as by several other religious groups in order to gain the legal protection (dhimma) offered by Islamic law.[125] Mandaeans recognize God to be the eternal creator of all, the one and only in domination, who has no partner.[126]
God in the Baháʼí Faith is taught to be the Imperishable, uncreated Being Who is the source of existence, too great for humans to fully comprehend. Humans' primitive understanding of God is achieved through his revelations via his divine intermediary Manifestations.[127][128] In the Baháʼí Faith, Christian doctrines such as the Trinity are seen as compromising the Baháʼí view that God is single and has no equal,[129] and the very existence of the Baháʼí Faith is a challenge to the Islamic doctrine of the finality of Muhammad's revelation.[130]
God in the Baháʼí Faith communicates to humanity through divine intermediaries, known as Manifestations of God.[131] These Manifestations establish religion in the world.[128] It is through these divine intermediaries that humans can approach God, and through them God brings divine revelation and law.[132]
The Oneness of God is one of the core teachings of the Baháʼí Faith. The obligatory prayers in the Baháʼí Faith involve explicit monotheistic testimony.[133][134] God is the imperishable, uncreated being who is the source of all existence.[135] He is described as "a personal God, unknowable, inaccessible, the source of all Revelation, eternal, omniscient, omnipresent and almighty".[136][137] Although transcendent and inaccessible directly, his image is reflected in his creation. The purpose of creation is for the created to have the capacity to know and love its creator.[138] God communicates his will and purpose to humanity through intermediaries, known as Manifestations of God, who are the prophets and messengers that have founded religions from prehistoric times up to the present day.[131]
Rastafari, sometimes termed Rastafarianism, is classified as both a new religious movement and social movement. It developed in Jamaica during the 1930s. It lacks any centralised authority and there is much heterogeneity among practitioners, who are known as Rastafari, Rastafarians, or Rastas.
Rastafari refer to their beliefs, which are based on a specific interpretation of the Bible, as "Rastalogy". Central is a monotheistic belief in a single God—referred to as Jah—who partially resides within each individual. The former emperor of Ethiopia, Haile Selassie, is given central importance. Many Rastas regard him as an incarnation of Jah on Earth and as the Second Coming of Christ. Others regard him as a human prophet who fully recognised the inner divinity within every individual.
Faravahar (or Ferohar) is one of the primary symbols of Zoroastrianism, believed to be the depiction of a Fravashi (guardian spirit).
Zoroastrianism combines cosmogonic dualism and eschatological monotheism, which makes it unique among the religions of the world. Whether Zoroastrians are monotheistic is contested, owing to the presence of Angra Mainyu and the existence of worshipped lesser divinities such as Anahita.[139][140][ε]
Zoroastrianism is considered by some to be a monotheistic religion,[141] but this is contested by scholars and by Zoroastrians themselves. Zoroastrianism is often regarded[142] as dualistic, duotheistic or bitheistic for its belief in the hypostasis of the ultimately good Ahura Mazda (Wise Lord) and the ultimately evil Angra Mainyu (destructive spirit). Zoroastrianism was once one of the largest religions on Earth, as the official religion of the Persian Empire. Some scholars[who?] credit the Zoroastrians ("Parsis" or "Zartoshtis") with being among the first monotheists and with having influenced other world religions. Gathered statistics estimate the number of adherents at between 100,000 and 200,000,[143] with adherents living in many regions, including South Asia.
God in Yazidism created the world and entrusted it into the care of seven Holy Beings, known as Angels.[144][145][146] The Yazidis believe in a divine Triad.[144][146][147] The original, hidden God of the Yazidis is considered to be remote and inactive in relation to his creation, except to contain and bind it together within his essence.[144] His first emanation is the Angel Melek Taûs (Tawûsê Melek), who functions as the ruler of the world and leader of the other Angels.[144][146][147] The second hypostasis of the divine Triad is the Sheikh 'Adī ibn Musafir. The third is Sultan Ezid. These are the three hypostases of the one God. The identity of these three is sometimes blurred, with Sheikh 'Adī considered to be a manifestation of Tawûsê Melek and vice versa; the same also applies to Sultan Ezid.[144] Yazidis are called Miletê Tawûsê Melek ("the nation of Tawûsê Melek").[148]
God is referred to by Yazidis as Xwedê, Xwedawend, Êzdan, and Pedsha ('King'), and, less commonly, Ellah and Heq.[149][150][145][144][151] According to some Yazidi hymns (known as Qewls), God has 1,001 names, or 3,003 names according to other Qewls.[152][153]
Aboriginal Australians are typically described as polytheistic in nature.[154] Although some researchers shy from referring to Dreamtime figures as "gods" or "deities", they are broadly described as such for the sake of simplicity.[155]
In Southeastern Australian cultures, the sky father Baiame is perceived as the creator of the universe (though this role is sometimes taken by other gods like Yhi or Bunjil) and, at least among the Gamilaraay, traditionally revered above other mythical figures.[156] Equating him with the Christian god is common among both missionaries and modern Christian Aboriginals.[157]
The Yolngu had extensive contact with the Makassans and adopted religious practises inspired by those of Islam. The god Walitha'walitha is based on Allah (specifically, with the wa-Ta'ala suffix), but while this deity had a role in funerary practises it is unclear whether it was "Allah-like" in terms of functions.[158]
The religion of the Andamanese peoples has at times been described as "animistic monotheism", believing foremost in a single deity, Paluga, who created the universe.[159] However, Paluga is not worshipped, and anthropomorphic personifications of natural phenomena are also known.[160]
Hindu views are broad and range from monism, through pantheism and panentheism (alternatively called monistic theism by some scholars) to monotheism and even atheism. Hinduism cannot be said to be purely polytheistic. Hindu religious leaders have repeatedly stressed that while God's forms are many and the ways to communicate with him are many, God is one. The puja of the murti is a way to communicate with the abstract one god (Brahman) which creates, sustains and dissolves creation.[165]
When Krishna is recognized to be Svayam Bhagavan, it can be understood that this is the belief of Gaudiya Vaishnavism,[170] the Vallabha Sampradaya,[171] and the Nimbarka Sampradaya, where Krishna is accepted to be the source of all other avatars, and the source of Vishnu himself. This belief is drawn primarily "from the famous statement of the Bhagavatam"[172] (1.3.28).[173] A differing theological concept is that of Krishna as an avatar of Narayana or Vishnu. However, although it is usual to speak of Vishnu as the source of the avataras, this is only one of the names of the God of Vaishnavism, who is also known as Narayana, Vasudeva and Krishna, and behind each of those names is a divine figure with attributed supremacy in Vaishnavism.[174]
The Nyaya school of Hinduism has made several arguments for a monotheistic view. The Naiyanikas have argued that such a god can only be one. In the Nyaya Kusumanjali, this is discussed against the proposition of the Mimamsa school that there were many demigods (devas) and sages (rishis) in the beginning, who wrote the Vedas and created the world. Nyaya says that:
[If they assume such] omniscient beings, those endowed with the various superhuman faculties of assuming infinitesimal size, and so on, and capable of creating everything, then we reply that the law of parsimony bids us assume only one such, namely Him, the adorable Lord. There can be no confidence in a non-eternal and non-omniscient being, and hence it follows that according to the system which rejects God, the tradition of the Veda is simultaneously overthrown; there is no other way open.[citation needed]
In other words, Nyaya says that the polytheist would have to give elaborate proofs for the existence and origin of his several celestial spirits, none of which would be logical, and that it is more logical to assume one eternal, omniscient god.[182]
Many other Hindus, however, view polytheism as far preferable to monotheism. The famous Hindu revitalist leader Ram Swarup, for example, points to the Vedas as being specifically polytheistic,[183] and states that, "only some form of polytheism alone can do justice to this variety and richness."[184]
I had an occasion to read the typescript of a book [Ram Swarup] had finished writing in 1973. It was a profound study of Monotheism, the central dogma of both Islam and Christianity, as well as a powerful presentation of what the monotheists denounce as Hindu Polytheism. I had never read anything like it. It was a revelation to me that Monotheism was not a religious concept but an imperialist idea. I must confess that I myself had been inclined towards Monotheism till this time. I had never thought that a multiplicity of Gods was the natural and spontaneous expression of an evolved consciousness.[185]
Sikhi is a monotheistic[186][187] and a revealed religion.[188]
God in Sikhi is called Akal Purakh (which means "the true immortal") or Vāhigurū the Primal being. However, other names like Ram, Allah etc. are also used to refer to the same god, who is shapeless, timeless, and sightless: niraṅkār, akaal, and alakh. Sikhi presents a unique perspective where God is present (sarav viāpak) in all of its creation and does not exist outside of its creation. God must be seen from "the inward eye", or the "heart". Sikhs follow the Aad Guru Granth Sahib and are instructed to meditate on the Naam (Name of God - Vāhigurū) to progress towards enlightenment, as its rigorous application permits the existence of communication between God and human beings.[189]
The word "ੴ" ("Ik ōaṅkār") has two components. The first is ੧, the digit "1" in Gurmukhi, signifying the singularity of the creator; the second, ōaṅkār, denotes the Supreme Being. Together the word means: "One Universal creator God".
It is often said that the 1430 pages of the Guru Granth Sahib are all expansions on the Mul Mantra. Although the Sikhs have many names for God, some derived from Islam and Hinduism, they all refer to the same Supreme Being.
The Sikh holy scriptures refer to the One God who pervades the whole of space and is the creator of all beings in the universe. The following quotation from the Guru Granth Sahib highlights this point:
Chant, and meditate on the One God, who permeates and pervades the many beings of the whole Universe. God created it, and God spreads through it everywhere. Everywhere I look, I see God. The Perfect Lord is perfectly pervading and permeating the water, the land and the sky; there is no place without Him.
— Guru Granth Sahib, Page 782
However, there is a strong case for arguing that the Guru Granth Sahib teaches monism due to its non-dualistic tendencies.
Sikhs believe that God has been given many names, but they all refer to the One God, VāhiGurū. Sikh holy scripture (Guru Granth Sahib) speaks to all faiths, and Sikhs believe that members of other religions such as Islam, Hinduism and Christianity all worship the same God; the names Allah, Rahim, Karim, Hari, Raam and Paarbrahm are therefore frequently mentioned in the Sikh holy scripture. God in Sikhism is most commonly referred to as Akal Purakh (which means "the true immortal") or Waheguru, the Primal Being.
^Duchesne-Guillemin, Jacques (13 November 2020). "Zoroastrianism (religion)". Encyclopedia Britannica. Archived from the original on 31 December 2021. Retrieved 24 December 2021. Though Zoroastrianism was never, even in the thinking of its founder, as insistently monotheistic as, for instance, Judaism or Islam, it does represent an original attempt at unifying under the worship of one supreme god a polytheistic religion
^ Wells, Colin (2010). "How Did God Get Started?". Arion. 18.2 (Fall). Archived from the original on 2021-05-08. Retrieved 2020-12-26. ...as any student of ancient philosophy can tell you, we see the first appearance of a unitary God not in Jewish scripture, but in the thought of the Greek philosopher Plato...
^ Armstrong, Karen (1994). A History of God: The 4,000-Year Quest of Judaism, Christianity and Islam. New York City, New York: Ballantine Books. ISBN 978-0345384560.
^ Compare: Theissen, Gerd (1985). "III: Biblical Monotheism in an Evolutionary Perspective". Biblical Faith: An Evolutionary Approach. Translated by Bowden, John. Minneapolis: Fortress Press (published 2007). p. 64. ISBN 9781451408614. Retrieved 2017-01-13. Evolutionary interpretations of the history of religion are usually understood to be an explanation of the phenomenon of religion as a result of a continuous development. The model for such development is the growth of living beings which leads to increasingly subtle differentiation and integration. Within such a framework of thought, monotheism would be interpreted as the result of a continuous development from animism, polytheism, henotheism and monolatry to belief in the one and only God. Such a development cannot be proved. Monotheism appeared suddenly, though not without being prepared for.
^ McLaughlin, Elsie (22 September 2017). "The Art of the Amarna Period". World History Encyclopedia. Archived from the original on 2 May 2021. Retrieved 4 July 2020. In Regnal Year 5, the pharaoh dropped all pretense and declared Aten the official state deity of Egypt, directing focus and funding away from the Amun priesthood to the cult of the sun disk. He even changed his name from Amenhotep ('Amun is Satisfied') to Akhenaten ('Effective for the Aten,') and ordered the construction of a new capital city, Akhetaten ('The Horizon of Aten') in the desert. Located at the modern site of Tell el-Amarna, Akhetaten was situated between the ancient Egyptian cities of Thebes and Memphis on the east bank of the Nile.
^The spelling Tengrism is found in the 1960s, e.g. Bergounioux (ed.), Primitive and prehistoric religions, Volume 140, Hawthorn Books, 1966, p. 80.
Tengrianism is a reflection of the Russian term, Тенгрианство. It is reported in 1996 ("so-called Tengrianism") in Shnirelʹman (ed.), Who gets the past?: competition for ancestors among non-Russian intellectuals in Russia, Woodrow Wilson Center Press, 1996, ISBN 978-0-8018-5221-3, p. 31, in the context of the nationalist rivalry over Bulgar legacy. The spellings Tengriism and Tengrianity are later, reported (deprecatingly, in scare quotes) in 2004 in Central Asiatic journal, vol. 48-49 (2004), p. 238 (archived 2023-03-26 at the Wayback Machine). The Turkish term Tengricilik is also found from the 1990s. Mongolian Тэнгэр шүтлэг is used in a 1999 biography of Genghis Khan (Boldbaatar et al., Чингис хаан, 1162-1227, Хаадын сан, 1999, p. 18, archived 2023-04-20 at the Wayback Machine).
^ "There is no doubt that between the 6th and 9th centuries Tengrism was the religion among the nomads of the steppes": András Róna-Tas, Hungarians and Europe in the early Middle Ages: an introduction to early Hungarian history, Central European University Press, 1999, ISBN 978-963-9116-48-1, p. 151 (archived 2023-04-06 at the Wayback Machine).
^ E. Kessler, Dionysian Monotheism in Nea Paphos, Cyprus: "two monotheistic religions, Dionysian and Christian, existed contemporaneously in Nea Paphos during the 4th century C.E. [...] the particular iconography of Hermes and Dionysos in the panel of the Epiphany of Dionysos [...] represents the culmination of a pagan iconographic tradition in which an infant divinity is seated on the lap of another divine figure; this pagan motif was appropriated by early Christian artists and developed into the standardized icon of the Virgin and Child. Thus the mosaic helps to substantiate the existence of pagan monotheism." (Abstract archived 2008-04-21 at the Wayback Machine)
^ "Monotheism" (archived 2022-04-12 at the Wayback Machine), My Jewish Learning: "Many critical scholars think that the interval between the Exodus and the proclamation of monotheism was much longer. Outside of Deuteronomy the earliest passages to state that there are no gods but the Lord are in poems and prayers attributed to Hannah and David, one and a half to two and a half centuries after the Exodus at the earliest. Such statements do not become common until the seventh century B.C.E., the period to which Deuteronomy is dated by the critical view."
Hence all the power of magic became dissolved; and every bond of wickedness was destroyed, men's ignorance was taken away, and the old kingdom abolished God Himself appearing in the form of a man, for the renewal of eternal life.
— St. Ignatius of Antioch in Letter to the Ephesians, ch.4, shorter version, Roberts-Donaldson translation
We have also as a Physician the Lord our God Jesus the Christ the only-begotten Son and Word, before time began, but who afterwards became also man, of Mary the virgin. For 'the Word was made flesh.' Being incorporeal, He was in the body; being impassible, He was in a passable body; being immortal, He was in a mortal body; being life, He became subject to corruption, that He might free our souls from death and corruption, and heal them, and might restore them to health, when they were diseased with ungodliness and wicked lusts
— St. Ignatius of Antioch in Letter to the Ephesians, ch.7, shorter version, Roberts-Donaldson translation
The Church, though dispersed throughout the whole world, even to the ends of the earth, has received from the apostles and their disciples this faith: ...one God, the Father Almighty, Maker of heaven, and earth, and the sea, and all things that are in them; and in one Christ Jesus, the Son of God, who became incarnate for our salvation; and in the Holy Spirit, who proclaimed through the prophets the dispensations of God, and the advents, and the birth from a virgin, and the passion, and the resurrection from the dead, and the ascension into heaven in the flesh of the beloved Christ Jesus, our Lord, and His manifestation from heaven in the glory of the Father 'to gather all things in one,' and to raise up anew all flesh of the whole human race, in order that to Christ Jesus, our Lord, and God, and Savior, and King, according to the will of the invisible Father, 'every knee should bow, of things in heaven, and things in earth, and things under the earth, and that every tongue should confess; to him, and that He should execute just judgment towards all...'
^"Islamic Practices". Universal Life Church Ministries. Archived from the original on 2016-03-07. Retrieved 2016-01-20. It is the Islamic belief that Christianity is not monotheistic, as it claims, but rather polytheistic with the trinity-the father, son and the Holy Ghost.
^Lesson 10: Three Persons are Subsistent RelationsArchived 2017-07-31 at the Wayback Machine, International Catholic University: "The fatherhood constitutes the Person of the Father, the sonship constitutes the Person of the Son, and the passive spiration constitutes the Person of the Holy Spirit. But in God "everything is one where there is no distinction by relative opposition." Consequently, even though in God there are three Persons, there is only one consciousness, one thinking and one loving. The three Persons share equally in the internal divine activity because they are all identified with the divine essence. For, if each divine Person possessed his own distinct and different consciousness, there would be three gods, not the one God of Christian revelation. So you will see that in this regard there is an immense difference between a divine Person and a human person."
^TrinityArchived 2021-04-30 at the Wayback Machine, Britannica: "The Council of Nicaea in 325 stated the crucial formula for that doctrine in its confession that the Son is “of the same substance [homoousios] as the Father,” even though it said very little about the Holy Spirit. Over the next half century, Athanasius defended and refined the Nicene formula, and, by the end of the 4th century, under the leadership of Basil of Caesarea, Gregory of Nyssa, and Gregory of Nazianzus (the Cappadocian Fathers), the doctrine of the Trinity took substantially the form it has maintained ever since. It is accepted in all of the historic confessions of Christianity, even though the impact of the Enlightenment decreased its importance."
^ Omarkhali, Khanna (2017). The Yezidi religious textual tradition, from oral to written: categories, transmission, scripturalisation, and canonisation of the Yezidi oral religious texts: with samples of oral and written religious texts and with audio and video samples on CD-ROM. Harrassowitz Verlag. ISBN 978-3-447-10856-0. OCLC 994778968.
^ Swaminarayan bicentenary commemoration volume, 1781-1981. p. 154: ...Shri Vallabhacharya [and] Shri Swaminarayan... Both of them designate the highest reality as Krishna, who is both the highest avatara and also the source of other avataras. To quote R. Kaladhar Bhatt in this context: "In this transcendental devotion (Nirguna Bhakti), the sole Deity and only" is Krishna. New Dimensions in Vedanta Philosophy - Page 154 (archived 2023-04-20 at the Wayback Machine), Sahajānanda, Vedanta. 1981
^ Dimock Jr, E.C.; Dimock, E.C. (1989). The Place of the Hidden Moon: Erotic Mysticism in the Vaisnava-Sahajiya Cult of Bengal. University of Chicago Press. p. 132 (archived 2023-04-20 at the Wayback Machine).
^ Flood, Gavin D. (1996). An introduction to Hinduism. Cambridge, UK: Cambridge University Press. p. 341. ISBN 0-521-43878-0. Retrieved 2008-04-21. "Early Vaishnava worship focuses on three deities who become fused together, namely Vasudeva-Krishna, Krishna-Gopala, and Narayana, who in turn all become identified with Vishnu. Put simply, Vasudeva-Krishna and Krishna-Gopala were worshiped by groups generally referred to as Bhagavatas, while Narayana was worshipped by the Pancaratra sect."
^ Matchett, Freda (2000). Krsna, Lord or Avatara? the relationship between Krsna and Visnu: in the context of the Avatara myth as presented by the Harivamsa, the Visnupurana and the Bhagavatapurana. Surrey: Routledge. p. 4. ISBN 0-7007-1281-X.
^Goel, Sita Ram (1987). Defence of Hindu Society. New Delhi, India: Voice of India. Archived from the original on 2016-03-03. Retrieved 2011-08-23. "In the Vedic approach, there is no single God. This is bad enough. But the Hindus do not have even a supreme God, a fuhrer-God who presides over a multiplicity of Gods." – Ram Swarup
https://www.bbc.co.uk/religion/religions/hinduism/concepts/concepts_1.shtml
Religions - Hinduism: Hindu concepts - BBC
Atman
Atman means 'eternal self'. The atman refers to the real self beyond ego or false self. It is often referred to as 'spirit' or 'soul' and indicates our true self or essence which underlies our existence.
There are many interesting perspectives on the self in Hinduism ranging from the self as eternal servant of God to the self as being identified with God. The understanding of the self as eternal supports the idea of reincarnation in that the same eternal being can inhabit temporary bodies.
The idea of atman entails the idea of the self as a spiritual rather than material being and thus there is a strong dimension of Hinduism which emphasises detachment from the material world and promotes practices such as asceticism. Thus it could be said that in this world, a spiritual being, the atman, has a human experience rather than a human being having a spiritual experience.
Dharma
Dharma is an important term in Indian religions. In Hinduism it means 'duty', 'virtue', 'morality', even 'religion' and it refers to the power which upholds the universe and society. Hindus generally believe that dharma was revealed in the Vedas although a more common word there for 'universal law' or 'righteousness' is rita. Dharma is the power that maintains society, it makes the grass grow, the sun shine, and makes us moral people or rather gives humans the opportunity to act virtuously.
But acting virtuously does not mean precisely the same for everyone; different people have different obligations and duties according to their age, gender, and social position. Dharma is universal but it is also particular and operates within concrete circumstances. Each person therefore has their own dharma known as sva-dharma. What is correct for a woman might not be for a man or what is correct for an adult might not be for a child.
The importance of sva-dharma is illustrated well by the Bhagavad Gita. This text, set before the great battle of the Mahabharata, depicts the hero Arjuna riding in his chariot driven by his charioteer Krishna between the great armies. The warrior Arjuna questions Krishna about why he should fight in the battle. Surely, he asks, killing one's relatives and teachers is wrong and so he refuses to fight.
Krishna assures him that this particular battle is righteous and he must fight as his duty or dharma as a warrior. Arjuna's sva-dharma was to fight in the battle because he was a warrior, but he must fight with detachment from the results of his actions and within the rules of the warriors' dharma. Indeed, not to act according to one's own dharma is wrong and called adharma.
Correct action in accordance with dharma is also understood as service to humanity and to God. The idea of what has become known as sanatana dharma can be traced back to the puranas - texts of antiquity. Those who adhere to this idea of one's eternal dharma or constitution, claim that it transcends other mundane dharmas - that it is the para dharma, the ultimate dharma of the self. It is often associated with bhakti movements, who link an attitude of eternal service to a personal deity.
Varna
Varna
An important idea that developed in classical Hinduism is that dharma refers especially to a person's responsibility regarding class (varna) and stage of life (ashrama). This is called varnashrama-dharma. In Hindu history the highest class, the Brahmins, adhered to this doctrine. The class system is a model or ideal of social order that first occurs in the oldest Hindu text, the Rig Veda, and the present-day caste (jati) system may be rooted in this. The four classes are:
Brahmans or Brahmins - the intellectuals and the priestly class who perform religious rituals
Kshatriya (nobles or warriors) - who traditionally had power
Vaishyas (commoners or merchants) - ordinary people who produce, farm, trade and earn a living
Shudras (workers) - who traditionally served the higher classes, including labourers, artists, musicians, and clerks
People in the top three classes are known as 'twice born' because they have been born from the womb and secondly through initiation in which boys receive a sacred thread as a symbol of their high status. Although usually considered an initiation for males it must be noted that there are examples of exceptions to this rule, where females receive this initiation.
The twice born traditionally could go through four stages of life or ashramas. The ashrama system is as follows:
Brahmacharya - 'celibate student' in which the twice born male studies the sacred texts under a teacher
Grihastha - 'householder' in which the twice born male can experience the human purposes (purushartha) of responsibility, wealth, and sexual pleasure
Vanaprastha - 'hermit' or 'wilderness dweller' in which the twice born male retires from life in the world to take up pilgrimage and religious observances along with his wife
Samnyasa - 'renunciation' in which the twice born gives up the world, takes on a saffron robe or, in some sects, goes naked, with a bowl and a staff to seek moksha (liberation) or develop devotion
Correct action in accordance with dharma is also understood as service to humanity and to God. The idea of what has become known as sanatana dharma can be traced back to the puranas. Those who adhere to this idea, addressing one’s eternal dharma or constitution, claim that it transcends other mundane dharmas – that it is the para dharma, the ultimate dharma. It is often associated with bhakti movements, who propose that we are all eternal servants of a personal Deity, thus advocating each act, word, and deed to be acts of devotion. In the 19th Century the concept of sanatana dharma was used by some groups to advocate a unified view of Hinduism.
Karma and Samsara
Karma and Samsara
Karma is a Sanskrit word whose literal meaning is 'action'. It refers to the law that every action has an equal reaction either immediately or at some point in the future. Good or virtuous actions, actions in harmony with dharma, will have good reactions or responses and bad actions, actions against dharma, will have the opposite effect.
In Hinduism karma operates not only in this lifetime but across lifetimes: the results of an action might only be experienced after the present life in a new life.
Hindus believe that human beings can create good or bad consequences for their actions and might reap the rewards of action in this life, in a future human rebirth or reap the rewards of action in a heavenly or hell realm in which the self is reborn for a period of time.
This process of reincarnation is called samsara, a continuous cycle in which the soul is reborn over and over again according to the law of action and reaction. At death many Hindus believe the soul is carried by a subtle body into a new physical body which can be a human or non-human form (an animal or divine being). The goal of liberation (moksha) is to make us free from this cycle of action and reaction, and from rebirth.
Purushartha
Purushartha
Hinduism developed a doctrine that life has different goals according to a person's stage of life and position. These goals became codified in the 'goals of a person' or 'human goals', the purusharthas, especially in sacred texts about dharma called 'dharma shastras' of which the 'Laws of Manu' is the most famous. In these texts three goals of life are expressed, namely virtuous living or dharma, profit or worldly success, and pleasure, especially sexual pleasure as a married householder and more broadly aesthetic pleasure. A fourth goal of liberation (moksha) was added at a later date. The purusharthas express an understanding of human nature, that people have different desires and purposes which are all legitimate in their context.
Over the centuries there has been discussion about which goal was most important. Towards the end of the Mahabharata (Shantiparvan 12.167) there is a discussion about the relative importance of the three goals of dharma, profit and pleasure between the Pandava brothers and the wise sage Vidura. Vidura claims that dharma is most important because through it the sages enter the absolute reality, on dharma the universe rests, and through dharma wealth is acquired. One of the brothers, Arjuna, disagrees, claiming that dharma and pleasure rest on profit. Another brother, Bhima, argues for pleasure or desire being the most important goal, as only through desire have the sages attained liberation. This discussion recognises the complexity and varied nature of human purposes and meanings in life.
Brahman and God
Brahman
Brahman is a Sanskrit word which refers to a transcendent power beyond the universe. As such, it is sometimes translated as 'God' although the two concepts are not identical. Brahman is the power which upholds and supports everything. According to some Hindus this power is identified with the self (atman) while others regard it as distinct from the self.
Most Hindus agree that Brahman pervades everything although they do not worship Brahman. Some Hindus regard a particular deity or deities as manifestations of Brahman.
God
Most Hindus believe in God but what this means varies in different traditions. The Sanskrit words Bhagavan and Ishvara mean 'Lord' or 'God' and indicate an absolute reality who creates, sustains and destroys the universe over and over again. It is too simplistic to define Hinduism as belief in many gods or 'polytheism'. Most Hindus believe in a Supreme God, whose qualities and forms are represented by the multitude of deities which emanate from him. God, being unlimited, can have unlimited forms and expressions.
God can be approached in a number of ways and a devoted person can relate to God as a majestic king, as a parent figure, as a friend, as a child, as a beautiful woman, or even as a ferocious Goddess. Each person can relate to God in a particular form, the ishta devata or desired form of God. Thus, one person might be drawn towards Shiva, another towards Krishna, and another towards Kali. Many Hindus believe that all the different deities are aspects of a single, transcendent power.
In the history of Hinduism, God is conceptualised in different ways, as an all knowing and all pervading spirit, as the creator and force within all beings, their 'inner controller' (antaryamin) and as wholly transcendent. There are two main ideas about Bhagavan or Ishvara:
Bhagavan is an impersonal energy. Ultimately God is beyond language and anything that can be said about God cannot capture the reality. Followers of the Advaita Vedanta tradition (based on the teachings of Adi Shankara) maintain that the soul and God are ultimately identical and liberation is achieved once this has been realised. This teaching is called non-dualism or advaita because it claims there is no distinction between the soul and the ultimate reality.
Bhagavan is a person. God can be understood as a supreme person with qualities of love and compassion towards creatures. On this theistic view the soul remains distinct from the Lord even in liberation. The supreme Lord expresses himself through the many gods and goddesses. The theologian Ramanuja (who, like Shankara, belongs to the wider Vedanta tradition) makes a distinction between the essence of God and his energies. We can know the energies of God but not his essence. Devotion (bhakti) is the best way to understand God in this teaching.
For convenience Hindus are often classified into the three most popular Hindu denominations, called paramparas in Sanskrit. These paramparas are defined by their attraction to a particular form of God (called ishta or devata):
Vaishnavas focus on Vishnu and his incarnations (avatara, avatars). The Vaishanavas believe that God incarnates into the world in different forms such as Krishna and Rama in order to restore dharma. This is considered to be the most popular Hindu denomination.
Shaivas focus on Shiva, particularly in his form of the linga although other forms such as the dancing Shiva are also worshipped. The Shaiva Siddhanta tradition believes that Shiva performs five acts of creation, maintenance, destruction, concealing himself, revealing himself through grace.
Shaktas focus on the Goddess in her gentle forms such as Lakshmi, Parvati, and Sarasvati, or in her ferocious forms such as Durga and Kali.
Guru
Guru
The terms guru and acharya refer to a teacher or master of a tradition. The basic meaning is of a teacher who teaches through example and conveys knowledge and wisdom to his disciples. The disciple in turn might become a teacher and so the lineage continues through the generations. One story that captures the spirit of the teacher is that a mother asks the teacher to stop her son eating sugar for he eats too much of it. The master tells her to come back in a week. She returns and he tells the child to do as his mother says and the child obeys. Asked by the mother why he delayed for a week, he replied 'a week ago I had not stopped eating sugar!'
Gurus are generally very highly revered and can become the focus of devotion (bhakti) in some traditions. A fundamentally important teaching is that spiritual understanding is conveyed from teacher to disciple through a lineage and when one guru passes away he or she is usually replaced by a successor. One guru could have more than one successor which leads to a multiplication of traditions.
|
'Lord' or 'God' and indicate an absolute reality who creates, sustains and destroys the universe over and over again. It is too simplistic to define Hinduism as belief in many gods or 'polytheism'. Most Hindus believe in a Supreme God, whose qualities and forms are represented by the multitude of deities which emanate from him. God, being unlimited, can have unlimited forms and expressions.
God can be approached in a number of ways and a devoted person can relate to God as a majestic king, as a parent figure, as a friend, as a child, as a beautiful woman, or even as a ferocious Goddess. Each person can relate to God in a particular form, the ishta devata or desired form of God. Thus, one person might be drawn towards Shiva, another towards Krishna, and another towards Kali. Many Hindus believe that all the different deities are aspects of a single, transcendent power.
In the history of Hinduism, God is conceptualised in different ways, as an all knowing and all pervading spirit, as the creator and force within all beings, their 'inner controller' (antaryamin) and as wholly transcendent. There are two main ideas about Bhagavan or Ishvara:
Bhagavan is an impersonal energy. Ultimately God is beyond language and anything that can be said about God cannot capture the reality. Followers of the Advaita Vedanta tradition (based on the teachings of Adi Shankara) maintain that the soul and God are ultimately identical and liberation is achieved once this has been realised. This teaching is called non-dualism or advaita because it claims there is no distinction between the soul and the ultimate reality.
Bhagavan is a person. God can be understood as a supreme person with qualities of love and compassion towards creatures. On this theistic view the soul remains distinct from the Lord even in liberation. The supreme Lord expresses himself through the many gods and goddesses. The theologian Ramanuja (who, like Shankara, belongs to the wider Vedanta tradition) makes a distinction between the essence of God and his energies. We can know the energies of God but not his essence.
|
yes
|
World Religions
|
Do Hindus believe in a single god?
|
no_statement
|
"hindus" do not "believe" in a "single" "god".. hinduism does not promote the belief in a "single" "god".
|
https://www.imb.org/2018/08/10/the-basics-of-hinduism/
|
Do You Know the Basics of Hinduism? - IMB
|
Do You Know the Basics of Hinduism?
Mahatma Gandhi, the famous nonviolent Hindu reformer, explained that Hinduism is not an exclusive religion. Gandhi said, “If a man reaches the heart of his own religion, he has reached the heart of the others too. There is only one God, and there are many paths to him.” Although some ideas unify Hinduism, it is an extremely tolerant religion that allows its followers full freedom to choose their own belief system and way of life.
It’s rare that two Hindus believe exactly the same thing. What follows is a general summary of Hinduism, but it’s always best to ask individuals what they personally believe.
What Do Hindus Believe about God?
Hinduism has traditionally been considered polytheistic—the worship of many gods—but may better be described as henotheistic—the worship of one particular god without disbelieving in the existence of others. Hinduism recognizes up to 333 million gods, but many Hindus believe this vast number represents the infinite forms of god—god is in everyone, god is in everything.
Many Hindus believe in and worship three gods that make up the Hindu “trinity”: Brahma the creator of the universe, Vishnu the preserver of the universe, and Shiva the destroyer of the universe. These gods, along with the other millions of deities, are considered manifestations of either one supreme god or a single, transcendent power called Brahman (not to be confused with Brahmins, the priestly social class). Many Hindus would even say Jesus was a manifestation of one of their gods.
No matter what form of Hinduism they follow, most Hindus are also active animists. They attempt to appease good and bad spirits by worshiping at auspicious times, studying horoscopes, and wearing amulets to guard against diseases and evil.
Hindu Holy Texts
Many Hindu practices today somewhat rely on the spiritual literature and authority of the Vedas—texts of sacred truth revealed from an absolute power to the inhabitants of northern India. The Sanskrit texts that make up the Vedas were composed and orally transmitted by ancient poets and sages as early as 1700 BC. However, many people neither read, adhere to, nor know how to interpret these holy texts. High-caste Brahmins—members of the priestly social class by birth—have closely guarded knowledge of the Vedas to preserve their dominant position in society. Therefore, many Hindus instead choose to follow family traditions and guidance provided by their spiritual teachers, called gurus.
Salvation According to Hindus
Hindus believe in the soul, or true self, called atman. According to Hindus, the soul goes through reincarnation—a rebirth of the soul into a new body after death. Life, birth, death, and rebirth is an endless cycle called samsara. Rebirth is affected by karma—the result of deeds or actions—in the present life.
There is no concept of sin in Hinduism as it is perceived in Western thought. Instead, there is the law of karma that says every good thought, word, or deed affects the next life favorably while every bad thought, word, or deed leads to suffering in the next life. The law of karma does not allow for the possibility of forgiveness but only the accumulation of inescapable consequences—good or bad, according to right or wrong action. Karma does not affect a Hindu’s relationship with the universal power, Brahman. Whether a person’s karma is good or bad has no impact on their intrinsic oneness with Brahman.
Individuals are born into a particular caste depending on their actions in the previous life. Good karma leads to rebirth in a higher caste and bad karma to a lower caste. One can only become a member of a different caste through death and rebirth. Eventually, the soul will attain moksha—alternately called salvation, enlightenment, or liberation from rebirth—and become one with the universal power, Brahman.
What Is the Purpose of Life for a Hindu?
Hindus have four specific goals in human life.
Dharma: pursuing virtuous behavior and fulfilling one’s duty in life
Artha: pursuing and acquiring success and wealth
Kama: pursuing pleasure in all its forms
Moksha: pursuing salvation
The first three goals of human life deal mainly with the quality of life and are very important to Hindus. But moksha is arguably the most significant goal. Hinduism offers at least three paths to pursue moksha: the way of ritual and action, the way of knowledge and meditation, and the way of devotion. Hindus usually prioritize or adopt one path over the others.
The way of ritual and action claims that performing one’s duty in this life is the sacred and moral responsibility of the individual. Each caste has a duty or function that helps to sustain society as a whole. If someone deviates from fulfilling his or her function, it is interpreted as bringing disaster to both the individual and to society. Similar to Buddhism, the way of ritual and action focuses on detachment from desire in order to attain salvation. This path is primarily followed by high-caste Hindus, such as Brahmins.
The way of knowledge and meditation says that humans are trapped in an illusion that keeps us from realizing we are a part of god. When this illusion is dispelled, we will reach salvation by becoming one with the ultimate reality. Followers of this path practice yoga, meditation, and are also encouraged to study philosophy in their pursuit to dispel illusion. Modern-day gurus of this path claim they are god and suggest all of us can be god too. This path is followed mainly by the intellectual elite, and its philosophy has been widely embraced by non-Hindus with New Age beliefs.
The way of devotion is characterized by acts of devotion to one’s personal god in the hopes of receiving mercy and instant salvation. These acts range from ascetic practices, singing hymns, and repeating the name of god (the word om), to pilgrimages and sacrifices. This path is open to all—even low-castes, outcasts, women, and children.
Holy Cows, Vegetarianism, and Yoga
Despite the broad spectrum of faith and practice, Hinduism has several common cultural elements: the veneration of cows, vegetarianism, and yoga.
During the Vedic Period of northern India, the cow was a symbol of wealth and prosperity, as well as one of the animals offered in ritual religious sacrifices. But over time, possibly through the influence of Buddhism and Jainism, animal sacrifices waned, and the cow emerged as a sacred symbol to Hindus. The five products of the cow—milk, curds, butter, urine, and dung—are used in purifying and healing rituals. One popular Hindu god, Krishna, spent his early life as a cowherd, further elevating the status of the cow. In some states of India, there are bans and strict regulations concerning the slaughter of cows and eating of beef.
Many rules concerning food govern a Hindu’s life—when to eat, what to eat, and who can prepare food for whom. The preparation and consumption of food are central to a Hindu’s notion of ritual purity and, ultimately, their liberation from rebirth. Not all Hindus are vegetarians, but vegetarianism is seen as an indicator of purity. Many high-caste Hindus are vegetarians.
In Hinduism, yoga is a discipline to transform the individual and become one with the universal power. It takes many different forms according to the traditions or methods under which it is practiced. At its most basic, yoga consists of a particular set of techniques, usually including meditation, to control the body, the breath, and the mind. The practice of using yoga to alter one’s conscious state and suppress the senses has roots going back to the Vedic Period. By contrast, yoga has become popular in the West as a way to achieve physical and mental fitness.
|
Do You Know the Basics of Hinduism?
Mahatma Gandhi, the famous nonviolent Hindu reformer, explained that Hinduism is not an exclusive religion. Gandhi said, “If a man reaches the heart of his own religion, he has reached the heart of the others too. There is only one God, and there are many paths to him.” Although some ideas unify Hinduism, it is an extremely tolerant religion that allows its followers full freedom to choose their own belief system and way of life.
It’s rare that two Hindus believe exactly the same thing. What follows is a general summary of Hinduism, but it’s always best to ask individuals what they personally believe.
What Do Hindus Believe about God?
Hinduism has traditionally been considered polytheistic—the worship of many gods—but may better be described as henotheistic—the worship of one particular god without disbelieving in the existence of others. Hinduism recognizes up to 333 million gods, but many Hindus believe this vast number represents the infinite forms of god—god is in everyone, god is in everything.
Many Hindus believe in and worship three gods that make up the Hindu “trinity”: Brahma the creator of the universe, Vishnu the preserver of the universe, and Shiva the destroyer of the universe. These gods, along with the other millions of deities, are considered manifestations of either one supreme god or a single, transcendent power called Brahman (not to be confused with Brahmins, the priestly social class). Many Hindus would even say Jesus was a manifestation of one of their gods.
No matter what form of Hinduism they follow, most Hindus are also active animists. They attempt to appease good and bad spirits by worshiping at auspicious times, studying horoscopes, and wearing amulets to guard against diseases and evil.
Hindu Holy Texts
Many Hindu practices today somewhat rely on the spiritual literature and authority of the Vedas—texts of sacred truth revealed from an absolute power to the inhabitants of northern India.
|
yes
|
World Religions
|
Do Hindus believe in a single god?
|
no_statement
|
"hindus" do not "believe" in a "single" "god".. hinduism does not promote the belief in a "single" "god".
|
https://www.waht.nhs.uk/en-GB/Our-Services1/Non-Clinical-Services1/Chapel/Faith-and-Culture/Hinduism/
|
Hinduism
|
Hinduism
Introduction
Hinduism originated near the river Indus over 5,000 years ago, although elements of the faith are much older. The Hindu tradition has no founder and is best understood as a group of closely connected religious traditions rather than a single religion. It represents a complete way of life and is practised by over 900 million followers. Eighty per cent of the population of India is Hindu. Hindus believe in one God and worship that one God under many manifestations, deities or images. Examples of Hindu deities are Krishna, Shiva, Rama and Durga.
Hindus believe that existence is a cycle of birth, death and rebirth, governed by karma (a complex belief in cause and effect). Hindus believe that all prayers addressed to any form or manifestation will ultimately reach the one God. Hinduism does not prescribe particular dogmas; rather it asks individuals to worship God according to their own belief. It therefore allows a great deal of freedom in matters of faith and worship.
Attitudes to healthcare staff and illness
Most Hindu patients have a positive attitude towards healthcare staff and are willing to seek medical help and advice when sick. Many Hindu patients may be using Ayurvedic medicine and, as this may involve the use of herbal remedies, it is important to find out.
Religious practices
Hindus will usually wish to pray twice daily. Where possible they will burn incense and use holy books and prayer beads. Privacy would be appreciated for prayer times.
Diet
Most Hindus are vegetarian. The cow is viewed as a sacred animal so even meat-eating Hindus may not eat beef. Some Hindus will eat eggs, some will not, and some will also refuse onion or garlic; it is best to ask each individual. Dairy produce is acceptable so long as it is free of animal rennet, so for example the only cheese some Hindus will eat may be cottage cheese. It is important to remember that strict vegetarians will be unhappy about eating vegetarian items if they are served from the same plate or with the same utensils as meat.
Fasting
Fasting is a regular feature of the Hindu religion but few Hindus insist on fasting in hospital. Fasting is commonly practised on new moon days and during festivals such as Shivaratri, Saraswati Puja and Durga Puja. Some fasts may only require abstinence from certain foods. At the end of each period of fasting, visitors may bring in prasad* so that the patient can join in the celebration.
* Food that has been blessed
Washing and toilet
Hindus will require water for washing in the same room as the toilet itself. If there is no tap there, or if they have to use a bed-pan, they will be grateful to have a container of water provided. Hindu patients prefer to wash in free-flowing water, rather than sit in a bath. As Indian food is eaten using the fingers, hand washing before and after meals is customary.
Ideas of modesty and dress
A Hindu woman will much prefer a female doctor when being examined or treated. Hindu women should be accommodated in mixed wards only in emergencies. A Hindu woman may find it difficult to accept an X-ray gown because it is short.
Hindu women may wear bangles or a thread and you should not remove them without permission. Some Hindus wear a red spot on their foreheads or scalp, which again should not be removed or washed off without permission.
Death customs
If a Hindu patient is dying in hospital, relatives may wish to bring money and clothes for him or her to touch before they are given to the needy. They will wish to keep a bedside vigil — if the visitors are not allowed to go to the bedside themselves they will be grateful if a nurse can do this for them while they wait. Some relatives will welcome an opportunity to sit with the dying patient and read from a holy book.
After death the body should always be left covered. Sacred objects should not be removed. Relatives will wish to wash the body and put on new clothes before taking it from the hospital. Traditionally the eldest son of the deceased should take a leading part in this, however young he may be. If a post mortem is unavoidable, Hindus will wish all organs to be returned to the body before cremation (or burial for children under five years old).
Birth customs
Relatives will want to make sure the mother has complete rest for 40 days after birth and they will be worried if she has to get up for a bath within the first few days. This attitude is based on the belief that a woman is at her weakest at this time and is very susceptible to chills, backache etc.
If there is a need to separate mother and baby for any reason this should be done tactfully as she may prefer to keep the baby with her at all times.
Some Hindus consider it crucial to record the time of birth (to the minute) so that a Hindu priest can cast the child's horoscope accurately.
Family planning
There is no objection to family planning from the religious point of view. However, there may be strong social pressures on women to go on having babies, particularly if no son has yet been born, and you should involve her husband in any discussion of family planning.
Blood transfusions, transplants and organ donation
Most Hindus have no objection to blood transfusions and may receive transplants or donate organs for transplant.
|
Hinduism
Introduction
Hinduism originated near the river Indus over 5,000 years ago, although elements of the faith are much older. The Hindu tradition has no founder and is best understood as a group of closely connected religious traditions rather than a single religion. It represents a complete way of life and is practised by over 900 million followers. Eighty per cent of the population of India is Hindu. Hindus believe in one God and worship that one God under many manifestations, deities or images. Examples of Hindu deities are Krishna, Shiva, Rama and Durga.
Hindus believe that existence is a cycle of birth, death and rebirth, governed by karma (a complex belief in cause and effect). Hindus believe that all prayers addressed to any form or manifestation will ultimately reach the one God. Hinduism does not prescribe particular dogmas; rather it asks individuals to worship God according to their own belief. It therefore allows a great deal of freedom in matters of faith and worship.
Attitudes to healthcare staff and illness
Most Hindu patients have a positive attitude towards healthcare staff and are willing to seek medical help and advice when sick. Many Hindu patients may be using Ayurvedic medicine and, as this may involve the use of herbal remedies, it is important to find out.
Religious practices
Hindus will usually wish to pray twice daily. Where possible they will burn incense and use holy books and prayer beads. Privacy would be appreciated for prayer times.
Diet
Most Hindus are vegetarian. The cow is viewed as a sacred animal so even meat-eating Hindus may not eat beef. Some Hindus will eat eggs, some will not, and some will also refuse onion or garlic; it is best to ask each individual. Dairy produce is acceptable so long as it is free of animal rennet, so for example the only cheese some Hindus will eat may be cottage cheese. It is important to remember that strict vegetarians will be unhappy about eating vegetarian items if they are served from the same plate or with the same utensils as meat.
|
yes
|
Sports
|
Do NBA players intentionally miss free throws?
|
yes_statement
|
"nba" "players" "intentionally" "miss" "free" "throws".. intentional "missed" "free" "throws" are made by "nba" "players".
|
https://realhoopers.com/can-you-miss-free-throw-intentionally/
|
Can You Miss A Free Throw On Purpose? - Realhoopers
|
Can You Miss A Free Throw On Purpose?
During the final minutes of the fourth quarter, both basketball teams execute different strategies to win, especially if it’s a close game. One of the strategies basketball players use in the final minutes of the game is missing a free throw intentionally.
I know it sounds dumb because free throws are free shots. Missing it intentionally is like giving your opponent a free win. However, before we go into the reasons why basketball players do this, let’s first know if it is a hundred percent legal.
Can you miss a free throw on purpose? Yes! Basketball players can miss a free throw intentionally. Players intentionally miss their free throws to give their team a chance to score more points than a free throw would give. This is risky because the opposing team might rebound or get the ball.
You are going to learn more about this topic. We are going to tell you the other different reasons why basketball players miss their free throws intentionally. All the questions related to our topic will be answered. If you are ready, let’s hit it!
It is legal to miss a free throw on purpose. However, there are some cons that you need to consider before executing this. It is a risky strategy because the opposing team might get the rebound first, and you lose your chance of scoring more points than a free throw would give. If that happens, you will realize that one point is better than zero.
We have seen this kind of strategy in the NBA. Players like Manu Ginóbili and Kyrie Irving are experts at missing a free throw on purpose. However, aside from giving their team a chance at an extra point, what are the other reasons why basketball players miss their free throws intentionally?
The first and main reason basketball players miss their free throws intentionally is to give their team a chance at an extra score. The reason is acceptable, and I think missing a free throw for a chance at more points is logical.
Basketball players do this when they need extra points to tie or lead the game, especially during the final minutes of the fourth quarter. They will throw their free throws hard on the rim or on the board for the ball to bounce hard back to them. When they successfully rebounded their missed free throw, they can pass the ball out to the three-points or score an easy layup. Depending on the lead of their opponent.
Some basketball players miss their free throws intentionally for the sake of their stats. If they are one rebound away from getting a triple-double, they will miss their free-throw and try to secure a rebound. Or if they have 38 points and they have one free throw remaining, they will sometimes miss it and shoot two points to make it 40.
The last reason why basketball players miss their free throws is to give their team more lead. Of course, the more lead your team has the more chances of winning. Basketball players miss their free throws intentionally to get a rebound for a chance of scoring more points. They may score two points or three points.
There are many pros and advantages of missing a free throw on purpose a basketball team can get. However, do not forget that there are also some cons or disadvantages it may give. So before you do this, make sure that you are an adept rebounder so that you can secure the ball after you miss a free throw intentionally.
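The cost-benefit logic described above can be sketched as a simple decision rule. This is purely illustrative; the function name, thresholds, and inputs are my own assumptions, not anything a rulebook or coach prescribes:

```python
def should_miss_intentionally(score_diff, seconds_left, free_throws_left):
    """Rough heuristic for the intentional-miss strategy.

    score_diff: our score minus the opponent's (negative = trailing).
    Returns True when chasing an offensive rebound for a 2- or
    3-point play is worth more than the single guaranteed point.
    """
    # Only worth the gamble in the final seconds of a close game.
    if seconds_left > 24:
        return False
    # Down 2 or 3 with one attempt left: making it can't tie the game,
    # but a rebound putback or kick-out three might.
    if free_throws_left == 1 and score_diff in (-2, -3):
        return True
    return False

print(should_miss_intentionally(-3, 5, 1))   # trailing by 3, 5s left -> True
print(should_miss_intentionally(-3, 40, 1))  # too early in the game -> False
```

Real decisions also weigh rebounding matchups and remaining timeouts, which this toy rule ignores.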
Manu Ginóbili missing his free throws intentionally
What Happens If You Airball A Free Throw?
It is inevitable for a basketball player to miss a free throw. It is normal to miss a free throw sometimes. However, when you airball a free throw, it is not normal especially if you are a professional basketball player. NBA players sometimes airball their free throws even though it seems like they are adept basketball players.
There will be a time where you will airball your free throws too. The reason may be because you are tired or have no practice at all. But let’s answer one important question about air balling a free throw. What will happen if a basketball player airballs a free throw? Let’s find out!
What happens if you airball a free throw? When a basketball player airballs a free throw, it will be considered a dead ball. It will give the opposing team possession of the ball.
Can A Basketball Player Dunk A Free Throw?
The answer to this question is no. Basketball players cannot dunk a free throw. It may sound impossible anyway, but even if a player could do it, it would still be illegal. Basketball players must not step over the plane of the free-throw line until the ball touches the rim or the backboard. When they do, the referees will call a violation.
Wilt Chamberlain was one NBA player who used to dunk his free throws. He was the reason why the NCAA and the NBA banned players from crossing the foul line. Wilt had a different way of shooting free throws.
However, some people say that Wilt didn’t actually dunk his free throws; he just jumped toward the rim and laid the ball in. But who knows? Wilt was tall and athletic, so it is possible.
Basketball players cannot dunk their free throw attempts, but they can dunk from the free-throw line. Michael Jordan became well-known for his jumping ability when he jumped from the free-throw line and dunked the basketball during the All-Star dunk contest.
After that, many people start to make a logo and meme on the position of Michael Jordan while dunking. He is a sensational basketball star!
Mike Conley Sr. is another guy who could dunk the basketball from the free-throw line. He was not an NBA player, but as a track and field athlete, his legs were strong enough to jump that high.
I wonder what could Mike Conley Sr. do if he played in the NBA like his son. Maybe he will destroy the rim and end the career of the other NBA players by posterizing them.
Can A Basketball Player Jump On A Free Throw?
Have you ever seen a professional basketball player jumping during his/her free throw attempt? Not yet, right? Some basketball leagues, especially kids’ basketball leagues, allow players to jump during a free throw attempt. However, is this legal everywhere, or only in kids’ leagues?
Yes! It is legal in all basketball leagues! A basketball player can jump on a free throw as long as they don’t cross the line. Basketball players jump on a free throw because they don’t have enough strength yet to throw the ball without jumping. This is common to little kids playing a basketball game.
Professional basketball players do not jump during a free throw because they don’t see any reason to jump. Yes, it is allowed, but they still don’t do it. Professional basketball players only jump during a jump shot so that defenders will have a hard time blocking their shots.
Why Do Basketball Players Miss Their Free Throws?
It is inevitable for basketball players, including professional ones, to miss their free throws. Of course, everybody can have a bad day, right? Name one basketball player in the comment section below who has never had a bad shooting night or a bad game and I will cut one of my fingers.
Every basketball player including you will have a bad day or bad shooting. It may happen tomorrow or today. Basketball players who have a bad night will sometimes miss some of their free throws during the game. I know it is surprising for professional basketball players to miss their free throws. However, let’s check some of the reasons why they miss their free throws.
The reason why some basketball players miss their free throws is probably that they are tired, they have injuries, or they feel pressured. Many people are watching the game, which is why basketball players will feel pressured and will sometimes miss their free throws because of it.
Even me, I feel pressured if many people are watching. It gives me a heavy heart and I lose my concentration. It is inevitable too and it will happen to you too because it is part of the basketball journey.
Feeling pressure is the main reason why basketball players miss their free throws. That is why, in the NBA, you will see many fans shouting when a player from the visiting team shoots his free throws. Fans do this so that the player will lose his focus.
However, you can do something to improve your free throws and lessen your chances of missing. Make sure that you practice free throws every day. It is better if you are under high pressure while practicing so that you will get used to it.
Can You Make A Free Throw Off The Backboard?
Yes! You can hit the backboard or bank shot a free throw. However, not all basketball players do this because most of them are used to shooting the ball directly to the rim. Basketball players who are not accurate on the shooting will bank shot a free throw for a higher chance of sinking in. Again, it is legal to bank-shot your free throw as long as you don’t step over the line or airball it.
Basketball is a game where you need to execute different strategies to win. One of those strategies is missing a free throw on purpose. You can miss a free throw on purpose as long as the ball touches the rim and doesn’t airball. It is legal, and many players use this strategy, especially during the final minutes of a close game.
Do you miss a free throw on purpose? If yes, how and why do you do this? Comment your answers below!
|
yes
|
Sports
|
Do NBA players intentionally miss free throws?
|
yes_statement
|
"nba" "players" "intentionally" "miss" "free" "throws".. intentional "missed" "free" "throws" are made by "nba" "players".
|
https://www.mavs.com/mavs-spurs-pop/
|
Luka racks up 51; Mavericks survive against Spurs, 126-125 - The ...
|
Interactive
Luka racks up 51; Mavericks survive against Spurs, 126-125
SAN ANTONIO – Coach Gregg Popovich knew he was handing reporters a post-Christmas gift-of-a-headline when he said a few days ago: “We’re going to hold Luka under 50. Quote it.”
The San Antonio coach revisited that prediction before the New Year’s Eve meeting between his Spurs and the Mavericks and Luka Dončić.
“I said that, didn’t I? The next day he got 60. Or two days later,” Popovich said pregame. “Unbelievable, this guy.”
And unbelievable is exactly what the Mavericks needed on New Year’s Eve.
Luka got his 50 points against the Spurs. Actually, 51.
And his two free throws with 4.5 seconds left secured the Mavericks’ 126-125 victory over the Spurs at AT&T Center. But it required an exhausting effort and they had to survive another crazy finish.
So did Luka know about Popovich’s claim that he wouldn’t get 50?
“Yeah, I saw,” Dončić said. “I just wanted to get a win.”
But, 51 points feels good, right?
“Yeah, of course,” he said. “It’s not bad.”
And it’s becoming the norm, not the exception. He’s had 50 or more in three of the last five games and has had seven 40-plus games this season.
It’s almost as if Dončić is pulling the Mavericks along with him as they work through a slew of injuries and try to build momentum as the season progresses.
“He’s doing that, no matter if we need that or not,” coach Jason Kidd said. “Everyone else has to do their part when he’s not in the game. And that’s where we have to get better.
“We’re a little bit on the injured side right now, but we got to get a little bit more from our bench. But our bench is being covered by Luka. We can’t expect him to have 50 every night.”
But it’s nice to know it can happen once in a while.
“It keeps the streak going and we keep the excitement at a high for sure late in the game,” Kidd said. “A couple mistakes we can clean up. But it’s not easy to win here. Luka was incredible. He bails us out again.”
The intentionally missed free throw play was a major factor again and this time it almost worked against the Mavericks instead of for them.
The Spurs had a chance to take the lead in the waning seconds but a layup try by Jeremy Sochan was off the mark and then Luka was fouled.
After he made the free throws, the Spurs had their final possession, and the Mavericks fouled before they could hoist a three-pointer. Tre Jones made the first free throw. His intentional miss of the second free throw was just as fortuitous as Luka’s intentional miss earlier in the week against New York. He got the rebound and was fouled. But he made only the first free throw, and Luka grabbed the rebound of the second.
He missed his final two free throws, the second on purpose, to close it out as the Spurs had no timeouts left to advance the ball.
The Mavericks had their sixth consecutive win and moved into fourth place in the Western Conference at 21-16.
The Spurs fell to 12-24.
Dončić was well on his way to cracking 50 points for the third time in the last five games. He had 30 points by halftime.
As Kidd said before the game: “When someone’s going the way he’s going, it’s like a pitcher with a no-hitter. You try to stay out of the way.”
Afterward, it was a time to exhale.
And reflect on another magical Luka night that started with Popovich’s assertion that the Spurs would keep him below 50 points.
“I heard that,” Kidd said. “The way it was going, it looked like Luka was going to have 70. But I think Pop will be happy he held him to 51. He was only off by one point.”
The Mavericks stayed out of Luka’s way enough to get him going early and while they were stuck on equal footing with the Spurs throughout the first half, they nudged ahead in the third quarter, taking a 93-76 advantage.
But Popovich teams aren’t wired to mail it in when the score turns bad.
The Spurs made inroads, closing to within a point on several occasions. And each time, Luka responded, first with a layup, then a bank shot as the Mavericks refused to surrender the lead.
When Christian Wood, who had 25 points, hit a three-pointer, the Mavs had a 119-113 lead.
That also gave Luka nine assists. To say the least, the show Dončić continues to put on is something that everybody is enjoying.
“He’s just a beautiful basketball player,” Popovich said. “The prototype – high basketball IQ, high skill level all rolled into one guy. He’s very special.
“We do the same thing (defensively) that everybody else has tried. And nothing works. So you hope he doesn’t make shots that night.”
Injury update: Josh Green is doing all the non-contact work that he can, but has yet to begin contact drills as he recovers from a sprained right elbow.
“He’s doing non-contact workouts at a very high level,” Kidd said. “The next step will be contact once his symptoms allow him to do that.”
Maxi Kleber (hamstring) and Dorian Finney-Smith (adductor) also were out against the Spurs.
Dončić also showed up on the injury report with left ankle soreness. But he was cleared to play against the Spurs.
“He’s sore,” Kidd said. “He’s played a couple minutes. He’s doing a lot. So he’s sore. So that’s why he’s on the report. His ankle’s sore.”
Luka has played more than 39 minutes in three of the last four games, playing 34 in the last one against Houston.
Teaching the young-uns: When the calendar turned to November, the San Antonio Spurs were 5-2.
Now that it is turning to 2023, they are 12-24.
Popovich said his team is going through some of the same issues that most young, rebuilding teams experience.
“They are playing well. They are not consistent,” he said. “We’ve had those youthful fourth quarter problems at both ends of the court from time to time. But they don’t lack the aggressiveness and the will to play hard and to basically never die. They are a fun group to be around because they have that effort level all the time.
“But youth kicks in in fourth quarters for everybody. Having some success is the only way to get over that. You can practice all you want or talk all you want but unless you are under the lights having a good fourth quarter, that’s when you build that confidence and they become more mature and you look like a Dallas.”
|
yes
|
Sports
|
Do NBA players intentionally miss free throws?
|
yes_statement
|
"nba" "players" "intentionally" "miss" "free" "throws".. intentional "missed" "free" "throws" are made by "nba" "players".
|
https://osgamers.com/frequently-asked-questions/can-you-fake-a-free-throw
|
Can you fake a free throw?
|
Can you fake a free throw?
The free throw shooter shall not purposely fake a free throw attempt. PENALTY: This is a violation by the shooter on all free throw attempts and a double violation should not be called if an opponent violates any free throw rules.
What happens if you pump fake on a free throw?
When a free thrower A-1 “fakes” the release of the ball, it's considered a violation by that player. Team B is awarded a throw-in at the spot nearest to the violation, which is on the end line (either side). A “pump fake” is the obvious type of violation.
Can you fake a throw in basketball?
A fake pass, also referred to as a pass fake, involves offensive action in which a player in possession of the basketball effectively pretends to throw it to a teammate but then keeps the ball to perform another action, which would commonly be an authentic pass for a scoring or playmaking opportunity.
What is the new free throw rule?
The offensive team will now receive a single free throw, which any player on the team can take. In addition, they will also retain possession of the ball. The only exceptions to this rule occur when the game is in the final two minutes of the fourth quarter or during overtime.
NBA Free Throw Pump Fakes | Free Throw Pump Fakes
Has Lebron dunked from the free-throw line?
James made a steal in his own half before gallivanting off to the other end of the court with just one dribble, then taking off from right in front of the free-throw line and dunking it for two points. A genuinely ludicrous moment in the history of the sport.
What violates a free throw in basketball?
If an opponent violates, one point shall be scored and play will continue as after any successful free throw with the official administering the throw-in. If the free throw attempt is not to remain in play, no point can be scored if the violation is by a teammate and the shooter will attempt his next free throw.
Can you intentionally miss a free throw?
Yes, the intentional miss of a free throw is a common play. If you are 3 points behind with very little time left on the clock, you may hit the first free throw and then intentionally miss the second, trying to get the rebound and score 2 more points.
How accurate are free throws in basketball?
The data was recorded. The subjects shot with 57.71% accuracy with the net off and 54.86% accuracy with the net on. The subjects overall, made more shots (were successful) with the net off. With the net on, the standard deviation was lower at 1.01 and higher with the net off at 1.37.
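The percentages above are simply makes divided by attempts. As a quick sanity check, here is a minimal sketch; the raw shot counts below are hypothetical values I chose so they reproduce the quoted percentages, since the study's actual sample size is not given:

```python
def accuracy(makes, attempts):
    # Shooting accuracy as a percentage, rounded to two decimal places.
    return round(100 * makes / attempts, 2)

# Hypothetical counts that reproduce the quoted figures:
print(accuracy(202, 350))  # 57.71 (net off)
print(accuracy(192, 350))  # 54.86 (net on)
```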
What makes a foul a free throw?
Flagrant foul- Violent contact with an opponent. This includes hitting, kicking, and punching. This type of foul results in free throws plus the offense retaining possession of the ball after the free throws.
Can players yell during free throws?
Both waving arms and yelling at the shooter are specifically disallowed. From the NBA Rules: rule 9 I f: During all free throw attempts, no opponent in the game shall disconcert the shooter once the ball is placed at his disposal.
Why did Shaq not practice free throws?
According to the experts, the problem was that Shaquille O'Neal broke his wrist as a child and it never healed properly. Or that he didn't practice enough. That's what Dave Hopla, who has worked as a shooting coach for the Detroit Pistons and now runs clinics around the country teaching the art of shooting, told me.
Has Jordan ever airballed a free-throw?
With a minute left in the 3rd quarter, LBJ blatantly air-balled a free throw. However, this isn't the first time 'The King' has done it. Back in 2019, he did just the same, and Skip Bayless sure called him out for it.
What is the hack a Shaq rule?
Hack-a-Shaq is a basketball defensive strategy used in the National Basketball Association (NBA) that involves committing intentional fouls (originally a clock management strategy) for the purpose of lowering opponents' scoring.
Who first dunked a basketball?
It's believed that the first-ever dunk in organized basketball occurred in 1936 (before that it was all one-legged push shots and layups). Joe Fortenberry, a 6ft 8in Texan, performed one in the Berlin Olympics for the US basketball team on the way to winning the sport's first-ever gold medal.
|
yes
|
Sports
|
Do NBA players intentionally miss free throws?
|
yes_statement
|
"nba" "players" "intentionally" "miss" "free" "throws".. intentional "missed" "free" "throws" are made by "nba" "players".
|
https://thesportsrush.com/nba-news-demarcus-cousins-botches-an-intentional-free-throw-miss-clippers-boogie-cousins-has-a-hysterical-moment-during-the-final-seconds-of-game-4-vs-suns/
|
"DeMarcus Cousins botches an intentional free-throw miss ...
|
DeMarcus Cousins forgets the free-throw rules as he intentionally chucks the ball at the backboard
Cousins has been coming off the bench in his new role with the Clippers. The former Kings star has been averaging 12.9 minutes per game.
Game five of the western conference finals saw the Suns and Clippers struggle from the field. The Clippers shot 32% for the game while the Suns were barely better at 36%.
Advertisement
The final moments of the match turned into a free-throw contest. The Clippers would try to foul the Suns, who made most of their clutch free throws; the Clippers, on the other hand, intentionally tried to miss some free throws to regain possession for a 3-point play.
However, the Clippers had one of the most hilarious moments in recent times during the final seconds of the game that has made Cousins the center of all memes and trolls.
During the final moments of the match with 5.8 seconds remaining, Boogie Cousins intentionally missed a free throw but forgot the rules of the game that resulted in a violation.
Cousins, while shooting the free throw, hit the board to try to regain possession for a 3-point play, since the Clippers trailed 81-79. However, Cousins forgot that for an intentional miss to be legal, the ball must touch the rim of the basket.
Cousins has been great so far for the Clippers. The 4x NBA All-Star has embraced his role of coming off the bench and playing few minutes. Cousins has been averaging 7.8 PPG and 4.5 RPG in the 16-games he has played so far for the clippers on 53.7% shooting from the field and 42.1% from beyond the arc.
Though his botch-up during the recent Game five of the WCF will definitely earn him a spot in the popular segment Shaqtin' a Fool.
Arjun Julka is an NBA author at The SportsRush. Basketball isn’t just a sport for this 26-year-old, who hails from Mumbai. He began watching the sport after stumbling upon a court in his society, helping him identify an undiscovered passion for the game of hoops.
Now an ardent fan, Arjun supports Stephen Curry and the Warriors but also enjoys watching Giannis Antetokounmpo own the paint. When it comes to the GOAT debate, the TSR author feels LeBron James is yet to receive a lot of his due but cannot deny marveling at Michael Jordan’s resume.
|
yes
|
Sports
|
Do NBA players intentionally miss free throws?
|
yes_statement
|
"nba" "players" "intentionally" "miss" "free" "throws".. intentional "missed" "free" "throws" are made by "nba" "players".
|
https://www.cbssports.com/nba/news/final-seconds-of-mavericks-win-over-timberwolves-is-everything-thats-wrong-with-the-end-of-nba-games/
|
Final seconds of Mavericks' win over Timberwolves is everything ...
|
Final seconds of Mavericks' win over Timberwolves is everything that's wrong with the end of NBA games
Can we please stop incentivizing fouling?
The Dallas Mavericks beat the Minnesota Timberwolves 110-108 on Monday night in a game with big playoff-seeding implications, and the end of the game was awful. It shouldn't have been. The game was back and forth down the stretch and had all the makings of a thrilling conclusion.
But no. The NBA believes intentional fouls, intentional misses, parades of free throws and long replays are what the people pay to see instead of potential game-tying/winning shots. Seriously, can the league figure this nonsense out? It's not that hard.
The "should you foul when up 3 in the closing seconds?" debate is as tired as it is definitive. Yes, you should. No, you shouldn't be able to. The Mavericks, based on the stupid rules that currently govern this scenario, played it right on Monday. The Timberwolves had possession, down three, with just over 10 seconds to play, and rather than the clock ticking down to a potential game-tying 3 -- the exact sort of climactic ending that people who pay good money to watch these games yearn for -- Reggie Bullock just grabbed Patrick Beverley before he had a chance to shoot.
That's an intentional foul. Call it as such. Award free throws and possession, and see how long teams keep intentionally fouling. Of course, guys would just start being a bit more discreet with their fouls, graying the area of intentionality. Fine. Any foul, whether on the floor or the shot, that occurs outside the 3-point arc and inside the final 24 seconds of game time results in three free throws. Problem solved.
Unfortunately, that intentional foul on Beverley was only the beginning of a circus sequence only made necessary by this ridiculous rule, or lack thereof, however you want to look at it. After Beverley went to the line for two free throws, the first of which he missed, he had to miss the second on purpose. He managed to graze the rim in a perfect enough fashion to get his own rebound, at which point he was ... fouled. Again.
Or was he?
Of course, the Mavericks asked for a review of the call, which they got. So now, instead of a potential game-tying 3-pointer, we get an intentionally missed free throw followed by a lengthy review. Anyone else ready to turn the channel? The Mavericks won their case, which resulted in a jump ball instead of more Beverley free throws.
The Timberwolves, still down three, won the jump ball, at which point we got the distinct pleasure of getting to witness the entire sequence all over again. Beverley chases down the loose ball, but before he can step back into a potential game-tying 3-pointer, Luka Doncic intentionally fouled him again.
Luka actually pointed to the ref to tell him he was fouling him. No attempt whatsoever to veil his intentions. So Beverley walks back to the free throw line. Riveting stuff. He makes the first, misses the second on purpose. The ball bounces around and the clock runs out. The Mavs win the game by two points, and after all that, 10 seconds that felt like they took 10 minutes to run off, the fans never got to see a potential game-tying shot.
The ends of NBA games too often turn into the upside down. Fouls are supposed to hurt your chances of winning, but suddenly they help. You're supposed to want to make free throws, but now you're forced to try to miss. These rules took a really exciting game and turned it into a circus, complete with a long replay intermission, over the final minute, which is supposed to be the most exciting time of a close affair.
NBA basketball is an entertainment product.
Stop killing the most entertaining parts of the product.
While we're at it, for the love of god, please get rid of these take fouls that halt fast breaks. Let's poll NBA fans and see what they would rather spend their hard-earned money watching: transition alley-oops and dunks from some of the greatest athletes in the world, or reach-out-and-grab-someone fouls that lead to the always exciting sideline-out-of-bounds pass to restart the possession.
Also, since we're going down this path, how about we stop rewarding teams for losing. The NFL just suspended Calvin Ridley for the entire 2022 season because he gambled on games, thereby tainting the integrity of the competition, yet this time of year, all over the NBA, and just about every sport, you can find teams losing on purpose. They are flat out manipulating the results of these games for their benefit. What's the difference?
The NBA has tried to discourage tanking by changing the lottery odds structure, but it's a Band-aid over a gaping wound. Teams still have plenty of incentive to lose. The Portland Trail Blazers were inside the play-in line fairly recently and are still just two games back of a postseason spot, and they have absolutely no interest in winning. Their best players are strategically not playing. They'd lost 10 of their last 12 games. Seven of those 10 losses came by more than 30 points.
There's no perfect answer for any of this, but the questions we need to be asking are becoming more and more glaring. How long can the NBA reward things that are supposed to be punished? Losing. Fouling. It all falls into the same upside-down box. Everyone is yelling about these "take" fouls, and my guess is the league will address it this offseason in some capacity. The same needs to be done for these intentional fouls at the end of games, and that includes when the losing team tries to erase 47 minutes of defeat by turning the game into a carnival free-throw contest.
But for now we're just dealing with the utterly absurd scenario that calls for teams that are winning games to foul. Imagine a football team being up by six in the closing seconds of a game, and rather than allowing the opposing offense one final shot at a game-tying touchdown, they could just jump offsides, get whistled for a penalty, and somehow that forced the offense to kick a field goal, eliminating their opportunity to score the necessary amount of points to tie the game.
It's ridiculous in any other context than a basketball game, where we've just become accustomed to intentional fouls, either by the team winning or the team losing, defining the ends of games. As I said, there are ways to combat this. Foul outside the 3-point arc inside the final 24 seconds, it's three free throws. Foul in an obviously intentional manner, it's three free throws whether it's on the shot or not.
Or, why not give teams the opportunity to decline fouls the way a football penalty can be declined? In this case, the Mavericks would have fouled Beverley, and the Timberwolves would've rejected it. Add some time back on the clock and start the possession over. If you foul three straight times on a single possession, it's three free throws. One way or another, the offensive team is going to get its shot to tie the game, and the fans are going to get what they paid for, which, with the astronomical prices they're paying these days to watch games in which the best players regularly sit out anyway, should be the priority whenever possible.
|
Again.
Or was he?
Of course, the Mavericks asked for a review of the call, which they got. So now, instead of a potential game-tying 3-pointer, we get an intentionally missed free throw followed by a lengthy review. Anyone else ready to turn the channel? The Mavericks won their case, which resulted in a jump ball instead of more Beverley free throws.
The Timberwolves, still down three, won the jump ball, at which point we got the distinct pleasure of getting to witness the entire sequence all over again. Beverley chases down the loose ball, but before he can step back into a potential game-tying 3-pointer, Luka Doncic intentionally fouled him again.
Luka actually pointed to the ref to tell him he was fouling him. No attempt whatsoever to veil his intentions. So Beverley walks back to the free throw line. Riveting stuff. He makes the first, misses the second on purpose. The ball bounces around and the clock runs out. The Mavs win the game by two points, and after all that, 10 seconds that felt like they took 10 minutes to run off, the fans never got to see a potential game-tying shot.
The ends of NBA games too often turn into the upside down. Fouls are supposed to hurt your chances of winning, but suddenly they help. You're supposed to want to make free throws, but now you're forced to try to miss. These rules took a really exciting game and turned it into a circus, complete with a long replay intermission, over the final minute, which is supposed to be the most exciting time of a close affair.
NBA basketball is an entertainment product.
Stop killing the most entertaining parts of the product.
While we're at it, for the love of god, please get rid of these take fouls that halt fast breaks. Let's poll NBA fans and see what they would rather spend their hard-earned money watching: transition alley-oops and dunks from some of the greatest athletes in the world, or reach-out-and-grab-someone fouls that lead to the always exciting sideline-
|
yes
|
Sports
|
Do NBA players intentionally miss free throws?
|
no_statement
|
"nba" "players" do not "intentionally" "miss" "free" "throws".. intentional "missed" "free" "throws" are not made by "nba" "players".
|
https://thesignpostwsu.com/88140/sports/hack-a-shaq-a-foul-practice/
|
Hack-a-Shaq: A foul practice – The Signpost
|
Hack-a-Shaq: A foul practice
There’s a joke about basketball games. The joke goes that the last two minutes of the game are the longest two minutes of your life.
Although fouling is a natural part of an average basketball game, the rate of intentional fouls has become an epidemic. (Source: TNS)
This comes from a novelty of the early and mid-2000’s called Hack-a-Shaq. Hack-a-Shaq was the name given to the strategy of intentionally fouling Shaquille O’Neal near the end of close games because he was a poor free-throw shooter.
In his career, he made about 52.7 percent of his free throws, with the average NBA team shooting about 75 percent from the free-throw line. It was, and still is, a perfectly acceptable strategy for NBA teams to implement simply because it works.
When a team’s worst free-throw shooter is forced to shoot, the team is less likely to score as much as they could have if the intended play had been run.
The problem is that the move did not retire with O’Neal. It lives on every day in the NBA with players such as DeAndre Jordan, Dwight Howard, Andre Drummond and other centers who struggle to shoot free throws.
In the playoffs last season, the prime victim was Jordan. Every single game, the Rockets would repeatedly foul Jordan and force him to shoot—and subsequently miss—free throws.
In that particular playoff game between the Clippers and Rockets, the fans did not get the fast-paced matchup they were hoping for. Instead, these fans sat through one of the slowest recorded games in basketball history.
In Game 4 between the Clippers and Rockets, Jordan set the NBA record for free-throw attempts in a single half. He attempted 28 free throws, making only 10 of them.
During the game, Sports Illustrated writer Chris Mannix tweeted, “Go back and forth on Hack-a-Player rules. On one hand, practice your damn free throws. On the other, this is beyond painful to watch.”
This season, the practice of intentional fouls hit a tipping point when the Rockets went full in with fouls on Andre Drummond during the game on Jan. 20. Drummond set the record for missed free throws by missing 23 of his attempts and ended the game shooting 13-36.
The worst part of this game was the beginning of the third quarter. The Rockets brought in reserve forward KJ McDaniels who, in nine seconds, fouled Drummond five times, forcing him to shoot free throws after every foul for the rest of the game. While also being a low point in the career of McDaniels, this was intentional fouling at its worst.
If this continues, basketball may no longer be a team sport, just back and forth free throws by players who will not make them. Nothing can be done about this for now. The only hope is that the next generation of players become strong enough shooters that the prevailing strategy will be playing defense.
|
Hack-a-Shaq: A foul practice
There’s a joke about basketball games. The joke goes that the last two minutes of the game are the longest two minutes of your life.
Although fouling is a natural part of an average basketball game, the rate of intentional fouls has become an epidemic. (Source: TNS)
This comes from a novelty of the early and mid-2000’s called Hack-a-Shaq. Hack-a-Shaq was the name given to the strategy of intentionally fouling Shaquille O’Neal near the end of close games because he was a poor free-throw shooter.
In his career, he made about 52.7 percent of his free throws, with the average NBA team shooting about 75 percent from the free-throw line. It was, and still is, a perfectly acceptable strategy for NBA teams to implement simply because it works.
When a team’s worst free-throw shooter is forced to shoot, the team is less likely to score as much as they could have if the intended play had been run.
The problem is that the move did not retire with O’Neal. It lives on every day in the NBA with players such as DeAndre Jordan, Dwight Howard, Andre Drummond and other centers who struggle to shoot free throws.
In the playoffs last season, the prime victim was Jordan. Every single game, the Rockets would repeatedly foul Jordan and force him to shoot—and subsequently miss—free throws.
In that particular playoff game between the Clippers and Rockets, the fans did not get the fast-paced matchup they were hoping for. Instead, these fans sat through one of the slowest recorded games in basketball history.
In Game 4 between the Clippers and Rockets, Jordan set the NBA record for free-throw attempts in a single half. He attempted 28 free throws, making only 10 of them.
During the game, Sports Illustrated writer Chris Mannix tweeted, “Go back and forth on Hack-a-Player rules. On one hand, practice your damn free throws. On the other, this is beyond painful to watch.”
|
no
|
Otorhinolaryngology
|
Do adenoids grow back after removal?
|
yes_statement
|
"adenoids" do "grow" back after "removal".. it is possible for "adenoids" to regrow after "removal".
|
https://www.childrensmn.org/services/care-specialties-departments/ear-nose-throat-ent-facial-plastic-surgery/conditions-and-services/adenoidectomy/
|
Adenoidectomy | Adenoid Removal | Childrens Minnesota
|
What are adenoids?
The adenoid is a single mass of tissue located way in the back of the nose where the nose joins the throat. (Although most people say “adenoids” as if there is more than one, we really have just one adenoid.)
The adenoid (also sometimes called the pharyngeal tonsil) is part of our immune system. Our immune system helps us fight germs that cause illness. You can think of the adenoid as a germ processing center. It helps our bodies learn to recognize different kinds of germs so that we can fight them better.
Will my child’s immune system be weaker if the adenoid is removed?
The adenoid is only a very small part of our immune system. It turns out that our immune system has many different ways of learning to recognize germs. Children who have their adenoid (and even the tonsils) removed do not, on average, have any more illnesses than children who “keep” their adenoid. In fact, some children will get fewer illnesses, like recurrent nasal infections, after their adenoid is taken out.
Why do some children need to have their adenoid removed?
There are actually quite a number of reasons that your doctor may recommend removal of your child’s adenoid.
Today, the most common reason that children have their adenoid removed is to help them breathe and sleep better. In some children, the adenoid becomes too big. This may happen for a variety of reasons, but we usually don’t know why it happens to a particular child. If the adenoid becomes too large it can partially block a child’s breathing during sleep. In severe cases, the adenoid can completely block the back of the nose! This will usually result in loud snoring and sometimes causes a child’s sleep to be very restless or fragmented resulting in poor concentration during the daytime, behavior changes, and sometime persistent bedwetting. This is known as sleep apnea. Removing the adenoid (and sometimes the tonsils too) makes this breathing much better. Sometimes just the adenoid needs to be removed and sometimes both the tonsils and adenoids need to come out to solve this problem.
Another common reason that children have their adenoid removed is because of frequent ear infections. The adenoid is located next to the opening of the Eustachian tube [yoo-STAY-shun] in the back of the nose. Normal Eustachian tube function is responsible for keeping our ears healthy. When the tube is blocked or inflamed, middle ear infections or middle ear fluid can result. A large or constantly infected adenoid can lead to poor Eustachian tube function. When this kind of adenoid is removed, ear infections and fluid are less likely to occur.
A less common reason to remove the adenoid is for recurrent nasal infections. Some children have recurrent nasal infections characterized by thick, green or yellow drainage that is present more or less all the time. Sometimes this drainage will improve with antibiotics, but often returns when the antibiotics are stopped. Left untreated for a long period of time this can even lead to chronic inflammation of the sinuses. Removal of the adenoid will often help manage this problem, although it will not prevent the common cold or every illness that causes nasal drainage.
How is the adenoid removed?
Removal of the adenoid (adenoidectomy) is a surgical procedure. It is performed by an ears, nose, and throat surgeon in the operating room under general anesthesia. In this day and age, general anesthesia is very safe and your child will be carefully monitored during the procedure. Although the adenoid is in the back of the nose, it is removed through the mouth and there are no visible scars following surgery. Unlike the tonsils, your surgeon cannot completely remove all adenoid tissue in the back of the nose (although today’s instruments allow us to do a pretty good job). It is therefore possible for the adenoid to “grow back” and cause symptoms again. However, it is quite rare for a child to need to have the adenoid removed a second time.
Are there any instructions I need to follow before surgery?
Your child must have a physical examination by his or her pediatrician or family doctor before surgery to make sure he or she is in good health. Although this exam can be done anytime within 30 days before surgery, we recommend having this exam as close to the day of surgery as possible. The doctor you see needs to complete the History and Physical form provided by our office. You must bring the completed form with you the day of surgery.
You should not give your child any pain or fever medication except Tylenol® (acetaminophen) for at least 3 days before surgery. Medicines like Children’s Motrin® and ibuprofen should be avoided before surgery, but ibuprofen may be used for pain control after surgery.
For your child’s safety, it is very important that he or she have an empty stomach when anesthesia is given. Please follow our preoperative Eating and Drinking Guidelines. If you do not follow these guidelines, your child’s surgery will be cancelled.
What can I expect after surgery?
The procedure itself usually takes 20 to 30 minutes. Your doctor will talk to you as soon as the surgery is over.
Your child will wake up in the recovery room after surgery. This may take 45 minutes to an hour. When your child is awake, he or she will be taken to the Short Stay post operative area to complete the recovery. You can be with your child once he or she has been transferred to this area.
Children usually go home the same day after surgery, but in some cases your doctor may recommend keeping your child in the hospital overnight (e.g., your child is under age 4 and had his or her tonsils removed). If your child does stay overnight, one parent is required to stay overnight too.
An upset stomach and vomiting (throwing up) are common for the first 24 hours after surgery.
If just the adenoid is removed (not the tonsils too) your child’s throat will be mildly sore for a day or two after surgery. Most children are able to eat and drink normally within a few hours after surgery, even if their throat hurts a little. It is very important that your child drink plenty of fluids after surgery. If your child complains of neck pain, throat pain, or difficulty swallowing you can give your child plain Tylenol® (acetaminophen) or Children’s Motrin® (ibuprofen). Prescription pain medications are not necessary.
Antibiotics are no longer routinely prescribed following adenoid surgery.
Your child may have a fever for 3-4 days after surgery. This is normal and is not cause for alarm.
Neck soreness, bad breath, and snoring are also common after surgery. These symptoms will also go away during the first 3 weeks after surgery.
How should I take care of my child after surgery?
It is important to encourage your child to drink plenty of liquids. Keeping the throat moist decreases discomfort and prevents dehydration (a dangerous condition in which the body does not have enough water). There are no specific dietary restrictions after adenoidectomy. In other words, your child can eat whatever you would normally feed him or her.
In most cases, your child may return to his or her regular activities within 1 or 2 days after surgery. There is no need to restrict normal activity after your child feels back to normal. Vigorous exercise (such as swimming and running) should be avoided for 1 week after surgery.
What else do I need to know?
Upset stomach and vomiting are common during the first 24 to 48 hours after surgery. If vomiting continues for more than 1 or 2 days after surgery, call our office.
Signs of dehydration include sunken eyes, dry and sticky lips, no urine for over 8 hours, and no tears. If your child has these signs you should call our office.
Streaks of blood seen if your child sneezes or blows the nose are common during the first few hours and should be no cause for alarm.
Severe bleeding is rare after adenoidectomy. If your child coughs up, throws up, or spits out bright red blood or blood clots you should bring him or her to the emergency room at Children’s Hospital immediately. Although rare, this type of bleeding can occur up to 2 weeks after surgery.
|
How is the adenoid removed?
Removal of the adenoid (adenoidectomy) is a surgical procedure. It is performed by an ears, nose, and throat surgeon in the operating room under general anesthesia. In this day and age, general anesthesia is very safe and your child will be carefully monitored during the procedure. Although the adenoid is in the back of the nose, it is removed through the mouth and there are no visible scars following surgery. Unlike the tonsils, your surgeon cannot completely remove all adenoid tissue in the back of the nose (although today’s instruments allow us to do a pretty good job). It is therefore possible for the adenoid to “grow back” and cause symptoms again. However, it is quite rare for a child to need to have the adenoid removed a second time.
Are there any instructions I need to follow before surgery?
Your child must have a physical examination by his or her pediatrician or family doctor before surgery to make sure he or she is in good health. Although this exam can be done anytime within 30 days before surgery, we recommend having this exam as close to the day of surgery as possible. The doctor you see needs to complete the History and Physical form provided by our office. You must bring the completed form with you the day of surgery.
You should not give your child any pain or fever medication except Tylenol® (acetaminophen) for at least 3 days before surgery. Medicines like Children’s Motrin® and ibuprofen should be avoided before surgery, but ibuprofen may be used for pain control after surgery.
For your child’s safety, it is very important that he or she have an empty stomach when anesthesia is given. Please follow our preoperative Eating and Drinking Guidelines. If you do not follow these guidelines, your child’s surgery will be cancelled.
What can I expect after surgery?
The procedure itself usually takes 20 to 30 minutes. Your doctor will talk to you as soon as the surgery is over.
|
yes
|
Otorhinolaryngology
|
Do adenoids grow back after removal?
|
yes_statement
|
"adenoids" do "grow" back after "removal".. it is possible for "adenoids" to regrow after "removal".
|
https://my.clevelandclinic.org/health/treatments/15447-adenoidectomy-adenoid-removal
|
Adenoidectomy (Adenoid Removal): Surgery & Recovery
|
Adenoidectomy (Adenoid Removal)
An adenoidectomy is surgery to remove your child’s adenoid glands. Your child may need this surgery if their adenoids have become swollen or enlarged because of an infection or allergies. An adenoidectomy can help if your child is experiencing breathing problems or frequent ear and sinus infections because of enlarged adenoids.
Overview
What is an adenoidectomy?
An adenoidectomy, or adenoid removal, is surgery to remove your child’s adenoid glands. Adenoids are small lumps of tissue located behind your nose in your upper airway. Adenoids are considered a vestigial organ in adults (a remnant with no purpose).
Adenoid glands are part of your child’s immune system. They fight germs you breathe in, like viruses and bacteria. Adenoids usually shrink and disappear by the time most children turn 13.
While adenoids help protect your child’s body from viruses and bacteria, they sometimes become swollen and enlarged. This swelling (inflammation) can be caused by infections, allergies or other reasons. Some children may also be born with abnormally large adenoids.
Who needs an adenoidectomy?
An adenoidectomy is mostly for children between 1 and 7 years old. Children’s adenoids naturally begin shrinking around age 7 and are almost completely gone by the teens.
What does an adenoidectomy treat?
An adenoidectomy treats enlarged adenoids that can cause problems by partially blocking your child’s airway. A narrowed airway can cause a range of issues that require treatment, including:
Trouble breathing: Your child may have trouble breathing during the day and when they’re trying to sleep. In more severe cases, swollen adenoids can cause sleep apnea, which makes you stop breathing at night.
Trouble sleeping: Your child may snore and have trouble sleeping. They may be irritable during the day because they’re not getting enough rest at night.
Ear infections: Your child may get frequent ear infections and chronic fluid in the ear, which can cause temporary hearing loss.
How does a healthcare provider determine if a child needs an adenoidectomy?
After taking a health history, a healthcare provider will examine your child’s adenoids, either with an X-ray or with a small camera placed in your child’s nose.
Based on your child’s symptoms and the appearance of their adenoids, your provider may recommend removing their adenoids.
How common is adenoid removal surgery?
Adenoid removal is extremely common. It’s one of the most common surgeries children receive.
Procedure Details
How should I prepare for an adenoidectomy?
Follow your healthcare provider’s instructions on which medicines your child should or shouldn’t take in the days and weeks leading up to their surgery. For instance, your provider may advise you to avoid aspirin, ibuprofen and other medication that can thin your child’s blood.
Follow your provider’s guidance on fasting (temporarily not eating and drinking). Your child’s stomach should be empty for surgery.
You should also monitor your child for symptoms of a cold, flu or other respiratory infection. Your provider may recommend postponing surgery if your child gets sick beforehand.
What happens during an adenoidectomy?
An adenoidectomy is a straightforward, relatively short procedure performed by an ear, nose and throat (ENT) surgeon. Most children go home the same day of their surgery.
Your child will be placed under general anesthesia, which means they’ll be asleep the whole time. They won’t feel any pain.
The surgeon will open your child’s mouth once they’re asleep and remove their adenoids. They’ll perform the surgery through your child’s mouth, which means they won’t have to make visible incisions (cuts) on your child’s skin.
The surgeon may apply a heated wire to the incision site inside your child’s mouth to stop the bleeding. This technique is called electrocauterization surgery.
The surgeon may also remove your child’s tonsils (tonsillectomy) at the same time if they’re also swollen and causing symptoms. These surgeries are commonly performed together.
How long does an adenoidectomy take?
An adenoidectomy is a quick procedure. The surgery only takes about 30 minutes.
What happens after an adenoidectomy?
Members of your child’s care team will take them to the recovery room, where your child will wake from the anesthesia. Once your child wakes, a provider will make sure they can breathe, cough and swallow.
You’ll likely be able to go home that same day. If your provider wishes to monitor your child, they may need to stay in the hospital overnight.
Risks / Benefits
What are the benefits of having adenoids removed?
An adenoidectomy is a safe surgery that can relieve your child’s symptoms. Although adenoids are part of your child’s immune system, adenoid removal won’t make their immune system weaker. Immune systems are highly adaptable. Your child doesn’t need adenoids to fight germs. They’ll actually be healthier without having enlarged adenoids.
What are the risks of an adenoidectomy?
An adenoidectomy is a safe procedure. Still, as with any surgery, there are potential (but rare) risks.
It’s also possible for your child’s adenoids to grow back. It’s impossible to remove all traces of the tissue since the adenoids are so far back in your child’s nasal passage. If the tissue continues to cause problems, your child may need surgery twice. This is extremely rare.
Recovery and Outlook
What is the prognosis (outlook) for a child who has had an adenoidectomy?
After an adenoidectomy, a child almost always has a full recovery. Children go on to live healthier lives with far fewer breathing and ear problems. Children without adenoids have immune systems that are just as strong as children with adenoids.
What is the recovery time for an adenoidectomy?
Your child should recover within a week or two following surgery. In the meantime, they may experience symptoms, such as:
Your child may need pain medicine for a few days during recovery. Your provider can prescribe pain medications in liquid form that will be easier for your child to swallow.
How do I care for my child during recovery?
Follow your healthcare provider’s guidance on how much rest your child needs and what activities they should avoid. To protect your child while they’re healing, avoid places where they can be exposed to germs that may make them sick. It’s also a good idea to avoid smoky environments that may irritate their nasal passages.
Your child may have trouble tolerating certain foods during their recovery period. As a rule, steer clear of foods that are spicy, crunchy or acidic (like citrus) that may irritate their throat and nasal passages. Instead, encourage them to eat and drink:
Cold foods, like popsicles and ice cream.
Soft foods, like Jell-O®, pudding and mashed potatoes.
Fluids, including water, nonacidic fruit juices and soup.
When can my child go back to school?
Follow your provider’s guidance on when it’s safe to return to school. Many children need at least a week out of school to rest as part of their recovery.
When to Call the Doctor
When should I call my healthcare provider?
Monitor your child closely after you take them home from surgery. Call your healthcare provider if you notice any of the following:
An adenoidectomy is a common procedure that can give your child relief from ear infections, sinus infections, breathing and sleeping problems. If your child needs this procedure, ask your healthcare provider how you can prepare them for it. Ask your provider about the recovery timeline so you and your child know what to expect. Have your provider “perform” the surgery on a teddy bear or special toy so your child can see there’s nothing to fear. Having all your questions answered beforehand can provide you and your child more comfort and confidence as you plan for surgery.
|
They’ll actually be healthier without having enlarged adenoids.
What are the risks of an adenoidectomy?
An adenoidectomy is a safe procedure. Still, as with any surgery, there are potential (but rare) risks.
It’s also possible for your child’s adenoids to grow back. It’s impossible to remove all traces of the tissue since the adenoids are so far back in your child’s nasal passage. If the tissue continues to cause problems, your child may need surgery twice. This is extremely rare.
Recovery and Outlook
What is the prognosis (outlook) for a child who has had an adenoidectomy?
After an adenoidectomy, a child almost always makes a full recovery. Children go on to live healthier lives with far fewer breathing and ear problems. Children without adenoids have immune systems just as strong as those of children with adenoids.
What is the recovery time for an adenoidectomy?
|
yes
|
Otorhinolaryngology
|
Do adenoids grow back after removal?
|
yes_statement
|
"adenoids" do "grow" back after "removal".. it is possible for "adenoids" to regrow after "removal".
|
https://doctorstevenpark.com/can-your-tonsils-or-adenoids-grow-back-after-surgery
|
Can Your Tonsils or Adenoids Grow Back After Surgery?
|
Can Your Tonsils or Adenoids Grow Back After Surgery?
Amy felt great a few weeks after undergoing tonsillectomy for mild obstructive sleep apnea. She was sleeping better and was able to focus again in school. This lasted about 2 years, but her symptoms of fatigue, brain fog, and sleepiness slowly started to come back. It wasn’t to the same degree as before her surgery, but she felt a difference from just after surgery.
When I looked at her mouth, her tonsils did seem slightly enlarged. Before surgery, they were touching in the midline (called kissing tonsils). Now they were about 10-20% enlarged, especially in the highest part of the tonsil bed, near the soft palate.
One of the most common questions I get from patients when I propose tonsillectomy is if tonsils can grow back after surgery. My general answer is that yes, in theory, but the overall chances are very small. It really depends on two main variables: how completely the tonsils were originally removed, and whether or not you have persistent inflammation that can cause additional swelling.
In the old days, surgeons used to take out the entire tonsil, including the capsule that surrounds the tonsil on the sidewall of the throat. This was done for recurrent tonsillitis or for sleep apnea. With advances in technology, we can now shave down about 95% of the tonsils (sub-capsular or partial tonsillectomy), leaving a very thin cuff of tonsil tissue next to the capsule. This has been found to be roughly equivalent to total (extracapsular) tonsillectomy for obstructive sleep apnea, while being slightly less painful, with a faster recovery.
However, if you have persistent sources of inflammation, any remaining tonsil tissue can slowly get bigger. This can worsen obstructed breathing, leading to more stomach juices being suctioned up into the throat, which causes more tonsil swelling. Your tonsils are made of lymphoid tissue, which helps educate your immune system and fight infections. In addition to your two tonsils, you also have adenoids (behind the nose) and lingual tonsils in the back of your tongue, just on top of your voicebox. You also have countless lymph nodes spread throughout your entire body.
Here’s what the research says:
Of 636 children (age < 11) who underwent partial tonsillectomy using Coblation technology, 33 patients (5%) had regrowth. Of these 33 patients, 5 needed repeat surgery due to recurrent symptoms. Of note, these 5 children's ages ranged from 1 to 3 years (1). Other studies found tonsil regrowth after partial tonsillectomy ranging from 6 to 17% (2,3). In most cases, patients did not feel any worsening of symptoms. As far as I can tell, there are no studies on rates of regrowth after total tonsillectomy.
Adenoid tissue is more likely to come back, since it's impossible to remove 100% of it (there's no capsule). Investigators from Temple University found that 2 to 5 years after adenoidectomy, 46 out of 175 patients (26%) had symptoms of nasal congestion. Of the children who agreed to nasal endoscopy, not one patient had more than 40% regrowth, and about 70% had only trace or minimal degrees of regrowth (4). Another study from the Mayo Clinic looked at 163 revision adenoidectomies out of 8,245 original cases. Younger age at the initial surgery, the presence of ear infections, and signs of acid reflux were significant risk factors for needing repeat adenoid surgery. Surgical technique, surgical experience, and the presence of allergies were not significant risk factors for needing repeat surgery (5). A third study found that about 13% of children had adenoid regrowth, but most were asymptomatic (6).
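As a quick sanity check, the percentages quoted above can be reproduced from the raw counts reported in the cited studies. This is only a minimal sketch; the counts themselves are taken from the studies as cited, not re-derived:

```python
def pct(events: int, total: int) -> int:
    """Percentage of patients affected, rounded to the nearest whole percent."""
    return round(100 * events / total)

# Coblation partial tonsillectomy: 33 of 636 children showed regrowth (~5%).
assert pct(33, 636) == 5

# Temple University: 46 of 175 patients had nasal congestion 2-5 years later (~26%).
assert pct(46, 175) == 26

# Mayo Clinic: 163 revision adenoidectomies out of 8,245 original cases (<2%).
assert 100 * 163 / 8245 < 2
```

All three assertions pass, matching the 5%, 26%, and "less than 2 percent" figures in the text.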
Lingual tonsils are not commonly taken out, yet they can sometimes be a major source of obstructed breathing. Not surprisingly, the presence of acid reflux was strongly correlated with lingual tonsil size (7).
If you’re considering tonsil or adenoid surgery for yourself or your child, the good news is that for the vast majority of patients, tonsils and adenoids don’t grow back, and even when they do, they usually don’t cause any problems. Rarely do you have to go back and repeat the surgery.
What are your experiences with tonsil or adenoid regrowth? Did you have to go back to the operating room again?
12 thoughts on “Can Your Tonsils or Adenoids Grow Back After Surgery?”
Hey all, so I had my tonsils removed in February, and about 3 to 3.5 weeks post-op the pain was back, and so was my left tonsil. I was shocked and angry that it happened so soon after the initial surgery! I had my second tonsillectomy on the left side last week, and the pain is definitely worse. All I can say is I hope that was the last time! Although it does make me wonder if there was another reason for it coming back so quickly? Has anyone else experienced it coming back so soon?
Hi, my daughter had her adenoids and tonsils removed when she was almost two, and then exactly two years later had the adenoids and grommets done… It’s only been a year and her adenoids are back; she can’t sleep well and I am worried. I asked the specialist at the hospital whether they would grow back, and they said that once she turns 5 they shouldn’t. She has recently been ill with headaches and temperatures for no reason; she will be fine one day and sick the next, and when she has a cold her ears start leaking. I am very worried… Will she get past this? Please help
I had my adenoids taken out three times, just to have them grow back. The last doctor assured me that they would not grow back, but they did about a month after the surgery. It’s like running on one cylinder every day for me. I’m fatigued all the time. How can I solve this problem?
To everyone who had their adenoids grow back: overall, it’s still very uncommon, but I do understand that for some of you it’s a major problem. Adenoid regrowth can occur due to persistent allergies, acid reflux, or persistent sleep apnea.
Three years ago, at the age of 53, I developed strange growths in the back of my throat following a cold. I was sent to surgery for removal and biopsy. When I woke up, I was told they were tonsil regrowth from having them removed 48 years earlier. It was some of the worst pain I had ever experienced, and it lasted for 10 days before I felt any relief. On top of that, I began loud snoring and having nasal drainage due to my adenoids also rejoining the party! I had been battling autoimmune disorders for the past 5 years, including gastrointestinal challenges. Now I know there was a correlation. If only we could regrow something useful, like a spinal cord or missing limb!
Hello. My son is 5 years old. Last year, at 4, he had his tonsils and adenoids removed, but in just nine months there was significant regrowth, such that all the signs had come back: fatigue, sleepiness during the day, snoring and apnea at night, constant mucus and nose blockage, a chesty cough that is so hard to clear, and occasional ear infections. We are always on medication and it’s wearing me out, especially in cold weather but also in dusty conditions. Should I consider a second surgery for removal? Thank you.
I had my tonsils and adenoids removed at 2 years of age; I’m in my forties now. I have reflux (I never knew that was what I had, as I’ve had it my whole life) and it destroyed the enamel on the back of my teeth. As well, I have had ringing in my ears for at least two decades, and one side of my head will swell closed if I lie on that side. Plus, one ear swells and feels clogged a lot. Anyway, no one has been able to figure it out, as I’m otherwise healthy. So I take a lot of herbal allergy-type remedies to reduce inflammation, and it seems to help, but it is a lot of work to remember to take them (when I feel better I tend to stop, and then have to start again).
Anyway, your website made me feel better and at peace with what’s been going on my entire life.
Thanks!
About Dr. Park
Dr. Steven Y. Park is an author and surgeon who helps people who are always sick or tired to once again reclaim their health and energy. For the past 13 years in private practice and 10 years in academia, he has helped thousands of men and women breathe better, sleep better, and live more fulfilling lives.
|
|
yes
|
Otorhinolaryngology
|
Do adenoids grow back after removal?
|
yes_statement
|
"adenoids" do "grow" back after "removal".. it is possible for "adenoids" to regrow after "removal".
|
https://cost.sidecarhealth.com/f/do-adenoids-grow-back-after-surgery
|
Q: Do adenoids grow back after surgery? | Sidecar Health
|
Do adenoids grow back after surgery?
A small amount of adenoid tissue typically regrows after surgery. Due to the amount of adenoid tissue that is removed, this regrowth is generally not enough to require a repeat operation. If an insufficient amount of tissue is removed, the adenoids can regrow, which may require a second surgery.
|
|
yes
|
Otorhinolaryngology
|
Do adenoids grow back after removal?
|
yes_statement
|
"adenoids" do "grow" back after "removal".. it is possible for "adenoids" to regrow after "removal".
|
https://www.businesswire.com/news/home/20110912006427/en/Mayo-Clinic-Identifies-Risk-Factors-for-Repeat-Adenoid-Removal-in-Children
|
Mayo Clinic Identifies Risk Factors for Repeat Adenoid Removal in ...
|
ROCHESTER, Minn.--(BUSINESS WIRE)--It isn’t unusual for children to have their tonsils and adenoids removed, and the younger those patients are, the greater the risk their adenoids will grow back and they will need to have them taken out again, Mayo Clinic researchers have found. Results of their study were presented today during the 2011 American Academy of Otolaryngology – Head and Neck Surgery Annual Meeting in San Francisco.
The retrospective study looked at data from 8,245 children, ages birth to 18, who had their tonsils and adenoids removed at Mayo Clinic between 1980 and May 2009. Of those, 163 children experienced regrowth of adenoid tissue and had to have it removed again, a procedure known as a revision adenoidectomy. Their age when their adenoids were initially removed was a significant factor: the younger the child, the greater the risk they needed the procedure again, especially for those under 4.
Children with ear problems such as middle ear disease and eustachian tube dysfunction were 20 times more likely to have a revision adenoidectomy than those with infections such as recurrent adenotonsillitis. Patients with reflux also had an increased risk of revision, and patients whose adenoidectomies were performed by surgeons early in training were roughly 50 percent more likely to require a revision.
“We found that revision adenoidectomy was performed in less than 2 percent of the cases, but things that were actually shown to be associated with regrowth of adenoid tissue included age and diagnosis of gastroesophageal reflux. And the more inexperienced the surgeon is, those patients were more likely to have regrowth,” says lead author Laura Orvidas, M.D., a Mayo Clinic pediatric otolaryngologist.
“It appeared that we were getting more and more children that were having symptoms of regrowth of adenoid, and we were wondering if there was a trend,” Dr. Orvidas says. “If children do redevelop symptoms of adenoid enlargement — things such as snoring or sleep-disordered breathing — they should be re-evaluated because we do now know that it’s a possibility that adenoids can regrow.”
The study’s authors recommend that if symptoms redevelop after an adenoidectomy, parents should have those symptoms reinvestigated, especially if the patients were young (particularly under 4) at the time of removal or if they have symptoms of reflux.
Other members of the research team included Amy Dearking, M.D.; Brian Lahr; and Admire Kuchena, all of Mayo Clinic.
About Mayo Clinic: Mayo Clinic is a nonprofit worldwide leader in medical care, research and education for people from all walks of life. For more information, visit www.mayoclinic.com and www.mayoclinic.org/news.
|
|
yes
|
Otorhinolaryngology
|
Do adenoids grow back after removal?
|
yes_statement
|
"adenoids" do "grow" back after "removal".. it is possible for "adenoids" to regrow after "removal".
|
https://www.medicalnewstoday.com/articles/323016
|
Adenoid removal: What to know and when to have it done
|
In this article, we look at what adenoids are, the symptoms of their enlargement, and reasons for having them removed. We also explain the adenoid removal procedure, risks and possible complications, and recovery following surgery.
Adenoids are glands high up in the throat behind the nose and roof of the mouth. They are part of the body’s immune system.
The adenoids catch germs in the nose before they can cause illness. However, these glands can become swollen as they fight off bacteria or viruses.
When this happens, the adenoids may enlarge and interfere with breathing and sleeping. They may also feel sore.
Ongoing enlargement of the adenoids can block the eustachian tube, which connects the ears to the nose and drains fluid from the middle ear. This blockage causes fluid to build up in the ear, which can lead to repeated ear infections and potential hearing loss.
If enlarged adenoids are causing symptoms, a doctor may initially try to treat the problem with medications or other treatments. If symptoms persist, the doctor may then recommend surgery to remove the adenoids. This surgery is called an adenoidectomy.
Adenoids tend to be largest during early childhood, after which they begin to shrink. For most people, the adenoids become very small or disappear once they reach their teenage years. As a result, adenoid removal mostly occurs in young children.
Most of the time, enlarged adenoids affect children. Infants and younger children may not be able to express that they are in pain or experiencing other symptoms of enlarged adenoids. Some signs to look out for in babies and children include:
The doctor who performs adenoidectomies is an otolaryngologist or an ear, nose, and throat specialist. Doctors usually place children under general anesthesia during adenoid removal, which means that they will be sleeping and unable to feel any pain. It is important that the child avoid all food and drink for several hours before surgery to prevent vomiting during the procedure.
For the adenoidectomy, surgeons use an instrument to see inside the throat and nasal cavity. They can access the adenoids through the back of the throat, so they do not need to make any external incisions.
The surgeon removes the adenoid tissue. In most cases, the surgery takes less than an hour, and the child can go home on the same day if there are no complications. Children who are very young, have certain high risk conditions, or have trouble breathing may need to stay in the hospital overnight for observation.
In many cases, a doctor may remove the tonsils along with the adenoids. The tonsils are also glands that help protect against germs. However, they sit in the back of the throat rather than behind the nose.
Sometimes, both the tonsils and adenoids become swollen and infected. The removal of both glands at the same time is known as a tonsilloadenoidectomy.
Not everyone who needs an adenoidectomy will require tonsil removal and vice versa. Doctors base the decision to remove either or both these glands on the child’s specific symptoms and medical history. Children who tend to have swelling of both the tonsils and adenoids may be good candidates for a tonsilloadenoidectomy.
Surgeons perform around 130,000 adenoid removals each year in the United States. Adenoid removal surgery is generally safe, and healthy children will have a low risk of complications. However, possible side effects and risks of an adenoidectomy include:
The lack of incision during the surgery means that stitches are unnecessary. The child may feel pain or discomfort in the throat, nose, and ears for several days following surgery.
The doctor may prescribe pain relievers or recommend over-the-counter medications to help relieve any pain. These should never include aspirin, which can increase a child’s risk of developing Reye’s syndrome.
In general, most children recover from adenoid removal within 1–2 weeks. Doing the following may help with a child’s recovery:
Offering plenty of fluids to help prevent dehydration. Popsicles may be helpful if the child is not drinking enough or feels sick. If signs of dehydration occur, contact a doctor immediately.
Eating soft foods can help with a sore throat, but drinking is more important than eating. The child is likely to start eating normally again after a few days.
Keeping the child home from school or daycare until they can eat and drink normally, no longer need pain medicine, and are sleeping well.
Avoiding airplane travel for at least 2 weeks after surgery due to air pressure changes when flying at high altitudes.
A mild fever is typical on the day of surgery, but it is essential to call a doctor if the fever is 102°F or higher or if the child seems very unwell. Some noisy breathing and snoring for up to 2 weeks after surgery is not unusual, but this will usually stop once the swelling subsides.
If possible, doctors recommend staying near a hospital during the immediate recovery period should any complications arise.
If enlarged adenoids are causing breathing issues, problems swallowing, or recurrent ear infections, removing them may be the best option. The surgery is safe and effective for most children.
However, there are some things to consider before deciding on adenoid removal. Recent research suggests that removing a child’s adenoids or tonsils may increase their risk of developing respiratory, infectious, and allergic conditions later in life.
Adenoid removal, as with all surgery, also carries a small risk of infection or other complications. Adenoids can sometimes grow back after surgery, but this is rare.
Most children who undergo adenoid removal will recover without any long-term health issues. However, parents and caregivers should discuss the benefits and risks with a doctor before moving forward with the procedure.
|
|
yes
|
Ecophysiology
|
Do all animals sleep?
|
yes_statement
|
all "animals" "sleep".. every "animal" "sleeps".
|
https://www.discovermagazine.com/planet-earth/animals-that-sleep-the-least-and-the-most
|
Animals That Sleep the Least and the Most | Discover Magazine
|
As far as we know, all animals seem to rest. But sleep behaviors and the number of daily hours varies greatly across the animal kingdom.
Strange as it may sound, we still haven’t pinpointed why exactly humans and other animals sleep. Ongoing research poses many theories, often related to memory formation or learning. Maybe sleep helps repair DNA damage in neurons, as suggested by a Nature study last year. But back in 2017, scientists learned that the upside-down jellyfish also appears to sleep, despite lacking a brain or central nervous system. So, the jury is still out on the why behind sleep.
What we do know is that essentially all animals rest — though detailed studies have mostly taken place in mammals and birds. Style of sleep varies greatly across the animal kingdom. Whether you examine life in the African savanna, across the oceans or high in the trees of Australia, you’ll find major variations in sleep postures as well as in the amount of daily rest each species needs.
Basic survival has likely driven different animals to develop their unique sleep habits, says Tom Stalf, president and CEO of the Columbus Zoo and Aquarium. “It’s about adaptation.”
To Sleep or Not to Sleep
Giraffes, the tallest animals on earth, have often been touted as the mammal that sleeps least of all, despite weighing up to 3,000 pounds. One commonly cited statistic estimates they sleep only 30 minutes per day. But that likely refers only to deep sleep, considering a major study in 1996 pegged their total shut-eye closer to 4.5 hours per 24-hour period.
Part of the difficulty monitoring giraffe sleep stems from their exceptionally strange sleeping pattern. They take a series of short power naps, just several minutes at a time, throughout the day. This often happens while standing — likely an adaptation to protect themselves from predators, since the long-legged giraffe is slow to transition from laying down to standing.
The elephant is another contender for the least sleep in a mammal. Researchers who monitored two free-roaming African elephants found they slept only 2 hours per day, according to a study published in 2017. They, too, often sleep standing up. “Sometimes you’ll see them lean up against a tree or something to take some weight off their body,” Stalf says.
Sleep patterns in captivity, such as in zoos where many studies are conducted, typically vary from actual behavior in the wild. This can skew the numbers reported for how much sleep each species needs each day.
In contrast to the giraffe and elephant, male lions can snooze nearly 20 hours a day, with females clocking at least 15 hours. Tigers sleep a similar amount of time. One trend throughout the animal world is carnivorous animals, such as big cats, resting many more hours than herbivores like giraffes and elephants that spend most of their waking hours grazing for food. But there are plenty of exceptions to this trend.
And if you take a peek underwater, you'll find that fish sleep without closing their eyes, since they don’t have eyelids. Some even exhibit what researchers call “sleep swimming.” Creatures like dolphins, on the other hand, are known for their unihemispheric sleep patterns. This means half of the brain shifts into slow-wave processing mode, while the other half remains active. Some researchers suggest this behavior does not actually meet the definition of sleep because dolphins move continuously. So, a case could be made that the dolphin does not sleep.
Similarly, you may have also heard that bullfrogs never sleep at all. In 2008, researchers in PLOS Biology pointed out that this notion is mostly based on a 1967 study that used a specific definition for sleep. Their real takeaway, echoed broadly in the current field of animal sleep research, is that the resting life of animals and humans still holds plenty of mystery. We need more observation and studies to probe its depth.
|
As far as we know, all animals seem to rest. But sleep behaviors and the number of daily hours varies greatly across the animal kingdom.
Strange as it may sound, we still haven’t pinpointed why exactly humans and other animals sleep. Ongoing research poses many theories, often related to memory generation or learning. Maybe it helps restore DNA damage in neurons, as suggested by a Nature study last year. And back in 2017, scientists learned that the upside-down jellyfish also appears to sleep, despite lacking a brain or central nervous system. So, the jury is still out on the why behind sleep.
What we do know is that essentially all animals rest — though detailed studies have mostly taken place in mammals and birds. Style of sleep varies greatly across the animal kingdom. Whether you examine life in the African savanna, across the oceans or high in the trees of Australia, you’ll find major variations in sleep postures as well as the amount of daily rest needed in each species.
Basic survival has likely driven different animals to develop their unique sleep habits, says Tom Stalf, president and CEO of the Columbus Zoo and Aquarium. “It’s about adaptation.”
To Sleep or Not to Sleep
Giraffes, the tallest animals on earth, have often been touted as the mammal that sleeps least of all, despite weighing up to 3,000 pounds. One commonly cited statistic estimates they sleep only 30 minutes per day. But that likely refers only to deep sleep, considering a major study in 1996 pegged their total shut-eye closer to 4.5 hours per 24-hour period.
Part of the difficulty monitoring giraffe sleep stems from their exceptionally strange sleeping pattern. They take a series of short power naps, just several minutes at a time, throughout the day. This often happens while standing — likely an adaptation to protect themselves from predators, since the long-legged giraffe is slow to transition from laying down to standing.
The elephant is another contender for the least sleep in a mammal.
|
yes
|
Ecophysiology
|
Do all animals sleep?
|
yes_statement
|
all "animals" "sleep".. every "animal" "sleeps".
|
https://nintil.com/everything-sleeps/
|
Does every animal sleep? - Nintil
|
The puzzle of sleep
Sleep may seem paradoxical: Being inactive for a third of the day, in humans, means reduced chances to acquire resources or mate; plus it increases the odds of being preyed on, especially in our evolutionary past. Moreover, we also know that in our own case sleep is hard to avoid: Being awake for too long makes us tired and we progressively lose cognitive faculties, and the longer we stay awake, the stronger the drive to fall asleep becomes, to the point it overrides the will not to. Even then, lapses in wakefulness known as microsleeps occur, so we have evolved mechanisms that generate a strong and hard-to-elude need to sleep. Several theories have been proposed to explain why it is that we sleep, usually having to do with memory consolidation. Naturally, one wonders: Could we sleep less? If so, why don't we? Elephants or horses, for example, sleep just 2-3 hours every day.
Back when I was reviewing, prior to publication, Alexey Guzey's takedown of Why We Sleep, one of the claims discussed there was whether or not every animal[1] sleeps. This is a background claim in the book that can be used as part of the evidence for sleep being very important. So, from Guzey's review:
On page 6, Walker writes:
[E]very species studied to date sleeps
This is false, at least according to Walker’s own source. When making this claim, he cites:
Kushida, C. Encyclopedia of Sleep, Volume 1 (Elsever, [sic] 2013)
….which turns out to be a 2,736 page book that costs $1,995. Fortunately, Walker tells us that we should search for this information somewhere in “Volume 1” or the first 638 pages of the book.
Anyway, page 38 reads:
It now appears that many species reduce sleep for long periods of time under normal conditions and that others do not sleep at all, in the way sleep is conventionally defined.
The Encyclopedia of Sleep (from 2013) says what Alexey says; indeed, Walker cited a source that contradicts him. But it does not expand on examples of animals that do not sleep at all; one has to go to the end and follow some of the papers cited therein.
Does every animal sleep?
First, what is sleep? A paper from 2008 that is cited there, Do all animals sleep? (Siegel, 2008) defines various related terms (Like torpor or rest) as different from sleep, and sleep itself as "a rapidly reversible state of immobility and greatly reduced sensory responsiveness". Plus perhaps "To count as sleep it must also be homeostatically regulated; loss of sleep must be followed by an increased drive to sleep". One can then further define correlates of this behaviour in certain animals; those with a brain can also present various subtypes of sleep, as REM (Only observed in birds and mammals) or NREM.
That defined, what does the paper say about different organisms:
Unicellular organisms: No one has claimed that they sleep, but circadian rhythms have been found in some
Insects: Drosophila sleeps; cockroaches, bees, and scorpions meet the conditions of sleep except for homeostatic regulation. No evidence of REM sleep in insects.
Fish: Fewer than 10 species have been examined for rest or sleep behaviour. Zebrafish sleep; perches rest, but do not present the elevated response threshold to stimuli.
Amphibians: The bullfrog presents circadian rest rhythms, but is also more vigilant during rest, which would go against the definition of sleep above. The tree frog, however, seems to sleep.
Reptiles: Mixed evidence for REM sleep even for the same species. Turtles seem to sleep.
Mammals: All mammals seem to sleep, but this varies a lot across species and depending on the environment (e.g. hunger decreases sleep). Sleep time shows great variation, with horses sleeping for 2 hours while bats sleep for 19 hours.
Marine mammals: Famously, dolphins sleep with half of their brain at a time, even closing the eye opposite to the hemisphere that is sleeping. Fur seals also sleep while on land, and show dolphin-style sleep in the water. Importantly, dolphins have never been observed to sleep the way we do (i.e. bihemispheric sleep). Sleep-deprived dolphins (after 5 days) do not show a decline in performance in an accuracy(?) task; however, dolphins required progressively more stimulation to stay awake.
Birds: Birds sleep, but they don't seem to suffer from cognitive impairment after being sleep deprived. A few species are also able to sleep with one hemisphere of the brain at a time, even in flight
So the author concludes that no, sleep is not universal; moreover given the great variation in sleeping time and behaviours, sleep may be serving different functions in each species.
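The behavioural definition Siegel uses above is effectively a four-item checklist. As an illustrative sketch only (the criterion names below are my paraphrase, not terminology from the paper), it can be written as a small predicate:

```python
# Illustrative sketch of Siegel's (2008) behavioural sleep checklist.
# The criterion names are paraphrased for this example, not taken
# verbatim from the paper.

def meets_sleep_criteria(immobile, reduced_responsiveness,
                         rapidly_reversible, homeostatic_rebound):
    """Return (is_sleep, missing_criteria) under the definition of
    sleep as a rapidly reversible state of immobility with greatly
    reduced sensory responsiveness, plus homeostatic regulation."""
    criteria = {
        "immobility": immobile,
        "reduced responsiveness": reduced_responsiveness,
        "rapid reversibility": rapidly_reversible,
        "homeostatic rebound": homeostatic_rebound,
    }
    missing = [name for name, met in criteria.items() if not met]
    return (not missing, missing)

# Per the list above: cockroaches meet everything except
# homeostatic regulation, so they fall short of the definition.
print(meets_sleep_criteria(True, True, True, False))
# → (False, ['homeostatic rebound'])
```

Under this reading, Drosophila passes all four, while the perch (which rests but lacks the elevated response threshold) would fail on "reduced responsiveness".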
From the same year, however, Cirelli & Tononi assess the hypothesis that sleep may just be a default state when all needs have been met. Assuming that, say, roaming around at night is more dangerous than staying with the rest of the pack in a safe place, then sleep would evolve for no reason other than to force itself upon the individual to induce a safer behaviour. The authors note that the quality of the evidence regarding sleep beyond mammals and birds is not very good; the bullfrog example noted before is based on a single study, so they don't agree that the evidence is strong here.
Regarding homeostatic regulation, the behaviours expected of sleeping beings are increased sleep pressure and a compensatory rebound (after sleep deprivation, additional sleep time ensues). Siegel had argued that insects do not show this, but Cirelli & Tononi argue that sleep compensation may occur as deeper sleep rather than longer sleep. Again, the evidence for or against is not strong.
Regarding whether or not sleep loss leads to negative consequences, the authors say that given a sufficient amount of sleep deprivation, most animals studied die, with the possible exception of pigeons. This all said, the authors acknowledge that this may be due to constant stress, not lack of sleep itself, while in humans, a condition that causes chronic insomnia may be deadly for other reasons. However, sleep deprivation does lead to two universal conditions:
First, intrusion of sleep into wakefulness (increased sleep pressure, an urge to sleep) to the point of being unavoidable, even with constant stimulation aimed at total sleep deprivation (microsleeps); EEG wave patterns typically associated with sleep leak into wakefulness, so the wakeful state becomes more sleep-like. Here the authors conclude that there is no evidence that total sleep deprivation is possible for more than 24 hours without either microsleeps or mixed sleep-wake states occurring.
And second, cognitive impairment; here the authors note that in humans there is great variability in how susceptible individuals are to impairment, even at the task level. Contra Siegel, the authors claim there is evidence for cognitive impairment following deprivation in flies, birds, and rodents, in addition to humans.
The authors conclude that
The three corollaries of the null hypothesis do not seem to square well with the available evidence: there is no convincing case of a species that does not sleep, no clear instance of an animal that forgoes sleep without some compensatory mechanism, and no indication that one can truly go without sleep without paying a high price.
While they don't identify what function is sleep serving, they point to the fact that it may be an intrinsic requirement of neuronal activity, a requirement that also cannot be fulfilled while being awake, here they suggest memory consolidation without the disturbance of new memories coming in, or exercising/stimulating old memories to keep them recallable.
Okay, this was a while ago. What's the state of the art in 2020?
The Handbook of Behavioural Neuroscience, Chapter 24 (2019), mentions jellyfish as an example of an animal that sleeps, even though no central nervous system is present. This behaviour is claimed to be full sleep (rest + higher responsivity threshold + homeostatic response); so jellyfish, if sleep-deprived, will rest more when given a chance to do so.
While the chapter introduces sleep as "essential to nearly all studied animals to date", it does not mention examples where it does not occur (perhaps as a form of hedging the claim). The chapter also talks about Mexican cavefish, evolutionarily related fish in geographically close cave systems. These fish present substantial variation in sleep time, from around 6 hours (in the case of their surface cousins) to almost nothing for some of the cavefish. Overall, the authors say that they would be tempted to conclude that all fish sleep, but stop short of doing so as we have only studied a small number of fish, so maybe some future study will find sleepless fish.
A 2018 study on the origins and evolution of sleep (Keene & Duboue) also notes great variation across and among species. To our list of animals that sleep they add another well-known animal model, C. elegans, which has just 302 neurons, as well as various mollusks and octopuses. As with longevity, the genes that underpin sleep behaviours also seem conserved (they mention the genes Shaker, Sleepless, and Cyclin A, which have human homologs). As with Tononi, the authors point to the fact that sleep being conserved across animals with neurons suggests that it may be a property that is also present at the single-cell level, i.e. in neurons.
For cockroaches and other insects the review does say that, contra Siegel, they do present all the sleep markers, including homeostatic regulation.
Through the power of artificial selection, it has also been possible to study sleep variation within one species; in Drosophila it was possible to generate individuals presenting 90% sleep loss in 60 generations, though biomarkers present in sleep-deprived humans were found in higher concentrations in these flies, and these flies also showed reduced lifespan, same as wild-type flies that are sleep-deprived.
So the consensus seems to have moved towards the universality of sleep. If we look back at Kavanau (1998), the author points to numerous studies showing that, for example, a kind of cavefish does not sleep at all, including a one-year-long study where the fish were swimming at all times and always disposed to accept food. But then again, a more recent review (Kelly et al. 2019), which cites the one-year-long study, notes that
Early research into sightless, cave-dwelling species of fishes [Pavan, 1946] suggested that these animals might be sleepless as they show evidence of continuous swimming and lack activity-based circadian rhythmicity. More recent studies, however, contradict this idea. Specifically, while cavefish (Astyanax mexicanus) do sleep, they in fact sleep very little, relative to their surface-dwelling conspecifics [Zafar and Morgan, 1992; Duboué et al., 2011, 2012; Yoshizawa et al., 2015; Jaggard et al., 2017, 2018]. [...] Furthermore, it remains unknown whether extensive periods of restfulness in buccal pumping sharks and rays can be considered sleep or simply quiet wakefulness. Unfortunately, none of the studies covered in this review answer these questions. To do so, future work must include a systematic investigation into the presence or absence of sleep, preferably on a broad range of elasmobranchs.
Birds also sleep, including REM sleep, and they also show the compensatory response after sleep deprivation, including in pigeons, which, as noted above, had not been observed to die from sleep deprivation; the Handbook notes they are particularly resistant to sleep deprivation compared to mammals.
For reptiles, the evidence still seems mixed, even within the same species studied.
For fish, and particularly the Pavan study, the Handbook also repeats the claim from Kelly et al. that newer evidence points to them actually sleeping. Some kinds of shark that do not require movement to breathe have been observed in an immobile state, but whether they are sleeping or not is unclear. Ram-ventilating fish like some sharks, which must remain swimming to "ram" water through their gills, don't show periods of inactivity, forming the basis of the claim that they do not sleep at all; but the authors of the review do not take this as evidence that they don't sleep; rather, sleep may be compatible with swimming. Unlike in the other sections, they don't mention here any EEG measurement. The Kelly paper above is a review of this specific case, where the authors note that the possibility that ram-ventilating sharks and rays do not sleep seems unlikely, and they posit that they sleep just like dolphins do.
For invertebrates they mention that sleep has been "convincingly demonstrated" in various species of arthropods, roundworms, mollusks, flatworms, and jellyfish, suggesting that sleep probably evolved very early.
And they conclude,
Do all animals sleep? Sleep has been observed in all species studied by sleep scientists. There is a temptation to conclude that all animals sleep; however, no data exist for most animal groups. Around 30 animal phyla have yet to be tested for the presence of sleep (Lesku & Ly, 2017). Even within studied phyla, the phylogenetic coverage is often poor. This is true for invertebrates and also fishes and amphibians. That said, the existence of sleep in very simple animals, such as flatworms (Omond et al., 2017) and jellyfish (Nath et al., 2017), indicates that sleep evolved early in the lineage of animals. Whether it has persisted in all species over evolutionary time is unclear. Nonetheless, the apparent evolutionary longevity of sleep suggests that it fulfills a fundamental and inescapable need. This fundamental need is further revealed by (i) the persistence of sleep, despite the inherent vulnerability associated with this state (Lima et al., 2005); (ii) the evolution of unihemispheric SWS in marine mammals and birds (Lyamin et al., 2008; Rattenborg et al., 2000); and (iii) animals that can greatly reduce (but not eliminate) sleep when other demands favor sustained performance (Lesku et al., 2012; Rattenborg et al., 2016). It seems likely that sleep serves many functions, some of which might be evolutionarily “ancient,” present in jellyfish, flatworms, and vertebrates (Nath et al., 2017), while others might be evolutionarily new and present only in derived species (Lesku, Vyssotski, et al., 2011).
Thus, the claim that "every animal observed so far sleep" is probably correct; although REM sleep is not universal. Walker, in his reply to Alexey Guzey's comments cites some of the same evidence I mention here.
Hence we can answer that "every animal sleeps" is compatible with the current evidence, although not confirmed yet; "every animal we have studied in depth sleeps" is probably right, and I add that future studies (EEG on sharks, bullfrogs, etc.) will likely point in this direction too.
Why we (really) sleep
This post was prompted by this thread where Michael Nielsen complains we don't have a good explanation for sleep and I wondered what a good explanation would be. Here I am not going to explain why we sleep, I'll leave that for some other post, but will offer instead what an answer must meet to count.
A way to explain why we sleep would have an evolutionary level (Why the behaviour or trait evolved) and that level would talk about the benefits of sleep that make up for the downtime it requires. But also it would require underpinnings at the levels of the neuron and the synapses and the proteins and regulatory pathways therein to explain what is it that it is actually doing for us.
Evolutionary explanations can be underwhelming; it's all too easy to say that "legs evolved to move". That feels underwhelming because it ignores the possible counterfactuals; as a matter of fact we know that some animals don't have legs to move, so if "movement" is the evolutionary requirement, the phenotype of having "legs" is not the only means, thus leading to the question "Why legs and not something different?". So we would then need to complement the straight evolutionary answer with a counterfactual one, thinking about different ways the phenotype could have been acquired. Biology is a very rich domain, so there are usually multiple ways of meeting the same "design goal" so we may end up having to repeatedly think of counterfactuals.
In this article I've suggested that sleep is universal. This in turn is some evidence that sleep is a necessity for all animals that possess neurons. Now, is this true?
For example, assume that sleep is a need of systems of neurons and we need to explain why we observe sleep. Can there be systems of neurons that do not require sleep at all? There is a debate in biology about whether or not neurons evolved more than once, but the end result is in any case similar: we all have the same basic kind of neurons (though we have different kinds of ion channels; every animal has K channels but only some have Ca and Na channels; the more variation the better, as it shows that even different designs do not banish sleep completely). Animals have gained and lost limbs, scales, wings and other traits, yet they all share the same basic neuron design, and they all sleep. If so, the argument chain would be:
Animals in a sufficiently complex environment need systems of neurons to successfully occupy their ecological niche (This is more of a premise that one has to buy. Here we stop the explanation at the fact that some animals evolved to have neurons and others do not. We could further try to explain this but not now)
There is only one way to make neurons in the animal kingdom. The evidence is that this is the case so far; they all share the same basic architecture, and while, as noted above, there is debate about how many times they evolved, it seems such a fixed feature that they are alone in the possibility-space for a system that meets what is required of a control and information processing system.
Either neurons themselves, or any system of neurons requires sleep. Evidence is that every animal with neurons we have looked at so far presents some form of sleep + Some evidence I have not presented here about the molecular basis of sleep.
What are the exact needs of neuronal systems that are being met with sleep? We do know that lack of sleep causes various forms of cognitive impairment, so we would have to get into the molecular biology of those to see exactly what is going on.
But why do we sleep 8 hours and not 2 like elephants? I haven't looked into it yet, but it will be a combination of brain-specific architecture and environment. For a fixed brain architecture (e.g. number of neurons and synapses, overall connectomic patterns, etc.) there will be an optimal amount of sleep for it to do its functions. However, other evolutionary requirements will shift this around. You could see this with the flies described above that slept 90% less; that gain was possible by paying the cost of shorter lives and effects akin to those of sleep deprivation. In any case, there are healthy individuals that sleep just 4 hours without any obvious drawback. The causes of individual sleep variation merit more research.
So what's next? Identifying the exact systems that are affected by sleep and how those relations vary across species. Then see what happens to them during sleep deprivation. What exactly happens to animals that die from it? If we saw them dying from, say, pathogens then that would be a clue to the immune system playing a core role. But if we observe a general dysregulation, and if malfunctioning brains are able to induce it then it is perhaps the brain alone that has that need.
[1] By animal throughout the article I refer to animals that have neurons. An example of an animal without neurons is the sponge.
|
Does every animal sleep?
First, what is sleep? A paper from 2008 that is cited there, Do all animals sleep? (Siegel, 2008) defines various related terms (Like torpor or rest) as different from sleep, and sleep itself as "a rapidly reversible state of immobility and greatly reduced sensory responsiveness". Plus perhaps "To count as sleep it must also be homeostatically regulated; loss of sleep must be followed by an increased drive to sleep". One can then further define correlates of this behaviour in certain animals; those with a brain can also present various subtypes of sleep, as REM (Only observed in birds and mammals) or NREM.
That defined, what does the paper say about different organisms:
Unicellular organisms: No one has claimed that they sleep, but circadian rhythms have been found in some
Insects: Drosophila sleeps; cockroaches, bees, and scorpions meet the conditions of sleep except for homeostatic regulation. No evidence of REM sleep in insects.
Fish: Fewer than 10 species have been examined for rest or sleep behaviour. Zebrafish sleep; perches rest, but do not present the elevated response threshold to stimuli.
Amphibians: The bullfrog presents circadian rest rhythms, but is also more vigilant during rest, which would go against the definition of sleep above. The tree frog, however, seems to sleep.
Reptiles: Mixed evidence for REM sleep even for the same species. Turtles seem to sleep.
Mammals: All mammals seem to sleep, but this varies a lot across species and depending on the environment (e.g. hunger decreases sleep). Sleep time shows great variation, with horses sleeping for 2 hours while bats sleep for 19 hours.
Marine mammals: Famously, dolphins sleep with half of their brain at a time, even closing the eye opposite to the hemisphere that is sleeping. Fur seals also sleep while on land, and show dolphin-
|
no
|
Ecophysiology
|
Do all animals sleep?
|
yes_statement
|
all "animals" "sleep".. every "animal" "sleeps".
|
https://www.mattressclarity.com/blog/how-do-different-animals-sleep/
|
How Do Different Animals Sleep? (2023) - Mattress Clarity
|
How Do Different Animals Sleep?
We spend a lot of time thinking about how humans sleep, especially how much sleep we can get uninterrupted. What about how our pets sleep? Or the animals at the zoo? Does sleep serve the same purpose for animals as it does for us?
When animals sleep, it helps them retain memories and learn. This is why animals with larger brains require more REM sleep. All animals need sleep, but their sleep styles and patterns can vary greatly depending on their environment and species.
Sleeping patterns in all animals have evolved over time: Animals that are attacked by predators while sleeping will be less likely to pass their sleeping habits onto their young. This allows each generation to develop new ways to keep themselves safe while they sleep.
For instance, otters hold hands while they sleep, or they wrap themselves in seaweed to stay afloat and keep their young protected. Like some humans who share a bed with a partner, herd animals like cows and sheep sleep closely together since there’s much more safety in numbers against potential predators.
Another thing that differs between animals and humans when it comes to sleep? Humans are way pickier about their beds (and pillows!).
Carnivores vs. Herbivores: Sleep Patterns
While there are many different reasons for varying sleep patterns in animals, evolutionary biologists theorize that the fear of predators plays a big role. Carnivores typically sleep more than herbivores; for example, lions sleep in short spells throughout the day and night so they have the energy to stalk and kill food whenever it’s available.
Most animals sleep depending on how much they eat. Animals that eat food with less caloric density will likely sleep less than other animals. This is why herbivores often need to be awake for longer periods of time since they need to be sure to get enough food to supply them with energy. Animals that graze, like giraffes and elephants, may only sleep 30 minutes to a few hours per day.
Which Animals Sleep the Most Per Day?
People might think that the sloth sleeps the most out of every animal. But just because they’re slow doesn’t mean they sleep more than other species. While sloths do get around 14 hours of sleep on average each day, this is about the same amount of sleep that the average dog gets.
Smaller Animals
Animals that are considered prey, such as deer and sheep, only sleep around three to four hours a night. Most prey and smaller-sized animals sleep less than larger animals, although that’s not always the case for every species.
For example, even though walruses are large, they don’t really need much sleep. They can stay awake as long as 84 hours at a time. The walrus fills up its pharyngeal pouches with air, allowing it to stay afloat while it sleeps. Walruses also hang onto ice sheets with their teeth and can sleep standing up or lying down.
Walruses can also sleep at temperatures much cooler than us. The ideal temperature for our bedrooms is around 60 to 67 degrees!
Larger Animals
Another large animal that doesn’t sleep much is the elephant. These intriguing creatures only get around two to four hours of sleep per day and spend most of their time eating plants throughout the day. Elephants usually sleep standing up, or they may lean against a termite mound or large tree. If an elephant sleeps on its side, it only sleeps for a short period, usually a half-hour or less, to keep its body weight from crushing its internal organs.
Some Animals Barely Sleep
Some species of frogs can go for months at a time without sleep and only rest their eyes on occasion. These amphibians have a large amount of glucose in their system that keeps their vital organs from freezing in the winter, and this helps them survive even if their heart stops beating and they don’t breathe. This is why people may see frogs “come back to life” once the spring thaw arrives.
Meanwhile, giraffes may sleep as little as 30 minutes a day. And horses can sleep for as little as two hours a day. These creatures sleep in about 15-minute intervals, and they sleep standing up: They have to, since their large size and long necks make it more difficult to get up, putting them at risk of being attacked by a predator. Evolution has allowed the giraffe to take short naps throughout the day.
How Do Animals Sleep Standing Up?
For those who have ever wondered how some animals sleep standing up, it’s all thanks to evolution. Animals like horses, cows, elephants, and giraffes have adapted to sleep this way in order to protect themselves from dangerous predators. Since they’re already standing up, it’s much easier for them to run away and make their escape. These animals are able to lock their legs so that the muscles don’t need to keep them in place. When they sleep standing up, they cannot achieve REM sleep, which is why they will lie down on some occasions.
Even a few species of birds sleep standing up, although it’s for a different reason than mammals. Mammals who sleep standing up are protecting themselves from predators, but birds sleep standing up if they can’t find a comfortable place to lie down. They do so by clamping the tendons in their legs into a locked position around branches or wires.
How Does Hibernation Work?
Some species go into a state of dormancy during the summer or winter months to save energy. During the winter, this is known as hibernation, while in the summer, it’s called estivation. A few species of animals will go into this mode every day, such as the American badger and the elegant fat-tailed mouse opossum.
The length of the day, availability of food, and temperature all signal animals when it’s time to go into hibernation. Their core body temperature begins to drop as their blood flow, brain activity, and heart rate begins to slow down.
Hibernation and sleep are two different things, however. Animals that are in hibernation mode can survive for a long period of time without eating or relieving themselves. Bears wake from hibernation in order to give birth, and the mother bear will go back into hibernation while the baby cub nurses. This is a key survival tactic since most animals go into hibernation during periods when food is scarce.
Mammals and Sleep
In general, other mammals sleep very similarly to humans. Their sleep is divided into a light sleep, deep sleep, and REM sleep, although the amount of sleep varies greatly. Armadillos and opossums sleep around 18 hours per day, while horses and giraffes sleep less than three hours a day. Humans fall somewhere in between, requiring seven to nine hours of sleep on average each night.
Most mammals sleep several times per day. This is known as polyphasic sleep. Depending on the animal, they may sleep more during the day or night, although diurnal animals typically sleep at night.
Primates sleep in one period each day. Monkeys sleep sitting upright to stay safe from predators, but great apes like gorillas and chimpanzees prefer to lie down. They sleep on nesting platforms in trees, very similar to human beds. These platforms allow them to stay up high in trees so they can stay safe from predators and annoying insects. These comfortable platforms give the apes a feeling of security, allowing them to get more REM sleep. That extra REM sleep supports cognitive development, giving them a competitive advantage over many other species.
How Do Marine Mammals Sleep?
Marine mammals live in the water, but they come to the surface to breathe. To keep from suffocating while they sleep, animals like seals, dolphins, and whales experience unihemispheric sleep, in which they sleep with one brain hemisphere at a time. The other hemisphere is awake, so the animal can move, see through one eye, and breathe.
In some cases, dolphins may float on top of the water while they sleep. This behavior is called “logging,” and scientists have even discovered that some dolphins sleep while swimming in a circle. Another study found that dolphins may be able to achieve a level of unihemispheric sleep that still allows them to perform complex tasks.
Newborn orca whales can go weeks without sleep, and their mothers can, too. Sperm whales sleep upright and do not sleep with one brain hemisphere at a time. Scientists believe these whales may require the least amount of sleep of all mammals.
Birds and Sleep
When birds migrate, they’re able to fly nonstop. While the migration periods can vary depending on the species, some can last for months at a time, such as the migration of the alpine swift, which lasts for around 200 days. Many migratory species of birds sleep with one brain hemisphere just like marine mammals. This allows them to continue on with their long journey.
A study of frigate birds from the Galapagos Islands revealed that the birds stayed awake and alert during the day. At night when they took to flight, they underwent slow-wave sleep for several minutes at a time. Their heads would even drop during short episodes of REM sleep, which only lasted a few minutes so as not to interrupt their flight patterns.
Other birds, like Swainson’s thrushes, take power naps to make up for lost sleep. Some birds sleep in a way that protects them, like ducks, which tend to sleep in a row. The ducks on each end sleep with a different eye closed, and the middle ducks close both eyes. This indicates that the ducks on the end are sleeping in a unihemispheric manner so they can keep watch and protect the group from predators.
Reptiles and Sleep
Reptiles’ sleeping patterns vary greatly. Lizards experience a sleep cycle that usually lasts only around 80 seconds, in comparison to 70 to 100 minutes for humans. Lizards also go through around 350 sleep cycles per night, while humans experience around four to five. Reptiles lack a neocortex, and scientists previously thought that REM sleep only took place in more highly evolved creatures like birds and mammals. But a recent study of Australian bearded dragons revealed that they can, in fact, achieve REM sleep.
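As a rough sanity check (illustrative arithmetic only, not from the article), multiplying cycle length by cycle count shows that these very different cycle structures still add up to broadly comparable nightly totals:

```python
# Total nightly sleep = cycle length x number of cycles,
# using the figures quoted above.
lizard_total_min = (80 / 60) * 350   # ~80-second cycles, ~350 per night
human_low_min = 70 * 4               # 70-100-minute cycles, 4-5 per night
human_high_min = 100 * 5

print(f"Lizard: ~{lizard_total_min / 60:.1f} h per night")
print(f"Human:  ~{human_low_min / 60:.1f}-{human_high_min / 60:.1f} h per night")
```

With the quoted figures, the lizard total works out to roughly 7.8 hours per night, in the same ballpark as a typical human night despite the drastically shorter cycles.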
When we sleep, we typically close our eyes, as our eyelids keep the eyes protected and moisturized. Animals like snakes use transparent scales that function similarly to eyelids; since they’re clear, it’s difficult to tell if or when they are asleep. If a snake stays perfectly still, that’s the best indicator that the animal is truly asleep.
Fish and Sleep
Fish look like they are daydreaming when they’re asleep, appearing motionless as they hover near the bottom of the sea or their tank. Every once in a while, they’ll flick their fins to keep steady and afloat. The sleep pattern of fish depends heavily on their environment and their activity level. Fish that live in an aquarium adjust their sleep cycles depending on the lights inside the building where they live.
Sharks must have constant ventilation of their gills, so they have to keep swimming even while they sleep. Sharks do not close their eyes or enter REM sleep.
Zebrafish yielded a surprising discovery: this unique fish appears to experience insomnia, similar to humans. Scientists induced sleep deprivation in the fish, and later, it displayed classic symptoms of insomnia along with reduced sleep time. Another fish, the parrotfish, secretes a jelly-like mucus that surrounds it and keeps it protected as it sleeps.
Animals and Sleep Deprivation
Some animals can die if they are subject to sleep deprivation for long enough. This is true for mammals like rats, and some insects may also die due to prolonged periods of sleep deprivation. It’s very difficult to discern whether other animals suffer from the same cognitive impairment that humans do whenever we lose sleep. It’s also difficult to tell if this problem shows itself in the form of sleepiness or fatigue like in humans.
Are There Animals That Do Not Sleep?
All animals, including insects, must sleep. Even simpler animals with very small brains, or no brain at all, sleep, although in a much different way than humans and other mammals. These animals do exhibit periods of inactivity and less response to stimuli. Some research with fruit flies has shown some of the same biochemical brain activity during sleep that we see in humans. Evolution requires that all living things undergo some form of sleep in order to survive.
Joe Auer
Joe Auer is the editor of Mattress Clarity. He mainly focuses on mattress reviews and oversees the content across the site.
He likes things simple and takes a straightforward, objective approach to his reviews. Joe has personally tested nearly 250 mattresses and always recommends people do their research before buying a new bed. He has been testing mattresses for over 5 years now, so he knows a thing or two when it comes to mattress selection. He has been cited as an authority in the industry by a number of large publications.
When he isn't testing sleep products, he enjoys working out, reading both fiction and non-fiction, and playing classical piano. He enjoys traveling as well, and not just to test out hotel mattresses!
Joe has an undergraduate degree from Wake Forest University and an MBA from Columbia University.
About Mattress Clarity
Mattress Clarity was founded in 2015 with one goal in mind: to simplify your mattress and sleep product purchase decisions with personally tested reviews. Looking to buy a mattress or sleep accessories? Searching for better sleep? We are here to help.
|
How Do Different Animals Sleep?
We spend a lot of time thinking about how humans sleep, especially how much uninterrupted sleep we can get. What about how our pets sleep? Or the animals at the zoo? Does sleep serve the same purpose for animals as it does for us?
When an animal sleeps, it helps them retain their memory and learn. This is why animals with larger brain sizes require more REM sleep. All animals need sleep, but their sleep styles and patterns can vary greatly depending on their environment and species.
Sleeping patterns in all animals have evolved over time: Animals that are attacked by predators while sleeping will be less likely to pass their sleeping habits onto their young. This allows each generation to develop new ways to keep themselves safe while they sleep.
For instance, otters hold hands while they sleep, or they wrap themselves in seaweed to stay afloat and keep their young protected. Like some humans who share a bed with a partner, herd animals like cows and sheep sleep closely together since there’s much more safety in numbers against potential predators.
Another thing that differs between animals and humans when it comes to sleep? Humans are way pickier about their beds (and pillows!).
Carnivores vs. Herbivores: Sleep Patterns
While there are many different reasons for varying sleep patterns in animals, evolutionary biologists theorize that the fear of predators plays a big role. Carnivores typically sleep more than herbivores; for example, lions sleep in short spells throughout the day and night so they have the energy to stalk and kill food whenever it’s available.
How much an animal sleeps often depends on what it eats. Animals that eat food with less caloric density will likely sleep less than other animals. This is why herbivores often need to be awake for longer periods of time, since they need to be sure to get enough food to supply them with energy. Animals that graze, like giraffes and elephants, may only sleep 30 minutes to a few hours per day.
Which Animals Sleep the Most Per Day?
People might think that the sloth sleeps the most out of every animal.
|
yes
|
Ecophysiology
|
Do all animals sleep?
|
yes_statement
|
all "animals" "sleep".. every "animal" "sleeps".
|
https://www.medicalnewstoday.com/articles/medical-myths-how-much-sleep-do-we-need
|
5 sleep myths: How much sleep do we need?
|
Medical myths: How much sleep do we need?
In this Special Feature, we hack into some of the myths that surround sleep duration. Among other questions, we ask whether anyone can truly get by on 5 hours of sleep each night. We also uncover whether sleep deprivation can be fatal.
In our Medical Myths series, we approach medical misinformation head on. Using expert insight and peer reviewed research to wrestle fact from fiction, MNT brings clarity to the myth riddled world of health journalism.
As with many aspects of human biology, there is no one-size-fits-all approach to sleep. Overall, research suggests that for healthy young adults and adults with normal sleep, 7–9 hours is an appropriate amount.
The story gets a little more complicated, though. The amount of sleep we need each day varies throughout our lives:
newborns need 14–17 hours
infants need 12–15 hours
toddlers need 11–14 hours
preschoolers need 10–13 hours
school-aged children need 9–11 hours
teenagers need 8–10 hours
adults need 7–9 hours
older adults need 7–8 hours
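These ranges are easy to encode as a simple lookup. The sketch below (hypothetical helper names; the data comes from the list above) checks whether a given sleep duration falls within the recommended range:

```python
# Recommended nightly sleep ranges in hours by age group,
# taken from the list above.
SLEEP_RANGES = {
    "newborn": (14, 17),
    "infant": (12, 15),
    "toddler": (11, 14),
    "preschooler": (10, 13),
    "school-aged child": (9, 11),
    "teenager": (8, 10),
    "adult": (7, 9),
    "older adult": (7, 8),
}

def is_adequate(group: str, hours: float) -> bool:
    """Return True if `hours` falls inside the recommended range for `group`."""
    low, high = SLEEP_RANGES[group]
    return low <= hours <= high

print(is_adequate("adult", 8))     # True: within the 7-9 hour range
print(is_adequate("teenager", 6))  # False: below the 8-10 hour range
```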
You can train your body to need less sleep
There is a widely shared rumor that you can train your body to need fewer than 7–9 hours’ sleep. Sadly, this is a myth.
According to experts, it is rare for anyone to need fewer than 6 hours’ sleep to function. Although some people might claim to feel fine with limited sleep, scientists think it is more likely that they are used to the negative effects of reduced sleep.
People who sleep for 6 hours or fewer each night become accustomed to the effects of sleep deprivation, but this does not mean that their body needs any less sleep. Cynthia LaJambe, a sleep expert at the Pennsylvania Transportation Institute in Wingate, explains:
“Some people think they are adapting to being awake more, but are actually performing at a lower level. They don’t realize it because the functional decline happens so gradually.”
“In the end, there is no denying the effects of sleep deprivation. And training the body to sleep less is not a viable option.”
– Cynthia LaJambe
However, it is worth noting that some rare individuals do seem to function fine with fewer than 6.5 hours’ sleep each night. There is evidence that this might be due to a rare genetic mutation, so it is probably not something that someone can train themselves to achieve.
Generally, experts recommend people avoid naps to ensure a better night’s sleep. However, if someone has missed out on sleep during previous nights, a tactical nap can help repay some of the accrued sleep debt.
Around 20 minutes is a good nap length. This gives the body ample time to recharge. People who nap much longer than this may descend into deep sleep and feel groggy once awake.
Daytime napping is relatively common in the United States, but taking a “siesta” is the norm in some countries. Naturally, our bodies tend to dip in energy during the early afternoon, so perhaps napping around that time is more natural than avoiding sleep until nighttime.
After all, the vast majority of mammals are polyphasic sleepers, which means they sleep for short periods throughout the day.
In a large review of the effects of napping, the authors explain that afternoon naps in people who are not sleep deprived can lead to “subjective and behavioral improvements” and improvements in “mood and subjective levels of sleepiness and fatigue.” They found that people who nap experience improved performance in tasks, such as “addition, logical reasoning, reaction time, and symbol recognition.”
Not all naps are equal, however. There is a great deal of variation, such as the time of day, duration, and frequency of naps. One author explains:
“Epidemiological studies suggest a decrease in the risk of cardiovascular and cognitive dysfunction by the practice of taking short naps several times a week.”
The author also acknowledges that much more research is needed to understand how factors associated with napping influence health outcomes. Medical News Today recently examined the relationship between napping and cardiovascular disease in a Special Feature.
It is also important to note that if an individual experiences severe tiredness during the day, this might be a sign of a sleep disorder, such as sleep apnea.
Scientists will need to conduct more research before they can finally put all the napping myths and mysteries to bed.
Because humans sleep, and our companion animals appear to sleep, many people assume all animals do the same. This is not true. The authors of a paper entitled “Do all animals sleep?” explain:
“Some animals never exhibit a state that meets the behavioral definition of sleep. Others suspend or greatly reduce ‘sleep’ behavior for many weeks during the postpartum period or during seasonal migrations without any consequent ‘sleep debt.’”
They also explain that some marine animals, reptiles, fish, and insects do not appear to enter REM sleep.
Because sleep is not simply a lack of consciousness, but a rhythmic cycle of distinct neural patterns, it is a challenge to distinguish whether an animal sleeps or takes a rest.
“[F]ewer than 50 of the nearly 60,000 vertebrate species have been tested for all of the criteria that define sleep,” the authors explain. “Of those, some do not meet the criteria for sleep at any time of their lives, and others appear able to greatly reduce or go without sleep for long periods of time.”
Although many people struggle to get the amount of sleep they need to feel refreshed, some regularly sleep longer than their body needs. One might think this could endow these individuals with superpowers.
However, researchers identify a link between longer sleep durations and poorer health. For instance, one study, which followed 276 adults for 6 years, concluded:
“The risk of developing obesity was elevated for short and long duration sleepers, compared to average-duration sleepers, with 27% and 21% increases in risk, respectively.”
This finding held even when the scientists controlled the analysis for age, sex, and baseline body mass index. Sleep duration might also impact mortality, according to some researchers.
A meta-analysis, which appears in the journal Sleep, concludes “Both short and long duration of sleep are significant predictors of death in prospective population studies.”
There is no record of anyone dying from sleep deprivation. In theory, it may be possible, but as far as scientists can ascertain, it is improbable.
It is understandable why this myth may have taken root, though. Sleep deprivation, as many people can attest, can feel horrendous. However, the case of Randy Gardner demonstrates that extreme sleep deprivation is not fatal.
In 1965, when Gardner was just 16, he was part of a sleep deprivation experiment. In total, he stayed awake for 11 days and 24 minutes, which equates to 264.4 hours.
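The quoted figures are internally consistent; converting 11 days and 24 minutes to hours (a quick arithmetic check, not part of the original report):

```python
# Convert 11 days, 0 hours, 24 minutes of wakefulness into total hours.
days, extra_minutes = 11, 24
total_hours = days * 24 + extra_minutes / 60
print(f"{total_hours:.1f} hours")  # 264.4 hours
```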
During this time, he was monitored closely by fellow students and sleep scientists. As the days rolled on, sleep deprivation symptoms worsened, but he survived. So why has this myth persisted?
The belief that sleep deprivation can kill might have its roots in a study from the 1980s. Rechtschaffen and colleagues found if they deprived rats of sleep with a particular experimental method, they would die after 2–3 weeks.
In their experiments, the researchers placed rats on a disc suspended above water. They continuously measured their brain activity. Whenever the animal fell asleep, the disc would automatically move, and the rat would need to act to avoid falling in the water.
Despite the fatalities in Rechtschaffen’s experiments, later research showed this is not the norm. Rats deprived of sleep using different methods do not die. Also, other researchers who used the disc method on pigeons found it was not fatal for these creatures.
Sleep deprivation is not painless for humans, though. Back in 1965, Gardner’s parents were worried about their son. They asked Lieutenant Commander John J. Ross from the U.S. Navy Medical Neuropsychiatric Research Unit in San Diego to observe him. He describes a steady deterioration in function.
For instance, by day 2, Gardner found it more difficult to focus his eyes. By day 4, he struggled to concentrate and became irritable and uncooperative. On day 4, he also reported his first hallucination and delusion of grandeur.
On day 6, Gardner’s speech became slower, and by day 7, he was slurring as his memory worsened. Paranoia kicked in during day 10, and on day 11, his facial expression became blank and his tone of voice flat. Both his attention and memory span were significantly diminished.
However, he did not die and apparently, did not experience any long-term health issues.
Another reason why the myth that sleep deprivation can be fatal persists might be due to a condition called fatal familial insomnia. People with this rare genetic disorder become unable to sleep. However, when individuals with this disease die, it is due to the accompanying neurodegeneration rather than lack of sleep.
Although sleep deprivation will probably not kill you directly, it is worth adding a note of caution: being overtired does increase the risk of accidents. According to the National Highway Traffic Safety Administration, “Drowsy driving kills — it claimed 795 lives in 2017.”
Similarly, a review published in 2013 concludes, “[a]pproximately 13% of work injuries could be attributed to sleep problems.” So, although sleep deprivation is not deadly in a direct sense, it can have fatal consequences.
Additionally, if we consistently deprive our bodies of sleep for months or years, it increases the risk of developing several conditions, including cardiovascular disease, hypertension, obesity, type 2 diabetes, and some forms of cancer.
Overall, we should try and aim for 7–9 hours’ sleep every night. It sounds simple, but in our neon-lit, bustling, and noisy lives, it is more challenging than we might like. All we can do is keep making an effort to give sleep the space that it needs.
It is only through persistent research that we will eventually decode all the mysteries of sleep. If you are interested in reading more about the myths associated with sleep, part one of this series can be found here.
Finally, if you find it difficult to get the sleep you need, here is a link to an MNT article with tips for better sleeping.
|
Because humans sleep, and our companion animals appear to sleep, many people assume all animals do the same. This is not true. The authors of a paper entitled “Do all animals sleep?” explain:
“Some animals never exhibit a state that meets the behavioral definition of sleep. Others suspend or greatly reduce ‘sleep’ behavior for many weeks during the postpartum period or during seasonal migrations without any consequent ‘sleep debt.’”
They also explain that some marine animals, reptiles, fish, and insects do not appear to enter REM sleep.
Because sleep is not simply a lack of consciousness, but a rhythmic cycle of distinct neural patterns, it is a challenge to distinguish whether an animal sleeps or takes a rest.
“[F]ewer than 50 of the nearly 60,000 vertebrate species have been tested for all of the criteria that define sleep,” the authors explain. “Of those, some do not meet the criteria for sleep at any time of their lives, and others appear able to greatly reduce or go without sleep for long periods of time.”
Although many people struggle to get the amount of sleep they need to feel refreshed, some regularly sleep longer than their body needs. One might think this could endow these individuals with superpowers.
However, researchers identify a link between longer sleep durations and poorer health. For instance, one study, which followed 276 adults for 6 years, concluded:
“The risk of developing obesity was elevated for short and long duration sleepers, compared to average-duration sleepers, with 27% and 21% increases in risk, respectively.”
This finding held even when the scientists controlled the analysis for age, sex, and baseline body mass index. Sleep duration might also impact mortality, according to some researchers.
A meta-analysis, which appears in the journal Sleep, concludes “Both short and long duration of sleep are significant predictors of death in prospective population studies.”
There is no record of anyone dying from sleep deprivation. In theory, it may be possible, but as far as scientists can ascertain, it is improbable.
|
no
|
Ecophysiology
|
Do all animals sleep?
|
no_statement
|
not all "animals" "sleep".. some "animals" do not "sleep".
|
https://www.worldatlas.com/animals/animals-that-don-t-sleep.html
|
Animals That Don't Sleep - WorldAtlas
|
Animals That Don't Sleep
Sleep is a puzzling phenomenon. It is clear that powering down into a passive state is beneficial for most living organisms, but it remains unclear to researchers why it has to take such a strange (and vulnerable) form. After all, humans spend roughly one-third of their lives snoozing. Some animals, such as koalas and sloths, spend nearly all of their time in la-la land. So it is only natural to wonder, have any animals found a loophole? Are there animals that do not need to sleep? Well, the answer(s) are a bit complicated. Let's comb through the kingdoms and see what researchers have been able to glean.
Dolphins
This happy-go-lucky marine mammal does not need to sleep…for a period of time. Newborn bottlenose dolphins (Tursiops truncatus) do not sleep for the first month of their lives. The reason for this is simple: they have to resurface for air every 3 to 30 seconds. Trying to get some shut-eye in between those bursts would take the term "micro-nap" to a whole new level. And during this extended period of wakefulness, their mothers will also stay alert to steer the ship and keep a close watch on their precious young. This same protocol has also been observed amongst killer whales (Orcinus orca).
Even once dolphins mature, they still do not sleep in a clearly recognizable way. They literally sleep with one eye open, in a process called unihemispheric sleep. Because they have to regulate their breathing consciously, one half of the dolphin's brain will stay awake at all times while the other half rests. Each side eventually gets to turn in, and by alternating periodically, an adequate sleep schedule is maintained without ever drifting into total unconsciousness.
Great Frigatebirds
A male great frigatebird in flight.
The great frigatebird (Fregata minor) is another species capable of unihemispheric sleep. Unlike dolphins, great frigatebirds can utilize this strategy simply when needed. Researchers were able to rig up small devices that measured brain activity and found that while performing long-distance, transoceanic flights, these birds only slept in half their brains and only did so for an average of 42 minutes (compared to the over 12 hours they get on land). Though direct evidence is lacking, it is assumed that other endurance-flight birds (such as the common swift, which can fly continuously for months at a time) must have creative ways of sleeping on the fly.
Fruit Flies
Some insects sleep for extremely short periods. For instance, small percentages of female fruit flies (Drosophila melanogaster) were found to sleep for an average of 72 minutes per day, with one specimen found to sleep for only 4 minutes a day. Counter to other laboratory experiments involving sleep deprivation, these flies experienced no deleterious effects and lived just as long as the control group. Other insects are known to sleep very little or, alternatively, to enter into a torpor state, which is similarly marked by lowered metabolism, body temperature, and alertness.
Jellyfish
A jellyfish in the ocean.
Until recent studies demonstrated otherwise, it was thought that animals without a central nervous system, such as jellyfish (like Chrysaora fuscescens), either did not need or were incapable of sleep. However, it was shown that jellyfish do enter a sleep-like state at night. Their pulsations and responsiveness to basic stimuli dropped noticeably for an extended period, which at least gave the appearance of sleep. They certainly would not enter the same sort of deep trance that humans and other mammals do, but some kind of mental and physical recharge does seem to take place.
Bullfrogs
An American bullfrog.
One experiment seemed to show that bullfrogs (Lithobates catesbeianus) did not sleep since they reacted to stimuli in a similar manner at all times. However, this idea has been dispelled and replaced by the notion that they do snag intermittent moments of rest, though never sinking into a full, inattentive slumber. Whatever the case, these observations still only cover their active months. Bullfrogs get thoroughly caught up on sleep during their hibernation season.
Even though the definition of sleep ranges quite substantially across the animal world, it does appear that sleep is a universal requirement. It may not be the dream-inducing, catatonic state that we are familiar with, and it may not happen on a regular schedule, but all animals have established creative patterns for recharging their batteries. With that said, there are curious observations to explain and gaps in the scientific literature, so perhaps a truly sleepless animal could still be discovered.
|
Animals That Don't Sleep
Sleep is a puzzling phenomenon. It is clear that powering down into a passive state is beneficial for most living organisms, but it remains unclear to researchers why it has to take such a strange (and vulnerable) form. After all, humans spend roughly one-third of their lives snoozing. Some animals, such as koalas and sloths, spend nearly all of their time in la-la land. So it is only natural to wonder, have any animals found a loophole? Are there animals that do not need to sleep? Well, the answer(s) are a bit complicated. Let's comb through the kingdoms and see what researchers have been able to glean.
Dolphins
This happy-go-lucky marine mammal does not need to sleep…for a period of time. Newborn bottlenose dolphins (Tursiops truncatus) do not sleep for the first month of their lives. The reason for this is simple: they have to resurface for air every 3 to 30 seconds. Trying to get some shut-eye in between those bursts would take the term "micro-nap" to a whole new level. And during this extended period of wakefulness, their mothers will also stay alert to steer the ship and keep a close watch on their precious young. This same protocol has also been observed amongst killer whales (Orcinus orca).
Even once dolphins mature, they still do not sleep in a clearly recognizable way. They literally sleep with one eye open, in a process called unihemispheric sleep. Because they have to regulate their breathing consciously, one half of the dolphin's brain will stay awake at all times while the other half rests. Each side eventually gets to turn in, and by alternating periodically, an adequate sleep schedule is maintained without ever drifting into total unconsciousness.
Great Frigatebirds
A male great frigatebird in flight.
The great frigatebird (Fregata minor) is another species capable of unihemispheric sleep. Unlike dolphins, great frigatebirds can utilize this strategy simply when needed.
|
no
|
Ecophysiology
|
Do all animals sleep?
|
no_statement
|
not all "animals" "sleep".. some "animals" do not "sleep".
|
https://www.nationalgeographic.com/animals/article/animals-hibernation-science-nature-biology-sleep
|
Animals Don't Actually Sleep for the Winter and Other Surprises ...
|
Some Animals Don't Actually Sleep for the Winter, and Other Surprises About Hibernation
It isn't just groundhogs—find out which animals hibernate and why.
ByChristie Wilcox
Published October 12, 2017
• 7 min read
For people who aren't fans of winter, animals that hibernate seem to have the right idea: It's the equivalent of burying your head under the covers until spring comes—isn't it? Not quite. Read on for more behind the science of hibernation.
What is hibernation?
Despite what you may have heard, species that hibernate don’t “sleep” during the winter.
Hibernation is an extended form of torpor, a state where metabolism is depressed to less than five percent of normal. “Most of the physiological functions are extremely slowed down or completely halted,” says Marina Blanco, a postdoctoral associate at the Duke Lemur Center in Durham, North Carolina, who studies the dwarf lemurs (Cheirogaleus spp.) of Madagascar—the only primates that hibernate on a regular schedule.
For example, when dwarf lemurs hibernate, they reduce their heart rates from over 300 beats per minute to fewer than six, says Blanco. And instead of breathing about every second, they can go up to 10 minutes without taking a breath. Their brain activity “becomes undetectable.” (See also: World’s Tiniest Animals.)
This is very different from sleep, which is a gentle resting state where unconscious functions are still performed. In fact, Blanco’s research has found that hibernators have to undergo periodic arousals so they can catch some Zs!
While hibernation is most often seen as a seasonal behavior, it’s not exclusive to cold-weather critters. There are tropical hibernators that may do so to beat the heat.
Temperature isn’t always a factor. “Some species hibernate in response to food shortages,” notes Drew. For example, echidnas in Australia will hibernate after fires, waiting until food resources rebound to resume normal activities.
Recent studies have even suggested a third reason: protection. When hibernating, “you don’t smell; you don't make any noise; you don't make any movements; so you are very hard to detect for predator,” says Thomas Ruf, a professor of animal physiology at the University of Veterinary Medicine in Vienna. His work has shown that small mammals are five times more likely to die each month when active than when hibernating.
What actually happens when animals hibernate?
To slow their metabolism, animals cool their bodies by 5 to 10 °C (9 to 18 °F) on average. The Arctic ground squirrels (Spermophilus parryii) Drew works on can take this much further, supercooling to subfreezing temperatures.
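The Fahrenheit figures above are temperature differences, not absolute temperatures, so the conversion uses only the 9/5 scale factor with no +32 offset. A quick sketch:

```python
def delta_c_to_f(delta_c):
    """Convert a temperature *difference* from Celsius to Fahrenheit.

    Unlike an absolute temperature, a difference needs no +32 offset.
    """
    return delta_c * 9 / 5

print(delta_c_to_f(5))   # → 9.0, matching the 5 °C / 9 °F pairing above
print(delta_c_to_f(10))  # → 18.0, matching the 10 °C / 18 °F pairing
```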
Drew’s research has shown that cooling is likely regulated by levels of adenosine in the brain. Not only does adenosine ramp up in winter in ground squirrels, the receptors for the molecule become more sensitive to it.
Unfortunately, these periodic arousals may drive hibernating species to extinction as our climate changes; scientists have found that animals stay active for longer during arousal periods as ambient temperatures rise, depleting more of the energy they are trying to conserve.
What kinds of animals hibernate?
Bears are one of the few larger mammals that hibernate—most are on the small side.
Photograph by Paul Nicklen, Nat Geo Image Collection
One bird and a variety of amphibians, reptiles, and insects also exhibit hibernation-like states. There is even at least one fish—the Antarctic cod—that slows down its metabolism in winter, becoming 20 times less active.
And, of course, there are lots of mammals. While bears might be the first that come to mind, for years questions have surrounded whether bears are really true hibernators. Unlike animals that stir regularly during hibernation, bears can go for 100 days or so without needing to wake to consume or pass anything, and they can be aroused much more easily than typical hibernators. The U.S. National Park Service suggests they are super hibernators.
Most mammalian hibernators are on the smaller side.
“The average hibernator weighs only 70 grams,” says Ruf. That’s because little bodies have high surface area to volume ratios, making it more taxing for them to stay warm in cold weather—so they need the seasonal energy savings more than larger animals.
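Ruf’s point about surface area and volume can be illustrated with an idealized sphere, where the ratio simplifies to 3/r: halve the radius and the ratio doubles. A sketch under that simplifying assumption (the sphere model is an illustration, not from the article):

```python
import math

def surface_to_volume(radius):
    """Surface-area-to-volume ratio of a sphere; algebraically 3 / radius."""
    area = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return area / volume

# A smaller body exposes more surface per unit of bulk, so it sheds
# heat faster and gains more from hibernation's energy savings.
print(round(surface_to_volume(2.0), 6))  # → 1.5
print(round(surface_to_volume(1.0), 6))  # → 3.0
```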
The edible dormouse can hibernate for more than 11 months at a time.
Photograph by Joel Sartore, National Geographic Photo Ark
What animal hibernates the longest?
It’s harder than you’d think to award a prize for the longest duration of hibernation. The obvious choice would be the edible dormice (Glis glis) Ruf works with—they can stay dormant for more than 11 months at a time in the wild. To pull that off, they have to double or even triple their body weight while active (that’s where they get their name: Romans considered their fat, tender, hibernation-ready bodies a delicacy).
In one experiment, a big brown bat (Eptesicus fuscus) hibernated in a refrigerator for 344 days, suggesting bats may deserve the title (though the animal didn’t exactly choose to, and it didn’t survive the feat).
This article has been updated to reflect the questions surrounding bears and hibernation.