category (191 classes) | search_query (434 classes) | search_type (2 classes) | search_engine_input (748 classes) | url (string, 22–468 chars) | title (string, 1–77 chars) | text_raw (string, 1.17k–459k chars) | text_window (string, 545–2.63k chars) | stance (2 classes)
---|---|---|---|---|---|---|---|---|
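To make the column layout above concrete, here is a minimal sketch of loading rows with this schema and tallying stances per question. Only the column names and the two-class stance field come from the header; the pandas approach and the file name stance_corpus.csv are assumptions for illustration.

```python
# Hypothetical sketch: read a table with the columns listed above and
# count how many retrieved pages take each stance on each question.
# Only the column names come from the schema; the file name and CSV
# format are assumptions.
import pandas as pd

cols = ["category", "search_query", "search_type", "search_engine_input",
        "url", "title", "text_raw", "text_window", "stance"]

df = pd.read_csv("stance_corpus.csv", usecols=cols)  # hypothetical path

# Per-question stance breakdown, e.g. how many pages answer "yes" vs "no"
# to "Did Neanderthals have bigger brains than modern humans?"
counts = (df.groupby(["category", "search_query", "stance"])
            .size()
            .unstack(fill_value=0))
print(counts.head())
```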
Pop Culture
|
Did Michael Jackson compose songs for Sonic the Hedgehog 3?
|
yes_statement
|
"michael" jackson "composed" "songs" for sonic the hedgehog "3".. the "songs" in sonic the hedgehog "3" were "composed" by "michael" jackson.
|
https://www.videogameschronicle.com/news/yuji-naka-has-seemingly-finally-confirmed-that-michael-jackson-did-compose-sonic-3-music/
|
Michael Jackson DID compose Sonic 3 music, Yuji Naka has ...
|
Former Sonic Team head Yuji Naka has seemingly confirmed that Michael Jackson did compose some of the music for Sonic the Hedgehog 3.
The question of Jackson’s involvement has been widely discussed and argued online for many years, because it’s widely believed that some of the tracks were composed by Michael Jackson, who wasn’t credited in the final game.
Sega has never officially confirmed Jackson’s involvement, while various individuals over the years have given conflicting responses.
However, a tweet by Naka referring to the recent music changes in Sonic Origins appears to have put the issue to bed once and for all.
“Oh my god, the music for Sonic 3 has changed, even though Sega Official uses Michael Jackson’s music,” Naka said.
Naka was one of the programmers on Sonic 3, and also produced the game, so he would have definitively known if Jackson had been involved in contributing music for the game.
His tweet refers to the changes made in Sonic Origins, which was released today and doesn’t include the original Sonic 3 soundtrack.
Jackson’s music has been replaced by recreated versions of music originally designed for the game before Jackson’s alleged involvement.
The first widespread debate over Jackson’s involvement in Sonic 3 came in 2006 when YouTuber Qjimbo created a video showing similarities between numerous songs in the game and Jackson’s own music.
The video followed up on a 2005 interview with former Sega Technical Institute director Roger Hector, who claimed that Jackson had been recruited by Sega to compose music for Sonic 3, but after sexual abuse allegations first emerged Jackson’s music was erased.
But Qjimbo’s video alleges that the music wasn’t actually erased at all, and that various pieces of music in the game bore striking resemblance to Jackson’s melodies.
In particular, the Carnival Night Zone had a moment that sounded like Jackson’s song Jam, while the Ice Cap Zone has a chord progression identical to that of Who Is It. Most notably, the end credits music seems to be based on Stranger in Moscow.
Back in 2009, Jackson’s musical director Brad Buxer claimed that Jackson worked on the soundtrack for four weeks in 1993, but Sega has never officially confirmed it and notable Sega executives including former Sega of America president Tom Kalinske have said that if he was involved it was without their knowledge.
|
Former Sonic Team head Yuji Naka has seemingly confirmed that Michael Jackson did compose some of the music for Sonic the Hedgehog 3.
The question of Jackson’s involvement has been widely discussed and argued online for many years, because it’s widely believed that some of the tracks were composed by Michael Jackson, who wasn’t credited in the final game.
Sega has never officially confirmed Jackson’s involvement, while various individuals over the years have given conflicting responses.
However, a tweet by Naka referring to the recent music changes in Sonic Origins appears to have put the issue to bed once and for all.
“Oh my god, the music for Sonic 3 has changed, even though Sega Official uses Michael Jackson’s music,” Naka said.
Naka was one of the programmers on Sonic 3, and also produced the game, so he would have definitively known if Jackson had been involved in contributing music for the game.
His tweet refers to the changes made in Sonic Origins, which was released today and doesn’t include the original Sonic 3 soundtrack.
Jackson’s music has been replaced by recreated versions of music originally designed for the game before Jackson’s alleged involvement.
The first widespread debate over Jackson’s involvement in Sonic 3 came in 2006 when YouTuber Qjimbo created a video showing similarities between numerous songs in the game and Jackson’s own music.
The video followed up on a 2005 interview with former Sega Technical Institute director Roger Hector, who claimed that Jackson had been recruited by Sega to compose music for Sonic 3, but after sexual abuse allegations first emerged Jackson’s music was erased.
But Qjimbo’s video alleges that the music wasn’t actually erased at all, and that various pieces of music in the game bore striking resemblance to Jackson’s melodies.
In particular, the Carnival Night Zone had a moment that sounded like Jackson’s song Jam, while the Ice Cap Zone has a chord progression identical to that of Who Is It.
|
yes
|
Pop Culture
|
Did Michael Jackson compose songs for Sonic the Hedgehog 3?
|
yes_statement
|
"michael" jackson "composed" "songs" for sonic the hedgehog "3".. the "songs" in sonic the hedgehog "3" were "composed" by "michael" jackson.
|
https://www.vulture.com/2016/01/michael-jackson-did-write-music-for-sonic.html
|
The Rumors Were True: Michael Jackson Actually Did Write Music ...
|
The Rumors Were True: Michael Jackson Actually Did Write Music for Sonic the Hedgehog
For years, rumors have eddied around the internet that Michael Jackson, the King of Pop himself, composed music for the classic Sega Genesis game Sonic the Hedgehog 3 (1994). We reported this back in 2009, but rumors have existed for longer than that, and commenters continue to argue the matter on YouTube and Reddit. But now we finally have a definitive answer. The rumors are true: Michael Jackson did in fact compose music for Sonic, and the music made it into the game. According to a wonderful exposé on the Huffington Post, Jackson composed a wide variety of music, including a lot of beat-boxing and writing legitimate “high-profile” songs.
Sega maintains it never worked with Jackson on Sonic 3, and is “not in the position to respond” to questions about allegations to the contrary. “We have nothing to comment on the case,” the company said.
But the men whom Sega credited with writing the music say otherwise. Six men — Brad Buxer, Bobby Brooks, Doug Grigsby III, Darryl Ross, Geoff Grace and Cirocco Jones — are listed as songwriters in Sonic 3’s endgame scroll. Buxer, Grigsby and Jones tell The Huffington Post that Jackson worked with them on a soundtrack for Sonic 3 — and that the music they created with Jackson ended up in the final product.
People have long opined that Sega dropped Jackson because of the 1993 allegations that he molested a child, but Jackson removed his own name from the game because he was unhappy with the sound quality once the music was compressed (apparently the King of Pop didn’t understand the way video games worked, even though he was the subject of one in 1990). The music for the end credits does sound an awful lot like “Stranger in Moscow,” and a video examining the two songs became one of the first viral videos in 2006, when YouTube was less than a year old. The best part is that Jackson was the only musician involved who actually played Sonic games. As Jackson himself put it, “He-he!”
|
The Rumors Were True: Michael Jackson Actually Did Write Music for Sonic the Hedgehog
For years, rumors have eddied around the internet that Michael Jackson, the King of Pop himself, composed music for the classic Sega Genesis game Sonic the Hedgehog 3 (1994). We reported this back in 2009, but rumors have existed for longer than that, and commenters continue to argue the matter on YouTube and Reddit. But now we finally have a definitive answer. The rumors are true: Michael Jackson did in fact compose music for Sonic, and the music made it into the game. According to a wonderful exposé on the Huffington Post, Jackson composed a wide variety of music, including a lot of beat-boxing and writing legitimate “high-profile” songs.
Sega maintains it never worked with Jackson on Sonic 3, and is “not in the position to respond” to questions about allegations to the contrary. “We have nothing to comment on the case,” the company said.
But the men whom Sega credited with writing the music say otherwise. Six men — Brad Buxer, Bobby Brooks, Doug Grigsby III, Darryl Ross, Geoff Grace and Cirocco Jones — are listed as songwriters in Sonic 3’s endgame scroll. Buxer, Grigsby and Jones tell The Huffington Post that Jackson worked with them on a soundtrack for Sonic 3 — and that the music they created with Jackson ended up in the final product.
People have long opined that Sega dropped Jackson because of the 1993 allegations that he molested a child, but Jackson removed his own name from the game because he was unhappy with the sound quality once the music was compressed (apparently the King of Pop didn’t understand the way video games worked, even though he was the subject of one in 1990). The music for the end credits does sound an awful lot like “Stranger in Moscow,” and a video examining the two songs became one of the first viral videos in 2006, when YouTube was less than a year old. The best part is that Jackson was the only musician involved who actually played Sonic games. As Jackson himself put it, “He-he!”
|
yes
|
Pop Culture
|
Did Michael Jackson compose songs for Sonic the Hedgehog 3?
|
yes_statement
|
"michael" jackson "composed" "songs" for sonic the hedgehog "3".. the "songs" in sonic the hedgehog "3" were "composed" by "michael" jackson.
|
https://consequence.net/2016/01/michael-jackson-secretly-composed-music-for-sonic-the-hedgehog-video-game/
|
Michael Jackson secretly composed music for Sonic the Hedgehog ...
|
Michael Jackson secretly composed music for Sonic the Hedgehog video game
It’s long held that Michael Jackson’s greatest contribution to video games came with the ridiculous/awesome 1990 arcade beat-’em-up Moonwalker. However, rumors have persisted over the years that the King of Pop also composed music for the 1994 Sega game Sonic the Hedgehog 3, but had the music removed for reasons only speculated upon. Now, a new Huffington Post expose on the strange journey of Jackson’s soundtrack claims to prove once and for all that not only did he write and record music for the game, but that the songs actually made it in.
The story goes that Jackson was so enamored with the Sonic games that, upon hearing work was underway on a third installment, he called up Sega and invited himself over. While touring the facility, a developer working on the new game asked if Jackson wanted to contribute music for the soundtrack. Brad Buxer, who has a songwriter credit on Sonic 3 and was working with Jackson on music of his own, confirmed to HuffPo that they collaborated. “I was working with Michael on the Dangerous album,” Buxer said, “and he told me he was going to be doing the Sonic the Hedgehog soundtrack for Sonic 3. He asked me if I would help him with it.”
According to Buxer, some 41 tracks were recorded. “We did use a lot of samples made from [Jackson’s] beatboxing,” added Buxer. “We would chop this up and use it in cues. Of course there were Michael ‘he-he’s’ and other signature Michaelisms.” However, because Jackson chose to record in “high-profile” studio quality, much of the music was squashed down thanks to the compression necessary to fit it on a 16-bit game. Because of this, Jackson was dissatisfied with the final quality of the sound.
“Michael wanted his name taken off the credits if they couldn’t get it to sound better,” Buxer explained. (Rumors also maintain that Jackson’s then-fresh accusations surrounding child sexual abuse could have led to Sega pulling his name from the game.) Doug Grigsby III and Cirocco Jones, two other credited songwriters on Sonic 3, also confirmed this, with all noting that Jackson was essentially the leader of the soundtrack team.
Although his name was left off the final game (for whatever reason), it turns out his music was very much kept in. “Oh, it did get in the game,” Grigsby insisted. “The stuff we handed in, the stuff we did, made it. To. The game.”
In fact, careful listeners claim that you can flat out hear famous Jackson jams within the score. There’s “Carnival Night Act 1”, which is almost a dead ringer for “Jam”; “Azure Lake”, which sounds like a sped-up “Black or White”; and the “Act 1 Boss” battle, which beats like “In the Closet” and still features some trademark “woo”s and “c’mon”s. Most incredibly of all, the closing credits song sounds just like an early attempt at “Stranger in Moscow”.
You can hear the musical similarities for yourself by breaking out that old Sega Genesis or listening to the track-by-track comparisons below. Note that the YouTube video was made by a 15-year-old a few years back, but it still gives a good idea of how Jackson’s stamp is clearly on this game.
|
Michael Jackson secretly composed music for Sonic the Hedgehog video game
It’s long held that Michael Jackson’s greatest contribution to video games came with the ridiculous/awesome 1990 arcade beat-’em-up Moonwalker. However, rumors have persisted over the years that the King of Pop also composed music for the 1994 Sega game Sonic the Hedgehog 3, but had the music removed for reasons only speculated upon. Now, a new Huffington Post expose on the strange journey of Jackson’s soundtrack claims to prove once and for all that not only did he write and record music for the game, but that the songs actually made it in.
The story goes that Jackson was so enamored with the Sonic games that, upon hearing work was underway on a third installment, he called up Sega and invited himself over. While touring the facility, a developer working on the new game asked if Jackson wanted to contribute music for the soundtrack. Brad Buxer, who has a songwriter credit on Sonic 3 and was working with Jackson on music of his own, confirmed to HuffPo that they collaborated. “I was working with Michael on the Dangerous album,” Buxer said, “and he told me he was going to be doing the Sonic the Hedgehog soundtrack for Sonic 3. He asked me if I would help him with it.”
According to Buxer, some 41 tracks were recorded. “We did use a lot of samples made from [Jackson’s] beatboxing,” added Buxer. “We would chop this up and use it in cues. Of course there were Michael ‘he-he’s’ and other signature Michaelisms.” However, because Jackson chose to record in “high-profile” studio quality, much of the music was squashed down thanks to the compression necessary to fit it on a 16-bit game. Because of this, Jackson was dissatisfied with the final quality of the sound.
“Michael wanted his name taken off the credits if they couldn’t get it to sound better,” Buxer explained.
|
yes
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
yes_statement
|
"neanderthals" had larger "brains" than "modern" "humans".. "modern" "humans" had smaller "brains" than "neanderthals".
|
https://www.sapiens.org/biology/neanderthal-brain/
|
The Neanderthal Brain—Clues About Cognition – SAPIENS
|
The Neanderthal Brain—Clues About Cognition
Anna Goldfield is an archaeologist and producer on the Kids’ Podcast team at American Public Media, where she creates music, makes silly noises into a microphone, and writes about science and history for kids of all ages. Goldfield also hosts The Dirt Podcast, a weekly show for all audiences about archaeology, anthropology, and stories from our shared human past.
Please note that this article includes images of human remains.
✽
One of the most tantalizing topics about Neanderthals is their cognition: how it developed and whether it was much different from patterns of thought in Homo sapiens.
We know from the archaeological record that much of Neanderthal hunting, foraging, and toolmaking behavior was quite similar to that of anatomically modern humans in the same time period, some 50,000 years ago. Recent evidence for Neanderthal art also suggests that they had the potential for symbolic and abstract thinking, which had previously only been attributed to H. sapiens. These clues hint that Neanderthals may have been broadly capable of the same mental tasks as anatomically modern humans, and yet the standard view is that modern humans won the evolutionary race.
What let them win? Was there something about the Neanderthals’ cognitive capacity that didn’t measure up?
Researchers have never found any soft tissue from Neanderthals. But even in the absence of actual gray matter, researchers can still see something of the Neanderthal brain simply by looking at the skull. The brain and its bony casing grow so closely together that an imprint of the soft tissue can actually be seen on the inside of the skull. The size, shape, and texture imprinted on the surface of this empty space (together creating a real or virtual shape called an endocast) reveal something about the brain once housed inside.
Overall, Neanderthal endocasts, and thus their brains, are bigger and of a different shape than those of modern humans. The occipital (rear) portion of the Neanderthal cranium is elongated, resulting in a feature termed the “occipital bun.” Picture a ballerina with her hair tied back at the base of her skull, and you’ll get a rough idea of the shape. Researchers have wondered how that bigger size and odd shape might also affect the size and shape of different regions of the brain, thus influencing patterns of cognition.
Chris Stringer, a merit researcher at the Natural History Museum, London, face to face with a Neanderthal skull at the press launch of the museum’s new Treasures gallery.
Gareth Fuller/Getty Images
A 2018 study used CT scans of four adult Neanderthal skulls and four anatomically modern H. sapiens skulls, and MRI scans from more than a thousand living human subjects to create endocasts of their brains. As expected, the Neanderthal brains were slightly bigger and more elongated than those of modern humans. To figure out how all of the different regions of the brain probably fit into the Neanderthal’s differently shaped space, the team used the modern human brain as a starting point and manipulated it with software to fit the proportions of Neanderthal endocasts. The study found that early H. sapiens probably had a larger cerebellum than Neanderthals—a part of the brain that, in modern humans, is important for both motor skills and higher cognition, including language processing, learning and reasoning, and social abilities. Differences in how Neanderthal and H. sapiens populations transmitted information within their social groups and passed on skills important to a hunter-gatherer lifestyle could have had a major impact on each population’s success and adaptability.
Left to right: Antonio García-Tabernero, Antonio Rosas, and Luis Ríos beside the skeleton of a Neanderthal child found in Spain.
Andrés Díaz/CSIC Communications Department
Neanderthal brain growth from infancy to adulthood is also a process researchers can understand by studying endocasts. An especially important fossil for this topic is the skeleton of a young male child who was around 8 years old when he died. The skull of this Neanderthal, unearthed at the Spanish site of El Sidrón, was still growing at the time of his death and was only around 87 percent of the full volume of the average adult male Neanderthal. In contrast, a human child of the same age would have completed almost 95 percent of their total cranial growth. Along with other similar finds of infants and children in Europe and the Levant, this suggests that, after birth, Neanderthal brains developed more slowly than those of modern humans.
Some of this slower growth rate may have its roots in the evolutionary trade-off between large brains and the limitations of the birth canal. An infant’s head can only be so large before birth becomes dangerous for both mother and newborn. Another factor limiting brain size is the high energetic cost of growing brain tissue. For these reasons, we large-brained humans, unlike other primates, are born before much of our brain has finished developing. As a result, an infant’s mother, or other members of the social group, must care for the newborn until it is able to fend for itself. Based on what researchers now know from Neanderthal endocasts, the maturation process was even more prolonged in our extinct relatives.
A final insight into the Neanderthal brain comes from the world of genetics. In an effort to understand how Neanderthal brains might have functioned, scientists at the Stem Cell Program at the University of California, San Diego (UCSD), are cultivating tiny versions of Neanderthal brains in a dish.
Approximately one-third of the human genome, or around 10,000 genes, govern neurological development. And modern humans living outside of Africa have less than 4 percent Neanderthal DNA. But one particular neurological-development gene, NOVA1, is extremely similar between humans and Neanderthals—only one pair of DNA “letters” differs between the two. That makes it an excellent target for study. Program director Alysson Muotri and colleagues use the gene-editing tool CRISPR to alter contemporary human stem cells, tweaking their NOVA1 code to match that of a Neanderthal. Then they coax the cells to become brain cells in a petri dish—a process that takes six to eight months.
The NOVA1 research is just in its beginning stages, and the tiny portions of Neanderthal brain cells currently occupying lab space at UCSD can’t yet tell us how Neanderthal brains functioned. But, along with what can be gleaned from the Neanderthal cranium, the study is an intriguing glimpse into the Neanderthal mind.
This column is part of an ongoing series about the Neanderthal body: a head-to-toe tour. See our interactive graphic.
Correction: May 10, 2019 An earlier version of this article incorrectly stated that modern humans share approximately 20 percent of our genome with Neanderthals. The correct figure is 1–4 percent. This statement was also corrected to identify the percentage of shared genetic material with modern humans outside of Africa.
|
What let them win? Was there something about the Neanderthals’ cognitive capacity that didn’t measure up?
Researchers have never found any soft tissue from Neanderthals. But even in the absence of actual gray matter, researchers can still see something of the Neanderthal brain simply by looking at the skull. The brain and its bony casing grow so closely together that an imprint of the soft tissue can actually be seen on the inside of the skull. The size, shape, and texture imprinted on the surface of this empty space (together creating a real or virtual shape called an endocast) reveal something about the brain once housed inside.
Overall, Neanderthal endocasts, and thus their brains, are bigger and of a different shape than those of modern humans. The occipital (rear) portion of the Neanderthal cranium is elongated, resulting in a feature termed the “occipital bun.” Picture a ballerina with her hair tied back at the base of her skull, and you’ll get a rough idea of the shape. Researchers have wondered how that bigger size and odd shape might also affect the size and shape of different regions of the brain, thus influencing patterns of cognition.
Chris Stringer, a merit researcher at the Natural History Museum, London, face to face with a Neanderthal skull at the press launch of the museum’s new Treasures gallery.
Gareth Fuller/Getty Images
A 2018 study used CT scans of four adult Neanderthal skulls and four anatomically modern H. sapiens skulls, and MRI scans from more than a thousand living human subjects to create endocasts of their brains. As expected, the Neanderthal brains were slightly bigger and more elongated than those of modern humans.
|
yes
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
yes_statement
|
"neanderthals" had larger "brains" than "modern" "humans".. "modern" "humans" had smaller "brains" than "neanderthals".
|
https://www.smithsonianmag.com/science-nature/science-shows-why-youre-smarter-than-a-neanderthal-1885827/
|
Science Shows Why You're Smarter Than a Neanderthal | Science ...
|
Science Shows Why You’re Smarter Than a Neanderthal
A Neanderthal’s skull (right) was larger than a human’s (left) and had a similar inner volume for mental capacity, but new research indicates less of it was devoted to higher-order thinking. Image via Wikimedia Commons/DrMikeBaxter
Neanderthals never invented written language, developed agriculture or progressed past the Stone Age. At the same time, they had brains just as big in volume as modern humans’. The question of why we Homo sapiens are significantly more intelligent than the similarly big-brained Neanderthals—and why we survived and proliferated while they went extinct—has puzzled scientists for some time.
Now, a new study by Oxford researchers provides evidence for a novel explanation. As they detail in a paper published today in the Proceedings of the Royal Society B, a greater percentage of the Neanderthal brain seems to have been devoted to vision and control of their larger bodies, leaving less mental real estate for higher thinking and social interactions.
The research team, led by Eiluned Pearce, came to the finding by comparing the skulls of 13 Neanderthals who lived 27,000 to 75,000 years ago to 32 human skulls from the same era. In contrast to previous studies, which merely measured the interior of Neanderthal skulls to arrive at a brain volume, the researchers attempted to come to a “corrected” volume, which would account for the fact that the Neanderthals’ brains were in control of rather differently-proportioned bodies than our ancestors’ brains were.
A replica of the La Ferrassie 1 Neanderthal skull, the largest and most complete Neanderthal skull ever found. Image via the Natural History Museum London
One of the easiest differences to quantify, they found, was the size of the visual cortex—the part of the brain responsible for interpreting visual information. In primates, the volume of this area is roughly proportional to the size of the animal’s eyes, so by measuring the Neanderthals’ eye sockets, they could get a decent approximation of their visual cortex as well. The Neanderthals, it turns out, had much larger eyes than ancient humans. The researchers speculate that this could be because they evolved exclusively in Europe, which is of higher latitude (and thus has poorer light conditions) than Africa, where H. sapiens evolved.
Along with eyes, Neanderthals had significantly larger bodies than humans, with wider shoulders, thicker bones and a more robust build overall. To account for this difference, the researchers drew upon previous research into the estimated body masses of the skeletons found with these skulls and of other Neanderthals. In primates, the amount of brain capacity devoted to body control is also proportionate to body size, so the scientists were able to calculate roughly how much of the Neanderthals’ brains were assigned to this task.
After correcting for these differences, the research team found that the amount of brain volume left over for other tasks—in other words, the mental capacity not devoted to seeing the world or moving the body—was significantly smaller for Neanderthals than for ancient H. sapiens. Although the average raw brain volumes of the two groups studied were practically identical (1473.84 cubic centimeters for humans versus 1473.46 for Neanderthals), the average “corrected” Neanderthal brain volume was just 1133.98 cubic centimeters, compared to 1332.41 for the humans.
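To make the arithmetic in the preceding paragraph explicit, the short sketch below subtracts the quoted “corrected” volumes from the quoted raw volumes. The variable names and the simple subtraction are illustrative only, not the study’s regression-based correction procedure.

```python
# Back-of-envelope arithmetic using only the volumes quoted above (cm^3).
# Raw minus "corrected" volume approximates the share the article
# attributes to vision and body control; this is an illustration,
# not the actual method of the Proceedings of the Royal Society B study.
raw = {"human": 1473.84, "neanderthal": 1473.46}
corrected = {"human": 1332.41, "neanderthal": 1133.98}

for species in raw:
    overhead = raw[species] - corrected[species]
    share = overhead / raw[species]
    print(f"{species}: {overhead:.2f} cm^3 for vision/body control "
          f"({share:.0%} of raw volume)")

gap = corrected["human"] - corrected["neanderthal"]
print(f"Corrected-volume gap favoring humans: {gap:.2f} cm^3")
```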
This divergence in mental capacity for higher cognition and social networking, the researchers argue, could have led to the wildly different fates of H. sapiens and Neanderthals. “Having less brain available to manage the social world has profound implications for the Neanderthals’ ability to maintain extended trading networks,” Robin Dunbar, one of the co-authors, said in a press statement. “[These differences] are likely also to have resulted in less well developed material culture—which, between them, may have left them more exposed than modern humans when facing the ecological challenges of the Ice Ages.”
Previous studies have also suggested that the internal organization of Neanderthal brains differed significantly from ours. For example, a 2010 project used computerized 3D modeling and Neanderthal skulls of varying ages to find that their brains developed at different rates over the course of an individual’s adolescence as compared to human brains despite comparable brain volumes.
The overall explanation for why Neanderthals went extinct while we survived, of course, is more complicated. Emerging evidence points to the idea that Neanderthals were smarter than previously thought, though perhaps not smart enough to outmaneuver humans for resources. But not all of them had to—in another major 2010 discovery, a team of researchers compared human and Neanderthal genomes and found evidence that our ancestors in Eurasia may have interbred with Neanderthals, preserving a few of their genes amidst our present-day DNA.
Apart from the offspring of a small number of rare interbreeding events, though, the Neanderthals did die out. Their brains might have been just as big as ours, but ours might have been better at a few key tasks–those involved in building social bonds in particular—allowing us to survive the most recent glacial period while the Neanderthals expired.
|
The researchers speculate that this could be because they evolved exclusively in Europe, which is of higher latitude (and thus has poorer light conditions) than Africa, where H. sapiens evolved.
Along with eyes, Neanderthals had significantly larger bodies than humans, with wider shoulders, thicker bones and a more robust build overall. To account for this difference, the researchers drew upon previous research into the estimated body masses of the skeletons found with these skulls and of other Neanderthals. In primates, the amount of brain capacity devoted to body control is also proportionate to body size, so the scientists were able to calculate roughly how much of the Neanderthals’ brains were assigned to this task.
After correcting for these differences, the research team found that the amount of brain volume left over for other tasks—in other words, the mental capacity not devoted to seeing the world or moving the body—was significantly smaller for Neanderthals than for ancient H. sapiens. Although the average raw brain volumes of the two groups studied were practically identical (1473.84 cubic centimeters for humans versus 1473.46 for Neanderthals), the average “corrected” Neanderthal brain volume was just 1133.98 cubic centimeters, compared to 1332.41 for the humans.
This divergence in mental capacity for higher cognition and social networking, the researchers argue, could have led to the wildly different fates of H. sapiens and Neanderthals. “Having less brain available to manage the social world has profound implications for the Neanderthals’ ability to maintain extended trading networks,” Robin Dunbar, one of the co-authors, said in a press statement. “[These differences] are likely also to have resulted in less well developed material culture—which, between them, may have left them more exposed than modern humans when facing the ecological challenges of the Ice Ages.”
Previous studies have also suggested that the internal organization of Neanderthal brains differed significantly from ours.
|
no
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
yes_statement
|
"neanderthals" had larger "brains" than "modern" "humans".. "modern" "humans" had smaller "brains" than "neanderthals".
|
https://www.discovermagazine.com/planet-earth/neanderthal-brains-bigger-not-necessarily-better
|
Neanderthal Brains: Bigger, Not Necessarily Better | Discover ...
|
Neanderthals had bigger brains than modern humans. In any textbook on human evolution, you’ll find that fact, often accompanied by measurements of endocranial volume, the space inside a skull. On average, this value is about 1410 cm3 (~6 cups) for Neanderthals and 1350 cm3 (5.7 cups) for recent humans.
So does that quarter-cup of brain matter, matter? Were Neanderthals smarter than our kind?
While brain size is important, cognitive abilities are influenced by numerous factors including body size, neuron density and how particular brain regions are enlarged and connected. Some of these variables are unknowable for Neanderthals, as we only have their cranial bones and not their brains. But anthropologists have made the most of these hollow skulls, to learn what they can about the Neanderthal mind.
Two Paths to Big Brains
The question of Neanderthal intelligence has fascinated scientists since 1856, when the first fossils classified as Homo neanderthalensis were discovered. From the start, they got a bad reputation. In an early study of the skull, “The Reputed Fossil Man of the Neanderthal,” geologist William King speculated the Neanderthal’s “thoughts and desires … never soared beyond those of the brute.” The view persists today from GEICO ads to the Oxford English Dictionary.
But is there basis for this stereotype? After all, Neanderthals were our evolutionary cousins, sharing about 99.8 percent of our genetic code, including genes important for brain expansion and language. They were similar enough, in terms of biology and behavior, that Homo sapiens and Neanderthals interbred, in several periods and places between 40,000 and 100,000 years ago.
Since diverging from a common ancestor over 500,000 years ago, Neanderthals and modern humans evolved distinctive anatomies (Credit: Encyclopedia Britannica/UIG via Getty Image)
At the same time, Neanderthals were distinct enough to be classified as a separate species. Sometime between 520,000 and 630,000 years ago, the shared ancestors of Neanderthals and Homo sapiens diverged and embarked on separate evolutionary paths. Members of that population that spread to Europe eventually evolved into Neanderthals, whereas those in Africa gave rise to Homo sapiens or modern humans. During this period of separation, the groups evolved distinctive anatomies. Modern humans were relatively tall and lean. Neanderthals became short and massive, with average males about 5 foot 4 inches, 170 pounds and females 5 foot 1 inch, 145 pounds, based on estimates from femur and pelvis size.
Since their common ancestor, the lineages also increased in brain size, but in different ways. To accommodate bigger brains, Neanderthal crania expanded lengthwise like footballs, whereas modern human skulls became more globular, like soccer balls. By 150,000 years ago, members of both species had brains surpassing 1400 cm3 — about three times larger than chimpanzees, our closest living relatives.
How Much Can that Skull Hold
To measure fossil brain volume, anthropologists have traditionally filled skulls with beads or seeds, and dumped the contents into a graduated cylinder (a precise measuring cup). They’ve also submerged molds of skulls into water, measuring the volume displaced. Today CT (computed tomography) scanning methods offer more accurate (and less-messy) measurements, but much of the data in textbooks and other references was collected the old fashioned way.
Based on these values, we can confidently say fossil Neanderthals and modern humans from the same time period had similar brain sizes. Twenty-three Neanderthal skulls, dating between 40,000 and 130,000 years ago, had endocranial volumes between 1172 and 1740 cm3. A sample of 60 Stone Age Homo sapiens ranged from 1090 to 1775 cm3.
For recent humans, average adult brain size is 1,349 cm3 based on measurements from 122 global populations compiled in the 1980s. Excluding extreme conditions like microcephaly, people span from 900 to 2,100 cm3. That means the average Neanderthal brain volume, of roughly 1410 cm3, is higher than the mean value for humans today. But all the Neanderthals that we’ve measured fall comfortably within the range of living people.
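As a quick check on the claim that every measured Neanderthal falls within the range of living people, the sketch below plugs in only the figures quoted in the preceding paragraphs; it is illustrative and not data drawn from the underlying studies.

```python
# Plug in the endocranial volumes quoted above (cm^3) and test the
# article's claim that measured Neanderthal fossils fall within the
# range of living humans. Purely illustrative.
neanderthal_fossils = (1172, 1740)    # 23 skulls, 40,000-130,000 years old
sapiens_fossils = (1090, 1775)        # 60 Stone Age Homo sapiens
living_humans = (900, 2100)           # recent global populations
avg_neanderthal, avg_recent_human = 1410, 1349

within = (living_humans[0] <= neanderthal_fossils[0]
          and neanderthal_fossils[1] <= living_humans[1])
print(f"Neanderthal fossils {neanderthal_fossils} within living-human "
      f"range {living_humans}: {within}")
print(f"Difference in averages: {avg_neanderthal - avg_recent_human} cm^3 "
      "(roughly the quarter cup mentioned above)")
```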
Body Size and Brain Shape
So we know Neanderthals had similar-sized, if not bigger, brains. But their brains could have been organized or proportioned differently, resulting in important cognitive differences. Because Neanderthals had more massive bodies, they may have needed more brain volume for basic somatic maintenance — leaving less brain matter for other functions.
Some scientists also suggest that Neanderthals had relatively better vision. In a 2013 study, researchers estimated visual cortex volume based on the size of orbits, or the holes in skulls for eyes. Neanderthals had bigger orbits, implying larger visual cortices and better vision, which may have been an adaptation for higher latitudes, with less light (although it’s questionable whether orbital size is a reliable indicator of visual cortex volume in humans).
And what did Homo sapiens do with our extra brain space? Some researchers have argued modern humans had larger cerebellums, making us better at information processing. Others have suggested we prioritized smell: Modern human brains had relatively large olfaction regions according to a 2011 study in Nature Communications, which compared the internal base of skulls. The authors propose that heightened sense of smell would have been beneficial for subconsciously identifying safe foods or detecting social information (like who is kin, angry or a suitable mate).
I know you’re thinking, “I’d take vision over smell any day.” That’s my reaction too. What matters here is this: We don’t know if this difference played any role in the success of modern humans and the extinction of Neanderthals. But identifying any such differences — in brains, bodies or culture — gives us a starting point for understanding what gave our species an evolutionary edge.
|
520,000 and 630,000 years ago, the shared ancestors of Neanderthals and Homo sapiens diverged and embarked on separate evolutionary paths. Members of that population that spread to Europe eventually evolved into Neanderthals, whereas those in Africa gave rise to Homo sapiens or modern humans. During this period of separation, the groups evolved distinctive anatomies. Modern humans were relatively tall and lean. Neanderthals became short and massive, with average males about 5 foot 4 inches, 170 pounds and females 5 foot 1 inch, 145 pounds, based on estimates from femur and pelvis size.
Since their common ancestor, the lineages also increased in brain size, but in different ways. To accommodate bigger brains, Neanderthal crania expanded lengthwise like footballs, whereas modern human skulls became more globular, like soccer balls. By 150,000 years ago, members of both species had brains surpassing 1400 cm3 — about three times larger than chimpanzees, our closest living relatives.
How Much Can that Skull Hold
To measure fossil brain volume, anthropologists have traditionally filled skulls with beads or seeds, and dumped the contents into a graduated cylinder (a precise measuring cup). They’ve also submerged molds of skulls into water, measuring the volume displaced. Today CT (computed tomography) scanning methods offer more accurate (and less-messy) measurements, but much of the data in textbooks and other references was collected the old fashioned way.
Based on these values, we can confidently say fossil Neanderthals and modern humans from the same time period had similar brain sizes. Twenty-three Neanderthal skulls, dating between 40,000 and 130,000 years ago, had endocranial volumes between 1172 and 1740 cm3. A sample of 60 Stone Age Homo sapiens ranged from 1090 to 1775 cm3.
|
no
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
yes_statement
|
"neanderthals" had larger "brains" than "modern" "humans".. "modern" "humans" had smaller "brains" than "neanderthals".
|
https://www.livescience.com/60481-how-neanderthals-got-such-large-brains.html
|
How Neanderthals Got Their Unusually Large Brains | Live Science
|
How Neanderthals Got Their Unusually Large Brains
Neanderthals had larger brains than modern humans do, and a new study of a Neanderthal child's skeleton now suggests this is because their brains spent more time growing.
Modern humans are known for having unusually large brains for their size. It takes a lot of energy to develop such large brains, and previous research suggested that the high cost of modern-human brain development was a key reason why human growth in general is slow compared with that of other primates.
"If you look at earlier primates, they have much quicker development," said study co-lead author Antonio Rosas, chairman of the paleoanthropology group at Spain's National Museum of Natural Sciences, in Madrid. [In Photos: Neanderthal Burials Uncovered]
Researchers knew that Neanderthals had even larger brains than modern humans do, but it was unclear whether the Neanderthal pattern of growth was as slow as it is in modern humans or whether it was faster, as in other primates.
To learn more about Neanderthal development, scientists investigated an exceptionally well-preserved, nearly complete skeleton of a young male Neanderthal unearthed at the 49,000-year-old site of El Sidrón in Spain. A study in March found that Neanderthals at El Sidrón once dined on woolly rhinoceroses and wild sheep, and even self-medicated with painkillers and antibiotics.
Researchers working inside the El Sidrón cave in Spain where the skeleton of a Neanderthal boy was found. (Image credit: Paleoanthropology Group MNCN-CSIC)
To find out how old the Neanderthal was when he died, the scientists cut into the skeleton's teeth and counted the number of growth layers, much as one can estimate a tree's age by counting the number of rings in its trunk. They estimated the boy was about 7.7 years old when he died. The cause of his death was unclear, but it did not appear to be disease or trauma.
The skull of the Neanderthal was still maturing at the time of death, and his brain was only 87.5 percent the size of the average adult Neanderthal brain. "We think this Neanderthal boy's brain was still growing in volume," Rosas told Live Science. In contrast, "at about the same age, the modern human brain would have reached nearly 95 percent of its volume," he added.
These findings suggest "it took a little bit longer for the brain to grow in Neanderthals than in modern humans," Rosas said. Similarly, a number of the Neanderthal's vertebrae had not yet fused, although those same vertebrae tend to fuse in modern humans by about the ages of 4 to 6.
The skeleton of a Neanderthal boy recovered from the El Sidrón cave in Spain has revealed that Neanderthal brains spent a long time growing. (Image credit: Paleoanthropology Group MNCN-CSIC)
Still, the researchers noted that maturation of most other features of the Neanderthal boy's anatomy matched the maturation of those of a modern human of the same age. "Our main conclusion is that Neanderthals shared a common [overall] pattern of growth with modern humans, and this common pattern was possibly inherited from a common ancestor," Rosas said.
"We thought our slow way of growing was very specific, very particular, very unique to our species," Rosas said. "What we realize now is that this pattern of slow growth that allows us to have this big brain and mature slowly, with all the advantages involved with that, was also shared by different human species."
It remains uncertain what consequences, if any, this different rate of brain development might have had for how Neanderthals thought or behaved, the researchers added.
The scientists detailed their findings in the Sept. 22 issue of the journal Science.
Charles Q. Choi is a contributing writer for Live Science and Space.com. He covers all things human origins and astronomy as well as physics, animals and general science topics. Charles has a Master of Arts degree from the University of Missouri-Columbia, School of Journalism and a Bachelor of Arts degree from the University of South Florida. Charles has visited every continent on Earth, drinking rancid yak butter tea in Lhasa, snorkeling with sea lions in the Galapagos and even climbing an iceberg in Antarctica.
|
How Neanderthals Got Their Unusually Large Brains
Neanderthals had larger brains than modern humans do, and a new study of a Neanderthal child's skeleton now suggests this is because their brains spent more time growing.
Modern humans are known for having unusually large brains for their size. It takes a lot of energy to develop such large brains, and previous research suggested that the high cost of modern-human brain development was a key reason why human growth in general is slow compared with that of other primates.
"If you look at earlier primates, they have much quicker development," said study co-lead author Antonio Rosas, chairman of the paleoanthropology group at Spain's National Museum of Natural Sciences, in Madrid. [In Photos: Neanderthal Burials Uncovered]
Researchers knew that Neanderthals had even larger brains than modern humans do, but it was unclear whether the Neanderthal pattern of growth was as slow as it is in modern humans or whether it was faster, as in other primates.
To learn more about Neanderthal development, scientists investigated an exceptionally well-preserved, nearly complete skeleton of a young male Neanderthal unearthed at the 49,000-year-old site of El Sidrón in Spain. A study in March found that Neanderthals at El Sidrón once dined on woolly rhinoceroses and wild sheep, and even self-medicated with painkillers and antibiotics.
Researchers working inside the El Sidrón cave in Spain where the skeleton of a Neanderthal boy was found. (Image credit: Paleoanthropology Group MNCN-CSIC)
To find out how old the Neanderthal was when he died, the scientists cut into the skeleton's teeth and counted the number of growth layers, much as one can estimate a tree's age by counting the number of rings in its trunk. They estimated the boy was about 7.7 years old when he died. The cause of his death was unclear, but it did not appear to be disease or trauma.
|
yes
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
yes_statement
|
"neanderthals" had larger "brains" than "modern" "humans".. "modern" "humans" had smaller "brains" than "neanderthals".
|
https://science.howstuffworks.com/life/inside-the-mind/human-brain/neanderthal-bigger-brains-humans.htm
|
Neanderthals Had Bigger Brains Than Modern Humans — Why Are ...
|
Neanderthals Had Bigger Brains Than Modern Humans — Why Are We Smarter?
""
This recreation of what a living Neanderthal man would've looked like is found in the Neanderthal Museum in Mettmann, Germany. Erich Ferdinand/Flickr
A lot of us have a little Neanderthal DNA in us. Modern humans of European or Asian descent inherited somewhere between 1 and 4 percent of our genes from this hominid that went extinct 30,000 years ago. We coexisted, and apparently more than coexisted, with them for as many as 5,400 years, but then they died out, and we remained. We were two very similar hominid species, and it's tough to pinpoint the advantage Homo sapiens of the time had over the Neanderthals: We both seemed to thrive and grow our populations during the last ice age, for instance. And Neanderthals actually had larger brains than modern humans, and seem to have done very "human" things, like bury their dead, cook, and make tools and personal ornaments. So what was the difference between a Neanderthal and a modern human of the time? And did our brain give us some sort of hidden advantage?
First of all, although your average Neanderthal had a larger brain than that of the last human you spoke to, it was probably comparable in size to the brain of the Homo sapiens of the time.
"Our ancestors had larger bodies than us, and needed larger brains to control and maintain those bodies," says Dr. Eiluned Pearce, a researcher in the Department of Experimental Psychology at Oxford, and coauthor of a 2013 paper on Neanderthal brains published in the Proceedings of the Royal Society B. "And Neanderthals were even larger-bodied than the modern humans living at the same time, so it's likely they would have needed a lot more neural tissue to control their bigger muscles."
Secondly, it's not just brain size that matters here, but brain organization. Neanderthals had very large eyes, which allows us to infer some things about their brains:
"There is a simple relationship between the size of the eyeball and the size of the visual area in the brains of monkeys and apes — and in humans, of course," says Pearce's co-author Dr. Robin Dunbar, professor of Evolutionary Psychology at Oxford. "From correlations known in monkeys, we can work out how much of the Neanderthal brain was dedicated to visual processing."
And it makes sense that Neanderthals would need an extra visual boost; they evolved at higher latitudes, where there's little sunlight during the long, dark winters. Pearce and Dunbar suggest that living in low-light conditions made it necessary for the Neanderthal brain to be dominated by a tricked-out visual processing system in the back. This allowed them to see in low-light conditions — but it also took up a lot of skull real estate.
""
This illustration based on scientific scans shows the different shape and size of modern human (left) skulls and those of Neanderthals.
heavypred/Getty Images
Modern humans, on the other hand, put more energy into growing the front part of their brains, where all the complex social cognitive processes happen. This allowed them to grow their social networks to a size a Neanderthal might have found difficult to manage. So when caveman problems reared their ugly heads — cold, famine, disease — modern humans might not have been able to see quite as well as their Neanderthal counterparts, but they could maintain relationships with a larger group of people who could help them in times of trouble.
So, it's possible Neanderthals died out simply because they didn't have the people skills to get help from their buds when they needed it, which might have gradually decreased their numbers.
"It would have been an issue of social processing and social cognition for handling the complexities of human social relationships," says Dunbar. "Neanderthals would have been at the lower end of the distribution we find in normal human populations."
What might it have been like, then, to interact with a Neanderthal?
"We might find them a little on the slow, unsophisticated side," he says. "Probably pretty similar to many people we meet in everyday life, actually."
Frequently Answered Questions
How are Neanderthal brains different from humans?
There are a few key ways that Neanderthal brains are different from human brains. For one, Neanderthal brains are slightly larger than human brains on average. Additionally, the shape of the Neanderthal brain is slightly different, with a more elongated shape overall. Finally, Neanderthal brains have slightly different proportions of white and gray matter than human brains.
|
Neanderthals Had Bigger Brains Than Modern Humans — Why Are We Smarter?
""
This recreation of what a living Neanderthal man would've looked like is found in the Neanderthal Museum in Mettmann, Germany. Erich Ferdinand/Flickr
A lot of us have a little Neanderthal DNA in us. Modern humans of European or Asian descent inherited somewhere between 1 and 4 percent of our genes from this hominid that went extinct 30,000 years ago. We coexisted, and apparently more than coexisted, with them for as many as 5,400 years, but then they died out, and we remained. We were two very similar hominid species, and it's tough to pinpoint the advantage Homo sapiens of the time had over the Neanderthals: We both seemed to thrive and grow our populations during the last ice age, for instance. And Neanderthals actually had larger brains than modern humans, and seem to have done very "human" things, like bury their dead, cook, and make tools and personal ornaments. So what was the difference between a Neanderthal and a modern human of the time? And did our brain give us some sort of hidden advantage?
First of all, although your average Neanderthal had a larger brain than that of the last human you spoke to, it was probably comparable in size to the brain of the Homo sapiens of the time.
"Our ancestors had larger bodies than us, and needed larger brains to control and maintain those bodies," says Dr. Eiluned Pearce, a researcher in the Department of Experimental Psychology at Oxford, and coauthor of a 2013 paper on Neanderthal brains published in the Proceedings of the Royal Society B. "And Neanderthals were even larger-bodied than the modern humans living at the same time, so it's likely they would have needed a lot more neural tissue to control their bigger muscles. "
Secondly, it's not just brain size that matters here, but brain organization.
|
yes
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
yes_statement
|
"neanderthals" had larger "brains" than "modern" "humans".. "modern" "humans" had smaller "brains" than "neanderthals".
|
https://www.fortinberrymurray.com/todays-research/were-the-neanderthals-smarter-than-we-are
|
Were the Neanderthals smarter than we are? | Today's Research by ...
|
Were the Neanderthals smarter than we are?
Neanderthals had larger brains than modern humans do, and a new study of a Neanderthal child's skeleton now suggests this is because their brains spent more time growing.
Modern humans are known for having unusually large brains for their size. It takes a lot of energy to develop such large brains, and previous research suggested that the high cost of modern-human brain development was a key reason why human growth, in general, is slow compared with that of other primates.
What the researchers say: “If you look at earlier primates, they have much quicker development,” said the study co-lead author. The research appears in the journal Science.
Researchers knew that Neanderthals had even larger brains than modern humans do, but it was unclear whether the Neanderthal pattern of growth was as slow as it is in modern humans or whether it was faster, as in other primates.
To learn more about Neanderthal development, scientists investigated an exceptionally well-preserved, nearly complete skeleton of a young male Neanderthal unearthed at the 49,000-year-old site of El Sidrón in Spain. A study in March found that Neanderthals at El Sidrón once dined on woolly rhinoceroses and wild sheep, and even self-medicated with painkillers and antibiotics.
To find out how old the Neanderthal was when he died, the scientists cut into the skeleton's teeth and counted the number of growth layers, much as one can estimate a tree's age by counting the number of rings in its trunk. They estimated the boy was about 7.7 years old when he died. The cause of his death was unclear, but it did not appear to be disease or trauma.
The skull of the Neanderthal was still maturing at the time of death, and his brain was only 87.5 percent the size of the average adult Neanderthal brain. "We think this Neanderthal boy's brain was still growing in volume," the lead researcher said. "At about the same age, the modern human brain would have reached nearly 95 percent of its volume," he added.
These findings suggest "it took a little bit longer for the brain to grow in Neanderthals than in modern humans," he said. Similarly, a number of the Neanderthal's vertebrae had not yet fused, although those same vertebrae tend to fuse in modern humans by about the ages of 4 to 6.
Still, the researchers noted that maturation of most other features of the Neanderthal boy's anatomy matched the maturation of those of a modern human of the same age. "Our main conclusion is that Neanderthals shared a common [overall] pattern of growth with modern humans, and this common pattern was possibly inherited from a common ancestor," the researchers said.
"We thought our slow way of growing was very specific, very particular, very unique to our species," they said. "What we realize now is that this pattern of slow growth that allows us to have this big brain and mature slowly, with all the advantages involved with that, was also shared by different human species."
It remains uncertain what consequences if any, this different rate of brain development might have had for how Neanderthals thought or behaved, the researchers added.
So what? Interestingly, this study of a long-almost-lost species (Europeans and all other non-Africans carry about 5% of Neanderthal DNA in their genome) has a lot to say about modern learning. The question as to which species was smarter will now become a much hotter topic. This is because the longer brains take to physically mature, the longer the peak learning period they have. Humans, and one supposes Neanderthals, learn very fast in the early stages as the brain grows to its final size. The capacity to learn slows down the larger the brain gets (we learn more in the first year of life, when the brain is growing fastest, than at any other time).
Presumably, the Neanderthal child had the capacity to learn more because he had longer to do so. As an adult, he would've then, possibly, been smarter than modern human adults. But why should the younger maturing brain learn so much more easily? I believe one of the reasons is that we had fewer cognitive and emotional blocks to learning. The assumptions that we build up about ourselves and our capacity to learn and to understand develop as we grow. They are based on our experience, what our parents and teachers tell us about ourselves, and the influence of our relationships with our childhood friends and enemies. In the end, we become specialized in what we think we're good at. We tend to tell ourselves "I'm good at sports," or "I'm not good at math," for example, or "I don't understand science." Once that happens we cease to learn, and for the rest of our lives we hold fast to that view of ourselves; it becomes part of who we are.
Can we re-start rapid learning? Well, probably yes, and that is something that Alicia and I are working on. We are using as a base the work that has been done on the concept of a "learning mindset" or "growth mindset." The essence of this is that you can rid yourself of the shackles of negative assumptions if you are guided to experiment in new areas.
The new element that we are applying is the idea of the primacy of the human drive to acquire and deepen supportive relationships (not a new concept to TR readers). We are able to change our assumptions and our beliefs if we are committed to the relationship with someone, or a group of people, who believe something different. We change our beliefs to deepen the relationship and make ourselves more attractive to that person or group. In that way, we seek to gain their support.
This is learning, and it can be quite rapid. Thus, if we want the support of our science teacher, we will adopt the assumption "Maybe I can try to do well at science" and begin experimenting. Assuming that the teacher responds to this behavioral change and encourages it through praise and recognition (thereby activating the dopamine and oxytocin reward systems, which many studies have shown are crucial for learning), the new assumption will become solid.
The brain in this way "learns to learn" and we can adopt other experimental behaviors. This is, I believe, the basis of all adult learning and change.
|
Were the Neanderthals smarter than we are?
Neanderthals had larger brains than modern humans do, and a new study of a Neanderthal child's skeleton now suggests this is because their brains spent more time growing.
Modern humans are known for having unusually large brains for their size. It takes a lot of energy to develop such large brains, and previous research suggested that the high cost of modern-human brain development was a key reason why human growth, in general, is slow compared with that of other primates.
What the researchers say: "If you look at earlier primates, they have much quicker development," said the study co-lead author. The research appears in the journal Science.
Researchers knew that Neanderthals had even larger brains than modern humans do, but it was unclear whether the Neanderthal pattern of growth was as slow as it is in modern humans or whether it was faster, as in other primates.
To learn more about Neanderthal development, scientists investigated an exceptionally well-preserved, nearly complete skeleton of a young male Neanderthal unearthed at the 49,000-year-old site of El Sidrón in Spain. A study in March found that Neanderthals at El Sidrón once dined on woolly rhinoceroses and wild sheep, and even self-medicated with painkillers and antibiotics.
To find out how old the Neanderthal was when he died, the scientists cut into the skeleton's teeth and counted the number of growth layers, much as one can estimate a tree's age by counting the number of rings in its trunk. They estimated the boy was about 7.7 years old when he died. The cause of his death was unclear, but it did not appear to be disease or trauma.
|
yes
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
yes_statement
|
"neanderthals" had larger "brains" than "modern" "humans".. "modern" "humans" had smaller "brains" than "neanderthals".
|
https://australian.museum/learn/science/human-evolution/homo-neanderthalensis/
|
Homo neanderthalensis – The Neanderthals - The Australian Museum
|
Neanderthals co-existed with modern humans for long periods of time before eventually becoming extinct about 28,000 years ago. The unfortunate stereotype of these people as dim-witted and brutish cavemen still lingers in popular ideology but research has revealed a more nuanced picture.
Important fossil discoveries
The first Neanderthal fossil was found in 1829, but it was not recognised as a possible human ancestor until more fossils were discovered during the second half of the 19th century. Since then, thousands of fossils representing the remains of many hundreds of Neanderthal individuals have been recovered from sites across Europe and the Middle East. These include babies, children and adults up to about 40 years of age. As a result, more is known about this human ancestor than about any other.
Key specimens:
Le Moustier – a 45,000-year-old skull discovered in Le Moustier, France. The distinctive features of Neanderthals are already apparent in this adolescent individual. This shows that these characteristics were genetic and not developed during an individual’s lifetime.
Shanidar 1 – upper jaw with teeth. The front teeth of Neanderthals often show heavy wear, a characteristic that is even found in young Neanderthals. It is probable that they used their teeth as a kind of vice to help them hold animal skins or other objects as they worked.
La Ferrassie 1 – a 50,000-year-old skull discovered in 1909 in La Ferrassie, France. This skull of an elderly male has the features associated with ‘classic’ European Neanderthals.
Amud 1 – a 45,000-year-old skull discovered in 1961 by Hisashi Suzuki in Amud, Israel. This individual was more than 180 centimetres tall and had the largest brain of any fossil human (1740 cubic centimetres). Neanderthals probably migrated to the Middle East during times of harsh European winters. These individuals had less robust features than their European counterparts.
Maba – a partial skull classified as Homo sp. (species uncertain) and discovered in Maba, China. This partial skull, dated to about 120,000 – 140,000 years old, shows remarkable similarities to European Neanderthals and its discovery in southern China suggests the possibility that Neanderthals travelled further east than once thought. More fossil evidence from Asia is needed to understand the significance of this specimen.
La Chapelle-aux-Saints – a 50,000-year-old skull discovered in 1908 in La Chapelle-aux-Saints, France. This male individual had lost most of his teeth and his skeleton showed evidence of major injuries and disease including a healed broken hip, and arthritis of the lower neck, back, hip and shoulders. He survived for quite some time with these complaints, which indicates that these people cared for the sick and elderly.
Neanderthal 1 – a 45,000-year-old skullcap discovered in 1856 in Feldhofer Grotto, Neander Valley, Germany. This is the ‘type specimen’ or official representative of this species.
Kebara 2 – 60,000-year-old partial skeleton discovered in 1983 in Kebara cave, Israel. This relatively complete skeleton belonged to an adult male. It was deliberately buried but as no grave goods were found it is difficult to infer any ritualistic behaviour.
Lagar Velho – a 24,000-year-old skeleton of a Homo sapiens boy discovered in 1998 in Abrigo do Lagar Velho, central western Portugal. This specimen has been described by its discoverers (and particularly Erik Trinkaus) as a Neanderthal-Homo sapiens hybrid. This interpretation was based on knee and leg proportions but as the head, pelvis and forearms are decidedly human it is more likely that the robustness is a climatic adaptation (see Tattersall and Schwartz). Comparisons to other humans of this period are difficult due to lack of knowledge on variations within child populations.
What the Neanderthal name means
Homo, is a Latin word meaning ‘human’ or ‘man’. The word neanderthalensis is based on the location where the first major specimen was discovered in 1856 – the Neander Valley in Germany. The German word for valley is ‘Tal’ although in the 1800s it was spelt ‘Thal’. Homo neanderthalensis therefore means ‘Human from the Neander Valley’.
Some people refer to this species as the Neandertals (with no 'h') to reflect the modern German spelling rather than the original spelling, Neanderthal, used to define the species.
Distribution
Remains of this species have been found scattered across Europe and the Middle East. The eastern-most occurrence of a Neanderthal may be represented by a fossil skull from China known as ‘Maba’.
A study published in 2009 confirms the presence of three separate sub-groups of Neanderthals, between which slight differences could be observed, and suggests the existence of a fourth group in western Asia. The study analysed the genetic variability, and modelled different scenarios, based on the genetic structure of the maternally transmitted mitochondrial DNA (mtDNA). The study was possible thanks to the publication, since 1997, of 15 mtDNA sequences from 12 Neanderthals. According to the study, the size of the Neanderthal population was not constant over time and a certain amount of migration occurred among the sub-groups.
Relationships with other species
While we are closely related to the Neanderthals, they are not our direct ancestors. Evidence from the fossil record and genetic data shows they are a distinct species that developed as a side branch in our family tree. Some European Homo heidelbergensis fossils were showing early Neanderthal-like features by about 300,000 years ago and it is likely that Neanderthals evolved in Europe from this species.
The name Homo sapiens neanderthalensis was once common when Neanderthals were considered to be members of our own species, Homo sapiens. This view and name are no longer favoured.
Interbreeding with modern humans?
Groundbreaking analysis of the Neanderthal genome (nuclear DNA and genes) published in 2010 shows that modern humans and Neanderthals did interbreed, although on a very limited scale. Researchers compared the genomes of five modern humans with the Neanderthal, discovering that Europeans and Asians share about 1-4% of their DNA with Neanderthals and Africans none. This suggests that modern humans bred with Neanderthals after moderns left Africa but before they spread to Asia and Europe. The most likely location is the Levant, where both species co-existed for thousands of years at various times between 50-90,000 years ago. Interestingly, the data doesn't support wide-scale interbreeding between the species in Europe, where it would have been most likely given their close proximity. Researchers are now questioning why interbreeding occurred on such a low scale, given that it was biologically possible. The answer may lie in cultural differences.
Sharing Europe with the Denisovans?
Did the Neanderthals also live alongside another human species in Europe? An interesting case making headlines in 2010 was the discovery of a finger bone and tooth from Denisova cave in Russia. The bones were found in 2008 and date to about 30,000-50,000 years old. Mitochondrial DNA (mtDNA) was extracted from the remains, and then sequenced. The result was that the mtDNA did not match either modern human or Neanderthal mtDNA.
Little else could be gleaned from these studies so scientists started work on extracting nuclear DNA. This produced far more information. The 'Denisovans', as they have been nicknamed, were more closely related to Neanderthals than modern humans. This suggests the Neanderthals and 'Denisovans' shared a common ancestor after modern humans and Neanderthals split. Perhaps this ancestor left Africa half a million years ago with the Neanderthals spreading west to the Near East and Europe while the Denisovans headed east. However, this does not necessarily mean they are a 'new' species as they may be already known from fossils that have no DNA record to compare, such as Homo heidelbergensis or H. antecessor. (See Nature, December 2010)
Neanderthals' key physical features
Neanderthals are recognisably human but have distinctive facial features and a stocky build that were evolutionary adaptations to cold, dry environments.
Body size and shape
Neanderthals were generally shorter and had more robust skeletons and muscular bodies than modern humans
males averaged about 168 centimetres in height while females were slightly shorter at 156 centimetres.
Brain
brain size was larger than the average modern human brain and averaged 1500 cubic centimetres. This is expected, as Neanderthals were generally heavier and more muscular than modern humans. People that live in cold climates also tend to have larger brains than those living in warm climates.
Skull
distinctive skull shape that was long and low, with a rounded brain case
back of the skull had a bulge called the occipital bun and a depression (the suprainiac fossa) for the attachment of strong neck muscles
thick but rounded brow ridge lay under a relatively flat and receding forehead
mid-face region showed a characteristic forward projection (this resulted in a face that looked like it had been ‘pulled’ forward by the nose)
orbits (eye sockets) were large and rounded
nose was broad and very large
Jaws and teeth
jaws were larger and more robust than those of modern humans and had a gap called the retromolar space, behind the third molars (wisdom teeth) at the back of the jaw.
jaw lacked the projecting bony chin that is found in Homo sapiens.
teeth were larger than those of modern humans.
Limbs and pelvis
limb bones were thick and had large joints which indicates they had strongly muscled arms and legs
shin bones and forearms tended to be shorter than those of modern humans. These proportions are typical for people living in cold climates.
pelvis was wider from side to side than in modern humans and this may have slightly affected their posture
DNA and biomolecular studies
Neanderthals are our only ancestors to have had studies performed on their DNA and other biomolecules. Although numerous studies have been undertaken since the first was published in 1997 (on mitochondrial DNA), the most significant is the publication in 2009 of the rough draft of the Neanderthal genome.
Other key findings from a variety of studies include the discovery of: a gene for red hair and fair skin (2007); the FOXP2 gene, related to language ability, which was the same as in modern humans; and type O blood in two males from Spain (2008)
Neanderthals' lifestyle
Neanderthal culture
Evidence shows that Neanderthals had a complex culture although they did not behave in the same ways as the early modern humans who lived at the same time. Scholars debate the degree of symbolic behaviour shown by Neanderthals as finds of art and adornment are rare, particularly when compared to their modern human contemporaries who were creating significant amounts of cave paintings, portable art and jewellery. Some researchers believe they lacked the cognitive skills to create art and symbols and, in fact, copied from or traded with modern humans rather than create their own artefacts. However, others suggest the scarcity may have been due to social and demographic factors.
Tools
The Neanderthals had a reasonably advanced tool kit classified as Mode 3 technology that was also used by early members of our own species, Homo sapiens. This was also known as the Mousterian, named after the site of Le Moustier. At the end of their long history in Europe, they began manufacturing a more refined toolkit (known as the Chatelperronian), similar to the blade tools of Homo sapiens. This occurred at about the same time as modern humans entered Europe. Many archaeologists think that the Neanderthals were attempting to copy the types of tools that they observed modern humans making. Alternatively, they may have obtained these tools by trading with the modern humans.
Fire, shelter and clothing
The Neanderthals built hearths and were able to control fire for warmth, cooking and protection. They were known to wear animal hides, especially in cooler areas. However, there is no physical evidence that Neanderthal clothing was sewn together, and it may have simply been wrapped around the body and tied.
Caves were often used as shelters but open air shelters were also constructed.
Art and decoration
Neanderthals left behind no known symbolic art and only limited evidence for body decoration. One of the few decorative items found at a Neanderthal site is a pendant from Arcy-sur-Cure in France, found amongst bone tools and other artefacts that were attributed to a culture known as Chatelperronian (which most researchers consider Neanderthal). However, redating of the site's layers in 2010 suggests contamination occurred between layers and that the artefact may have been made by modern humans, as they also occupied this site in later times. There is only one other undisputed Chatelperronian site that has yielded personal ornaments, and even these may have been obtained by trade with modern humans (Homo sapiens), or been made in imitation of artefacts made by modern humans.
In 2010 researchers uncovered artefacts at two sites in Spain - Anton rock shelter and Aviones cave - that provide indirect evidence of symbolic art. The former held naturally-perforated scallop shells painted with orange pigments and the latter a cockleshell that may have been used as a paint container as it had residue of red and black pigments. The Aviones finds date to between 45-50,000 years ago, which is before modern humans arrived in Europe so could not have been copied from them.
Burials
The dead were often buried, although there is no conclusive evidence for any ritualistic behaviour. However, at some sites, objects have been uncovered that may represent grave goods.
Environment and diet
This species occupied a range of environments across Europe and the Middle East and lived through a period of changing climatic conditions. Ice Ages in Europe were interspersed with warmer periods but by 110,000 years ago average temperatures were on the decline and full glacial conditions had appeared by 40,000 years ago.
There is evidence that the Neanderthals hunted big game and chemical analysis of their fossils shows that they ate significant amounts of meat supplemented with vegetation. Despite this mixed diet, nearly half of the Neanderthal skeletons studied show the effects of a diet deficient in nutrients.
Researchers have long debated whether Neanderthals also included human meat in their diets. It is not always easy to determine if cut marks on human bones are due to cannibalism, some other practice or even animal teeth, but in recent years new evidence has emerged that suggests some Neanderthals may indeed have been cannibals on occasions.
At the site of Krapina Cave in Croatia, over 800 Neanderthal bones show evidence of cut marks and hammerstone fragments. The marrow-rich bones are missing and the marrow-poor bones are all intact. Some argue that the evidence is inconclusive as the fragmentation of bones may have been caused by cave-ins and the bone cuts are different to the marks seen on reindeer bones. They claim the cut marks could be from secondary burial practices.
Bones from Abri Moula in France show cut marks typical of butchery rather than simple ritual defleshing. The marks were also like those on the bones of roe deer, assumed to be food, found in the same shelter.
The cave of El Sidron in Spain yielded hundreds of Neanderthal bones with cut marks, deliberate breaks for marrow extraction, and other signs that the bodies had been butchered for flesh in the same way as animals.
What happened to the Neanderthals?
Neanderthals persisted for hundreds of thousands of years in extremely harsh conditions. They shared Europe for 10,000 years with Homo sapiens. Today they no longer exist. Beyond these facts the fate of Neanderthals has generated much debate.
Two main theories
Theory 1: They interbred with Homo sapiens sapiens on a relatively large scale. Followers of this theory believe that although Neanderthals as organisms no longer exist their genes were present in early modern Europeans and may still exist today. Interbreeding diluted Neanderthal DNA because there were significantly more Homo sapiens sapiens. Neanderthals were a sub-species of Homo sapiens rather than a separate species and hence their scientific name is Homo sapiens neanderthalensis.
Proponents of this theory cite the following as evidence:
there are features of Neanderthals in some Cro-Magnon (Homo sapiens) populations. For instance, the discoverers of the 24,000-year-old skeleton of a modern human boy from Lagar Velho in Portugal argue that although the pelvis and facial morphology are sapiens-like, the robusticity and limb proportions are more Neanderthal-like. As the age of the skeleton is later than the time of the last known Neanderthal, these features must represent significant interbreeding and transmission of DNA between modern humans and Neanderthals. Cro-Magnon remains from Vogelherd in Germany and Mladec in the Czech Republic also exhibit a Neanderthal-like projection of the occipital bun at the back of the skull, more so than in later Homo sapiens.
there are modern features in later Neanderthal populations. The Vindija Neanderthals look more modern than do other Neanderthals, which suggests that they may have interbred with incoming Homo sapiens.
there are features of Neanderthals in modern Europeans. Some Europeans living today have a similar shaped mandibular foramen (nerve canal in lower jaw) to the Neanderthals and the distinct retromolar gap (typical of Neanderthals) appears in isolated modern European populations.
Theory 2: They were essentially replaced by Homo sapiens. In this case, Neanderthals are a separate species from Homo sapiens. This model does allow for peripheral interbreeding but no significant genetic input from Neanderthals to modern Europeans.
Proponents of this theory cite the following as evidence:
studies of Neanderthal mitochondrial DNA (first extracted in 1997) show that it lies outside the range of modern human mtDNA. Neanderthal mtDNA is four times older than that of Homo sapiens, hence scientists postulate a Neanderthal split from the line leading to modern humans about 500-600,000 years ago. The studies also reveal that Neanderthal mtDNA is no closer to modern European mtDNA than moderns from any other part of the world.
analysis of the draft Neanderthal genome (the nuclear DNA and genes), released in 2010, shows that modern human and Neanderthal lineages began to diverge about 600,000 years ago. It also indicates that there was small-scale interbreeding as non-Africans derive about 1-4% of their DNA from Neanderthals. These results challenge the simplest version of 'Out of Africa' (which claims no interbreeding in its model for modern human origins) but do support the view that the vast majority of genes of non-Africans came with the spread of modern humans that originated in Africa.
studies of the facial growth patterns of young Neanderthals show they developed in distinct ways to Homo sapiens. The differences are therefore deeply genetic, contradicting the evidence of the Lagar Velho boy. The distinctive features include brow ridges, chins, forehead and facial protrusion.
Why did they become extinct?
Various reasons have been proposed for the ‘replacement’ of Neanderthals by modern humans. Today, most theories accept that Neanderthals displayed advanced behaviours and adaptive strategies and were not sluggish brutes that stood no chance against the vastly superior Homo sapiens. However, the incoming Homo sapiens were doing something that was different enough, and just that little bit more superior, to give them an edge under the circumstances. Exactly what was 'a little bit more superior' is debated. Of particular interest are a number of new studies that focus on the role of climate change and the subtle differences that behaviour and biology play in these conditions.
Perhaps their extinction was a combination of two or more of the following factors:
Biological
Neanderthal reproductive success and survival rates appear poor compared to Homo sapiens. Most Neanderthal remains are of individuals rarely over 30 years old and over half are children. Slightly better rates of reproductive success and childhood survival over 10,000 years could be all it took for Homo sapiens to replace Neanderthals.
Neanderthal metabolic rates appeared to be much higher than modern humans so would have required more food to survive. In situations of plenty this would make little difference, but in severe winters or unstable climatic conditions (see below), the dwindling resources would put pressure on populations that needed large amounts of energy from food.
Claims that Neanderthals could not run as well as modern humans over long distances are supported by evidence from Neanderthal ankles. Their heel bones are longer than modern humans', resulting in a longer Achilles tendon. Shorter Achilles tendons, as in modern humans, store more energy so are more efficient for running. Neanderthals generally didn't need to be good long-distance runners as they hunted in cooler regions using ambush tactics, but when conditions changed this could prove a huge disadvantage. Evidence suggests this happened 50,000 years ago as much of northern Europe changed from forest to tundra due to advancing ice sheets. Neanderthals were forced into isolated forest refuges in southern areas while modern humans adapted to hunting on the increasingly widespread tundra.
Social and behavioural
Neanderthal culture lacks the depth of symbolic and progressive thought displayed by modern humans and this may have made competing difficult. Neanderthal culture remained relatively static whereas the contemporary Homo sapiens were steadily evolving a complex culture. By the time Homo sapiens arrived in Europe 40,000 years ago they had a highly developed cultural system. This is despite the fact that 100,000 years ago there was relatively little cultural difference between the two species in the archaeological record.
Neanderthals may have had limited speech and language capabilities compared to Homo sapiens and the extent of the differences may have played a role in their extinction. For instance, studies of the base of the skull suggest limited Neanderthal repertoire and the position of the tongue in the mouth and larynx is also different from Homo sapiens. (This is a highly contentious theory with scientists on both sides strongly arguing for or against).
Neanderthals may have lacked the adaptive nature of modern humans who had complex social networks across wide areas. Smaller populations of Neanderthals that tended to stay in limited areas may have made them vulnerable to local extinctions.
The survival techniques of Neanderthals were not as developed as those of Homo sapiens. For instance, studies on stress and build-up of tissue in Neanderthal bones indicate they may have lacked systematic and directional planning in procuring food. This Neanderthal predominance of ‘brawn over brain’ may also be reflected in the number of skeletal injuries seen in both sexes, probably from close range hunting. Other studies show that 40% of Neanderthal remains have hypoplasia, a condition caused by lack of nutrients in early childhood. This is supported by tests on Neanderthal bone collagen which indicate that meat was very significant in Neanderthal diets, to the point that they may have been lacking the nutrients from other sources used by Homo sapiens, especially fresh water products and vegetable matter.
Neanderthals may not have used their brains the way modern humans do as their brains were shaped differently - modern human brains have expanded parietal and cerebellar regions. These regions develop in the first year of life (Neanderthal infants appear to miss this stage of development) and are linked to key functions like the ability to integrate sensory information and form abstract representations of surroundings.
Possible violent interactions with modern humans.
Environment or climate
New data on the glacial period that occurred from about 65,000 to 25,000 years ago (known as OIS-3) shows that it was a period of rapid, severe and abrupt climate changes with profound environmental impacts. Although Neanderthals were physically adapted to the cold, the severe changes in conditions (within individuals' lifetimes in many cases) allowed no time for populations to recover. Even small advantages in biology, behaviour or lifestyle, such as those mentioned above, would mean the difference between life and death. The archaeological record indicates that modern humans had a wider range of adaptations which would have helped in survival.
There is another angle to the climate change theory. Evidence based on extensive surveys of sites in Europe suggests that Neanderthal replacement was not due to direct competition with modern humans. Instead, evidence suggests that the severe conditions made the continent inhospitable for all humans living in Europe - and all populations died out about 30-28,000 years ago. However, there were other modern human populations living in Africa that were able to recolonise Europe at a later date. As there were no Neanderthal populations elsewhere, they became extinct.
|
Brain
brain size was larger than the average modern human brain and averaged 1500 cubic centimetres. This is expected, as Neanderthals were generally heavier and more muscular than modern humans. People that live in cold climates also tend to have larger brains than those living in warm climates.
Skull
distinctive skull shape that was long and low, with a rounded brain case
back of the skull had a bulge called the occipital bun and a depression (the suprainiac fossa) for the attachment of strong neck muscles
thick but rounded brow ridge lay under a relatively flat and receding forehead
mid-face region showed a characteristic forward projection (this resulted in a face that looked like it had been ‘pulled’ forward by the nose)
orbits (eye sockets) were large and rounded
nose was broad and very large
Jaws and teeth
jaws were larger and more robust than those of modern humans and had a gap called the retromolar space, behind the third molars (wisdom teeth) at the back of the jaw.
jaw lacked the projecting bony chin that is found in Homo sapiens.
teeth were larger than those of modern humans.
Limbs and pelvis
limb bones were thick and had large joints which indicates they had strongly muscled arms and legs
shin bones and forearms tended to be shorter than those of modern humans. These proportions are typical for people living in cold climates.
pelvis was wider from side to side than in modern humans and this may have slightly affected their posture
DNA and biomolecular studies
Neanderthals are our only ancestors to have had studies performed on their DNA and other biomolecules. Although numerous studies have been undertaken since the first was published in 1997 (on mitochondrial DNA), the most significant is the publication in 2009 of the rough draft of the Neanderthal genome.
Other key findings from a variety of studies include the discovery of: a gene for red hair and fair skin (2007); the FOXP2 gene, related to language ability,
|
yes
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
yes_statement
|
"neanderthals" had larger "brains" than "modern" "humans".. "modern" "humans" had smaller "brains" than "neanderthals".
|
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3619466/
|
New insights into differences in brain organization between ...
|
Abstract
Previous research has identified morphological differences between the brains of Neanderthals and anatomically modern humans (AMHs). However, studies using endocasts or the cranium itself are limited to investigating external surface features and the overall size and shape of the brain. A complementary approach uses comparative primate data to estimate the size of internal brain areas. Previous attempts to do this have generally assumed that identical total brain volumes imply identical internal organization. Here, we argue that, in the case of Neanderthals and AMHs, differences in the size of the body and visual system imply differences in organization between the same-sized brains of these two taxa. We show that Neanderthals had significantly larger visual systems than contemporary AMHs (indexed by orbital volume) and that when this, along with their greater body mass, is taken into account, Neanderthals have significantly smaller adjusted endocranial capacities than contemporary AMHs. We discuss possible implications of differing brain organization in terms of social cognition, and consider these in the context of differing abilities to cope with fluctuating resources and cultural maintenance.
Keywords: Neanderthals, brain's orbits, body mass, social cognition
1. Introduction
Comparisons of the morphology of Neanderthal and anatomically modern human (AMH) brains have previously identified a number of similarities, for example, in the degree of asymmetry and gyrification [1], as well as non-allometric widening of the frontal lobes [2]. However, differences in brain morphology have also been noted. For instance, in addition to their uniquely globular brain shape [3], it has recently been reported that the temporal pole is relatively larger and more forward-projecting, the orbitofrontal cortex being relatively wider and the olfactory bulbs being larger in AMHs compared with other hominins, including Neanderthals [4]. In addition, Neanderthals show lateral widening but overall flattening of their parietal lobes, whereas AMHs have uniform parietal surface enlargement [5]. These differences have led to the suggestion that Neanderthals and AMHs reached similarly enlarged brains along divergent developmental [3] and evolutionary [6] pathways.
Most of the work on Neanderthal and fossil AMH brains has relied on endocasts and the internal morphology of the cranium. However, this approach is limited to investigating external surface features and the overall shape and size of the brain, and provides no information about internal brain organization. For instance, identification of the lunate sulcus on endocast surfaces is highly ambiguous, making the size of the primary visual area (V1) difficult to measure from endocasts. An alternative approach has been to use known relationships between overall brain volume and the volume of specific brain areas in extant primates to estimate the respective brain region volumes in fossil crania [7]. While this approach is broadly reliable, it does assume that identical overall brain volumes imply identical brain organization. Although there are conserved patterns of scaling in brain organization across mammals [7,8], this is not always the case: the size of the visual system, for example, varies independently of the size of other sensory systems [9,10]. More importantly, there are well-known examples among primates where mosaic brain evolution has resulted in brain regions that are significantly smaller or larger than would be predicted by overall brain size alone. Among the great apes, for instance, gorillas and orang-utans have unusually large cerebella and relatively small neocortices [11]. Assuming that similar endocranial volumes equate with identical organization within the brain may be seriously misleading.
We hypothesize that the similarly sized brains of Neanderthals and AMHs were organized differently for at least two reasons. First, Neanderthals had larger bodies than AMHs and, hence, they would have required proportionately more neural matter for somatic maintenance and control [1,12]. Second, Neanderthals lived at high latitudes, where they would have experienced lower light levels than tropical hominins. Since even recent humans living at high latitudes require larger eyeballs to attain the same level of visual acuity and/or sensitivity as individuals living at lower latitudes [13], Neanderthals would probably have had larger eyes than contemporary AMHs, who had only recently emerged from low-latitude Africa. Components of the visual system scale with each other, from orbits and eyes [14,15] (contra [16]) through to the cortical primary (V1) and downstream visual areas (V2, V3 and V5/MT) in the brain [9,15,17–22]. This means that if Neanderthals had larger eyes than AMHs, they would also have had larger visual cortices. For a meaningful comparison of brain volume, fossil brains need to be adjusted for at least these two effects.
Here, we first show, using orbit size as a proxy [13,15], that Neanderthals had larger visual systems than contemporary AMHs. We then examine the implications of this and the difference in body size for the brains of these taxa.
2. Material and methods
We used endocast volumes for 21 Neanderthals and 38 AMHs dated 27–200 ka [23] (see the electronic supplementary material, table S1). We excluded AMHs younger than 27 ka so that the AMH specimens were as close in time to the Neanderthal specimens as possible. We excluded from the analyses specimens dated earlier than 200 ka as being taxonomically arguable, but include them for reference in figure 1. Using alternative cranial volume databases [24,25] produces essentially the same results, indicating that our findings are robust to discrepancies in volume determinations. However, we present only the analyses using the endocast dataset here.
Figure 1. (a) Absolute and (b) standardized endocranial volumes for different hominin taxa, split into date groups (given in thousands of years ago: ka). The three boxes represent (i) Homo heidelbergensis (Hh) and possible Denisovans (?D), (ii) the Neanderthal lineage, from archaics (AHn) to Neanderthals (Hn) dated 76–200 ka and 27–75 ka, and (iii) the Homo sapiens lineage, from archaics (AHs) to AMHs (Hs) dated 76–200 ka and 27–75 ka. Circles indicate the value for each individual fossil specimen. The horizontal bars show group means ± the s.e.m. (cumulative for b). The light grey shading illustrates that Neanderthals dated 27–75 ka have the same sized brains as Homo sapiens in terms of absolute endocranial volumes, but that once body and visual system sizes are taken into account, the Neanderthal means lie outside the standard errors of the AMH means. The dashed lines in (b) illustrate the AMH means for both date groups, to ease comparison with the Neanderthal means.
In order to standardize the AMH and Neanderthal endocranial volumes in terms of body mass, we calculated the ratio between living human and fossil mean body masses for each of the fossil date groups in a study by Ruff et al. [26] and use this ratio to scale for body size. We multiplied the absolute brain volume of each fossil in our endocast dataset [23] by this date-group-specific body mass correction factor to give the equivalent endocranial volume expected for a living human. Since the relationship between brain and body size is not isometric, we used body mass raised to the 0.646 power in this calculation, which Isler et al. [27] report as the common slope for ln-transformed endocranial volume regressed on body mass across primates once grades are taken into account.
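As a rough illustration of this standardization step (not taken from the paper's own code), the sketch below applies a correction factor of (living mass / fossil mass) raised to the 0.646 power to an absolute endocranial volume. The body masses and the example volume are placeholder values; only the exponent and the form of the correction come from the text above.

```python
# Sketch of the body-mass standardization described above.
# The 0.646 exponent is the brain-body scaling slope cited from Isler et al. [27];
# the body masses and example volume are placeholders, not the values from Ruff et al. [26].

LIVING_HUMAN_MASS_KG = 65.0   # hypothetical living-human mean body mass
FOSSIL_GROUP_MASS_KG = 76.0   # hypothetical fossil date-group mean body mass


def body_mass_corrected_volume(endocranial_volume_cc: float,
                               fossil_mass_kg: float = FOSSIL_GROUP_MASS_KG,
                               living_mass_kg: float = LIVING_HUMAN_MASS_KG,
                               exponent: float = 0.646) -> float:
    """Rescale a fossil endocranial volume to the volume expected for a living-human body mass."""
    correction_factor = (living_mass_kg / fossil_mass_kg) ** exponent
    return endocranial_volume_cc * correction_factor


# Example: a 1500 cc endocast rescaled to living-human body size.
print(round(body_mass_corrected_volume(1500.0), 1))
```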
Orbital areas were from Kappelman [28], orbital height (OBH) and breadth (OBB) measurements were supplied by C.S., and orbital volumes were measured by E.P. (see [13] for more details of the volume method). Volumes did not include brow-ridges, since the orbits were filled with beads up to a line continuous with the lateral and medial rims, following Schultz [14]. In any case, brow-ridge size is not related to orbital volume across primates [29].
We calculated visual cortex corrections using a number of computational steps, which we summarize before giving details below. Insufficient volumetric data are available for primate visual areas beyond V1 to produce an equation relating orbital size directly to total visual cortex volume. Instead, we (i) estimated fossil visual cortical surface areas from OBH (the measurement for which we had most fossil data) using a primate-derived equation, and (ii) converted these into volumes by multiplying by cortex depth. We calculated this depth by (iii) assuming a 2 mm thickness for grey matter (i.e. multiplying the surface area in mm2 by 2 mm to give volume in mm3; details below) and (iv) using a primate equation to estimate white matter volume from grey matter volume. We then (v) totalled grey and white matter volumes to give ‘total’ visual cortex volume.
We used OBH to estimate the total visual surface area for each individual fossil in the endocast volume dataset for which C.S. provided data, and converted these surfaces into total combined grey and white matter volumes. First, we calculated total grey matter volume by multiplying the surface area estimates by 2 mm, which is a reasonable estimate of cortical grey matter thickness in both humans [36–38] and macaques [38]. We then estimated white matter volume (cm3) from grey matter volume (cm3) using a reduced major axis (RMA) regression equation fitted to anthropoid primate neocortical data from [39] (t19 = 56.28, p < 0.001, r2 = 0.994): log10(white) = −0.81 (95% CI: −0.72 to −0.93) + 1.32 (95% CI: 1.24 to 1.43) × log10(grey). These parameters do not fall outside the 95% confidence intervals of those associated with a PGLM regression model using a tree from the 10k Trees Project. However, we chose to use the RMA model because this is more appropriate when there is measurement error on both axes and the relationship between variables is symmetrical [40]. Finally, we summed the estimated white matter volumes with the grey matter volume estimates to get total visual cortex volume.
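The grey- and white-matter steps described above can be sketched as follows. The visual cortical surface area is treated as a given input (the OBH-to-surface-area equation is not reproduced in this excerpt), the 2 mm grey matter thickness and the RMA coefficients are the values quoted above, and the example surface area is hypothetical.

```python
import math


def total_visual_cortex_volume_cm3(visual_surface_area_mm2: float) -> float:
    """Estimate total (grey + white) visual cortex volume from an estimated
    visual cortical surface area, following the steps described above."""
    # (iii) grey matter volume: surface area x 2 mm cortical thickness, converted mm3 -> cm3
    grey_cm3 = visual_surface_area_mm2 * 2.0 / 1000.0
    # (iv) white matter volume from the RMA regression quoted above
    white_cm3 = 10 ** (-0.81 + 1.32 * math.log10(grey_cm3))
    # (v) total visual cortex volume
    return grey_cm3 + white_cm3


# Example with a hypothetical surface-area estimate of 20,000 mm2.
print(round(total_visual_cortex_volume_cm3(20_000.0), 1))
```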
Once we had total visual cortex volume estimates for each fossil, we calculated the residual to the AMH mean for each date group for each Neanderthal fossil. We also calculated the difference between the species’ means in each date group (27–75 ka, 118 cm3; 76–200 ka, 59 cm3).
After correcting for body mass differences as outlined above, we standardized the Neanderthal endocranial volumes by subtracting the difference in visual cortex volume between AMHs and Neanderthals (individual residuals or differences between date-specific means where OBH was unavailable) from the respective Neanderthal values. In other words, all Neanderthal endocranial volumes were recalibrated as if they were organized in the same way as the average AMH brain (i.e. without enlarged visual cortices), while also taking individual differences within the Neanderthal species into account as far as possible.
The computation of these standardized brain volumes requires a series of steps, which each introduce error into the subsequent estimates. In this study, we are interested in comparisons between taxon means: rather than being interested in the error accrued for the standardized brain estimate for each individual specimen, what is crucial is the compounded error attached to the calculated mean per taxon group. In other words, the central issue is the parameter (taxon variance) error rather than the population error. We therefore calculated the cumulative standard error of the mean standardized endocranial volume for each taxon/date group by taking, at each computational step, not only the mean estimate but also the mean estimate plus or minus the standard error (s.e.; for means) or standard error of the estimate (s.e.e. for regressions, following Ruff et al. [41]). We then took these three values—(i) mean estimate, (ii) mean estimate plus s.e./s.e.e. (iii) mean estimate minus s.e./s.e.e.—into the next step of the estimation process. Once we had calculated the standardized endocranial volumes, we then calculated the sample mean for the mean, upper (cumulative ‘mean plus s.e./s.e.e.’ estimates) and lower (cumulative ‘mean minus s.e./s.e.e.’ estimates) estimates for each of the taxon and date group subgroups of specimens. The means of the upper and lower estimates for the sample were taken as values representing the mean ± the cumulative s.e. We plot these cumulative standard error bars in figure 1b.
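A minimal sketch of this carry-the-error-forward bookkeeping is given below; the transformation steps and error magnitudes are hypothetical, and only the strategy of propagating the mean, the mean plus s.e./s.e.e. and the mean minus s.e./s.e.e. in parallel is taken from the text.

```python
# Illustrative error-propagation bookkeeping: carry (mean, mean + s.e., mean - s.e.)
# through each estimation step. The transformations and error magnitudes are hypothetical.

def propagate(estimate, transform, step_error):
    mean, upper, lower = estimate
    return (transform(mean),
            transform(upper) + step_error,
            transform(lower) - step_error)


# Hypothetical raw endocranial volume (cm3), no error attached yet.
estimate = (1500.0, 1500.0, 1500.0)

# Step 1: hypothetical body-mass correction with an assumed s.e.e. of 25 cm3.
estimate = propagate(estimate, lambda v: v * 0.90, step_error=25.0)

# Step 2: hypothetical visual-cortex adjustment with an assumed s.e.e. of 15 cm3.
estimate = propagate(estimate, lambda v: v - 118.0, step_error=15.0)

mean, upper, lower = estimate
print(f"standardized volume ~ {mean:.0f} cm3 (+{upper - mean:.0f} / -{mean - lower:.0f})")
```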
One-sample Kolmogorov–Smirnov tests with Lilliefors corrections found no deviations from normality for any variables. Effect sizes are reported as R2.
3. Results
Independent-sample t-tests applied to three different datasets confirmed that Neanderthals had significantly larger orbits than contemporary AMHs (table 1). This implies that Neanderthals also had significantly larger eyeballs and visual cortices.
Table 1.
Orbital dimensions compared between anatomically modern humans (AMHs) and Neanderthals from all date groups.
Comparison of ‘corrected’ or ‘standardized’ endocranial volumes shows that adjusting for differences in body and visual system size results in the disparity between the two Neanderthal date groups disappearing (figure 1b). In effect, the younger Neanderthals (27–75 ka) show no increase in non-somatic/non-visual brain size compared with the older Neanderthals (76–200 ka) and archaic humans. These younger ‘standardized’ Neanderthal endocranial volumes are significantly smaller than those of contemporary AMHs within the 27–75 ka date group (table 2). This suggests that later Neanderthal brains comprised a significantly larger proportion of neural tissue associated with somatic and visual function compared with the brains of contemporary AMHs.
4. Discussion
We have demonstrated that Neanderthals had significantly larger orbits than contemporary AMHs, which, owing to scaling between the components of the visual system, suggests that Neanderthal brains contained significantly larger visual cortices. This is corroborated by recent endocast work, which found that Neanderthal occipital lobes are relatively larger than those of AMHs [42]. In addition, previous suggestions that large Neanderthal brains were associated with their high lean body mass [1,43,44] imply that Neanderthals also invested more neural tissue in somatic areas involved in body maintenance and control compared with those of contemporary AMHs.
In recent humans (dated to the last approx. 200 years), larger visual systems translate into larger brains [13]. We might therefore expect that larger Neanderthal visual cortices (and somatic areas) would similarly drive overall brain enlargement in this taxon compared with AMHs. However, we have shown that this is not the case for specimens dated 27–75 ka; Neanderthals in this date group do not show significantly larger brains than contemporary AMHs. This suggests that (i) Neanderthal and AMH brains were organized differently, and, (ii) by implication, because a greater proportion of the overall brain tissue in Neanderthals was invested in visual and somatic systems, proportionally less neural tissue was left over for other brain areas in Neanderthals compared with AMHs. Note that our analysis considered only the principal visual areas in the occipital lobe: given that the visual system projects through the parietal and temporal to the frontal lobes, our case could only be strengthened if comparative data on these projection areas were available and could be included in the analysis.
Overall, our findings tie in with the suggestion that the Neanderthal and AMH lineages underwent separate evolutionary trajectories [6]. Starting from the brain size of their common ancestor Homo heidelbergensis, we suggest that Neanderthals enlarged their visual and somatic regions, whereas AMHs achieved similarly large brains by increasing other brain areas (including, for example, their parietal lobes) [5]. Furthermore, it seems that the Neanderthal route followed a more strictly allometric trajectory [6]. Human primary visual areas are smaller than expected for a primate of our brain size [45]; larger Neanderthal V1s may thus be more in line with the expectations for a generic large-brained primate, adding support to this argument.
Macroscopic measures such as regional volume index neural network characteristics such as the number of neurons and synapses [17,46,47]. Consequently, the differences in the partitioning of brain tissue might have substantial implications for cognitive processing in Neanderthals compared with contemporary AMHs. For instance, there is a well-established relationship between brain and bonded group size across anthropoid primates [48–57], as well as between specific areas of the frontal lobe and active social network (total number of personal contacts) size at the within-species individual level in both macaques [58] and, more importantly, humans [59–61]. In addition, neuroimaging studies have shown that this relationship between key brain region volumes and group size is mediated by mentalizing (theory of mind) competences [42]. In humans, the total network/group size comprises a number of nested layers [62] and the greatest number of minds an individual can keep track of at any one time constrains the size of their most intimate support clique [63], which in turn sets a limit on the overall group size that they can maintain [62,64–66]. The mean size of the active network for living humans predicted by cross-primate neocortex ratio comparisons has been corroborated across not only historical and modern traditional subsistence societies, but also in online social environments [54,67,68]. This suggests that throughout human evolution, brain structure and cognitive function have placed a constraint on bonded group size and social complexity.
While we cannot partition fossil brains down to the refinement of specific frontal regions, there is at least sufficient evidence from comparative studies of primates [69–71] to justify using whole brain volumes to estimate cognitive capacities as a first step. To do so, we followed Dunbar [54] in using an ape PGLM equation to predict group size from the standardized endocranial volumes. Note that this equation predicts a group size of approximately 144 for living humans rather than the 150 predicted by neocortex ratio, because using cranial volumes results in additional estimation error by including brain regions, such as the visual system, that are not part of the social or mentalizing network [11], and this results in a shallower slope. However, we use cranial volume here because it reduces the number of interpolation steps, and so reduces the cumulative error variance on the fossil estimates.
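As an illustration of this final step, the sketch below predicts group size from a standardized endocranial volume using a log-log ape-grade regression. The intercept and slope are hypothetical placeholders chosen only so that a roughly AMH-sized standardized volume returns a group size near the 144 mentioned above; they are not the coefficients actually used by Dunbar [54].

```python
import math

# Hypothetical coefficients for an ape-grade ln-ln regression of group size on
# cranial volume; NOT the published values used by Dunbar [54].
INTERCEPT = -0.44
SLOPE = 0.75


def predicted_group_size(standardized_volume_cc: float) -> float:
    """Predict cognitive group size from a standardized endocranial volume (cm3)."""
    return math.exp(INTERCEPT + SLOPE * math.log(standardized_volume_cc))


# Example: an AMH-like standardized volume versus a smaller Neanderthal-like one.
for volume_cc in (1350.0, 1050.0):
    print(volume_cc, round(predicted_group_size(volume_cc)))
```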
Neanderthals dated 27–75 ka were predicted to have had smaller cognitive group sizes (M = 115, s.d. = 19, n = 13) than contemporary fossil AMHs, whereas fossil AMHs (M = 139, s.d. = 15, n = 32) seem to have had group sizes more in line with those demonstrated for the mean personal network sizes of living humans (figure 2). What little archaeological evidence there is offers support for this: compared with Neanderthals, contemporary Eurasian AMHs had larger [72], more geographically extensive social networks [73,74]. Group size is a convenient index of the cognitive ability to deal with increasing social complexity and may thus evidence more general differences in sociocognitive abilities between these taxa.
Figure 2. Group sizes estimated from standardized endocranial volumes for Neanderthals and AMHs in the 27–75 ka date group. The dashed line indicates the mean group size expected for living humans based on the size of their neocortex (150 individuals).
Such differences may have had profound implications for Neanderthals. First, assuming similar densities, the area covered by the Neanderthals' extended communities would have been smaller than those of AMHs. Consequently, the Neanderthals' ability to trade for exotic resources and artefacts would have been reduced [75], as would their capacity to gain access to foraging areas sufficiently distant to be unaffected by local scarcity [76]. Furthermore, their ability to acquire and conserve innovations may have been limited as a result [77], and they may have been more vulnerable to demographic fluctuations, causing local population extinctions.
Whereas AMHs appear to have concentrated neural investment in social adaptations to solve ecological problems, Neanderthals seem to have adopted an alternative strategy that involved enhanced vision coupled with retention of the physical robusticity of H. heidelbergensis, but not superior social cognition. For instance, only in Neanderthals, not AMHs, does body mass [26], and hence brain volume [78], increase over time. While the physical response to high latitude conditions adopted by Neanderthals may have been very effective at first, the social response developed by AMHs seems to have eventually won out in the face of the climatic instability that characterized high-latitude Eurasia at this time.
Acknowledgements
We would like to thank Kit Opie and Susanne Shultz for useful comments throughout this work, Tim Holden for mathematical advice, Anna Frangou and Thomas Woolley for coding advice, and the Oxford Museum of Natural History, Duckworth Collection (University of Cambridge) and London Museum of Natural History for allowing access to their collections. The research was funded by the Boise Fund, University of Oxford and the British Academy Lucy to Language Project. C.S. is a member of the Ancient Human Occupation of Britain Project, funded by the Leverhulme Trust, and his research is supported by the Human Origins Research Fund and the Calleva Foundation. R.I.M.D. and E.P. are funded by a European Research Council Advanced grant.
|
In effect, the younger Neanderthals (27–75 ka) show no increase in non-somatic/non-visual brain size compared with the older Neanderthals (76–200 ka) and archaic humans. These younger ‘standardized’ Neanderthal endocranial volumes are significantly smaller than those of contemporary AMHs within the 27–75 ka date group (table 2). This suggests that later Neanderthal brains comprised a significantly larger proportion of neural tissue associated with somatic and visual function compared with the brains of contemporary AMHs.
4. Discussion
We have demonstrated that Neanderthals had significantly larger orbits than contemporary AMHs, which, owing to scaling between the components of the visual system, suggests that Neanderthal brains contained significantly larger visual cortices. This is corroborated by recent endocast work, which found that Neanderthal occipital lobes are relatively larger than those of AMHs [42]. In addition, previous suggestions that large Neanderthal brains were associated with their high lean body mass [1,43,44] imply that Neanderthals also invested more neural tissue in somatic areas involved in body maintenance and control compared with those of contemporary AMHs.
In recent humans (dated to the last approx. 200 years), larger visual systems translate into larger brains [13]. We might therefore expect that larger Neanderthal visual cortices (and somatic areas) would similarly drive overall brain enlargement in this taxon compared with AMHs. However, we have shown that this is not the case for specimens dated 27–75 ka; Neanderthals in this date group do not show significantly larger brains than contemporary AMHs. This suggests that (i) Neanderthal and AMH brains were organized differently, and, (ii) by implication, because a greater proportion of the overall brain tissue in Neanderthals was invested in visual and somatic systems, proportionally less neural tissue was left over for other brain areas in Neanderthals compared with AMHs.
|
no
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
yes_statement
|
"neanderthals" had larger "brains" than "modern" "humans".. "modern" "humans" had smaller "brains" than "neanderthals".
|
https://www.nature.com/articles/d41586-022-02895-2
|
Did this gene give modern human brains their edge?
|
Human and Neanderthal brains were roughly the same size. Credit: Adapted from Alamy
More than 500,000 years ago, the ancestors of Neanderthals and modern humans were migrating around the world when a pivotal genetic mutation caused some of their brains to improve suddenly. This mutation, researchers report in Science [1], drastically increased the number of brain cells in the hominins that preceded modern humans, probably giving them a cognitive advantage over their Neanderthal cousins.
“This is a surprisingly important gene,” says Arnold Kriegstein, a neurologist at the University of California, San Francisco. However, he expects that it will turn out to be one of many genetic tweaks that gave humans an evolutionary advantage over other hominins. “I think it sheds a whole new light on human evolution.”
When researchers first reported the sequence of a complete Neanderthal genome in 2014 [2], they identified 96 amino acids — the building blocks that make up proteins — that differ between Neanderthals and modern humans, as well as some other genetic tweaks. Scientists have been studying this list to learn which of these changes helped modern humans to outcompete Neanderthals and other hominins.
Cognitive advantage
To neuroscientists Anneline Pinson and Wieland Huttner at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, one gene stood out. TKTL1 encodes a protein that is made when a fetus’s brain is first developing. A mutation in the human version changed one amino acid, resulting in a protein that is different from those found in hominin ancestors, Neanderthals and non-human primates.
The researchers suspected that this protein could increase the proliferation of neural progenitor cells, which become neurons, as the brain develops, specifically in an area called the neocortex — a region involved in cognitive function. This, they reasoned, could contribute to modern humans’ cognitive advantage.
To test their theory, Pinson and her team inserted either the human or the ancestral version of TKTL1 into the brains of mouse and ferret embryos [1]. The animals with the human gene had significantly more neural progenitor cells. When the researchers engineered neocortex cells from a human fetus to produce the ancestral version, they found that the fetal tissue produced fewer progenitor cells and fewer neurons than it normally would. The same was true when they inserted the ancestral version of TKTL1 into brain organoids — mini brain-like structures grown from human stem cells.
Brain size
Fossil records suggest that human and Neanderthal brains were roughly the same size, meaning that the neocortices of modern humans are either denser or take up a larger portion of the brain. Huttner and Pinson were surprised that such a small genetic change could affect neocortical development so drastically. “It was a coincidental mutation that had enormous consequences,” Huttner says.
Neuroscientist Alysson Muotri at the University of California, San Diego, is more sceptical. He points out that various cell lines behave differently when made into organoids, and he would like to see the ancestral version of TKTL1 tested in more human cell lines. Furthermore, he says, the original Neanderthal genome was compared with that of a modern European — human populations in other parts of the world might share some genetic variants with Neanderthals.
Pinson says that the Neanderthal version of TKTL1 is rare among humans today, but adds that it’s unknown whether the mutation causes any disease or cognitive differences. The only way to prove that it has a role in cognitive function, Huttner says, would be to genetically engineer mice or ferrets that have the human form of the gene and compare their behaviour to that of animals that express the ancestral version. Pinson is now planning to look further into the mechanisms through which TKTL1 drives the birth of brain cells.
|
Human and Neanderthal brains were roughly the same size. Credit: Adapted from Alamy
More than 500,000 years ago, the ancestors of Neanderthals and modern humans were migrating around the world when a pivotal genetic mutation caused some of their brains to improve suddenly. This mutation, researchers report in Science [1], drastically increased the number of brain cells in the hominins that preceded modern humans, probably giving them a cognitive advantage over their Neanderthal cousins.
“This is a surprisingly important gene,” says Arnold Kriegstein, a neurologist at the University of California, San Francisco. However, he expects that it will turn out to be one of many genetic tweaks that gave humans an evolutionary advantage over other hominins. “I think it sheds a whole new light on human evolution.”
When researchers first reported the sequence of a complete Neanderthal genome in 2014 [2], they identified 96 amino acids — the building blocks that make up proteins — that differ between Neanderthals and modern humans, as well as some other genetic tweaks. Scientists have been studying this list to learn which of these changes helped modern humans to outcompete Neanderthals and other hominins.
Cognitive advantage
To neuroscientists Anneline Pinson and Wieland Huttner at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, one gene stood out. TKTL1 encodes a protein that is made when a fetus’s brain is first developing. A mutation in the human version changed one amino acid, resulting in a protein that is different from those found in hominin ancestors, Neanderthals and non-human primates.
|
no
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
yes_statement
|
"neanderthals" had larger "brains" than "modern" "humans".. "modern" "humans" had smaller "brains" than "neanderthals".
|
https://www.bbc.com/future/article/20211008-what-if-other-human-species-hadnt-died-out
|
What if other human species hadn't died out - BBC Future
|
What if other human species hadn't died out
Would we still see our humanity in the same way if other hominin species – from Australopithecus to Neanderthals – hadn't gone extinct?
Reader Question: We now know from evolutionary science that humanity has existed in some form or another for around two million years or more. Homo sapiens are comparatively new on the block. There were also many other human species, some which we interbred with. The question is then inevitable – when can we claim personhood in the long story of evolution? Are Chimpanzees people? Do Australopithecines have an afterlife? What are the implications for how we think about rights and religion? – Anthony A. MacIsaac, 26, Scotland.
--
In our mythologies, there's often a singular moment when we became "human". Eve plucked the fruit of the Tree of Knowledge and gained awareness of good and evil. Prometheus created men from clay and gave them fire. But in the modern origin story, evolution, there's no defining moment of creation. Instead, humans emerged gradually, generation by generation, from earlier species.
Just like any other complex adaptation – a bird's wing, a whale's fluke, our own fingers – our humanity evolved step by step, over millions of years. Mutations appeared in our DNA, spread through the population, and our ancestors slowly became something more like us and, finally, we appeared.
People are animals, but we're unlike other animals. We have complex languages that let us articulate and communicate ideas. We're creative: we make art, music, tools. Our imaginations let us think up worlds that once existed, dream up worlds that might yet exist, and reorder the external world according to those thoughts. Our social lives are complex networks of families, friends and tribes, linked by a sense of responsibility towards each other. We also have an awareness of ourselves, and our universe: sentience, sapience, consciousness, whatever you call it.
And yet the distinction between ourselves and other animals is, arguably, artificial. Animals are more like humans than we might like to think.
Modern humans were preceded by a wide variety of different hominin species (Credit: Marcin Rogozinski/Alamy)
That's especially true of the great apes. Chimps, for example, have simple gestural and verbal communication. They make crude tools, even weapons, and different groups have different suites of tools – distinct cultures. Chimps also have complex social lives, and cooperate with each other.
As Charles Darwin noted in The Descent of Man, almost everything odd about Homo sapiens – emotion, cognition, language, tools, society – exists, in some primitive form, in other animals. We're different, but less different than we think.
In the past, some species were far more like us than other apes – Ardipithecus, Australopithecus, Homo erectus and Neanderthals. Homo sapiens are the only survivors of a once diverse group of humans and human-like apes, collectively known as the hominins. It is a group that includes around 20 known species and probably dozens of as yet unknown species.
The extinction of other hominins, however, has helped to create the impression of a vast, unbridgeable gulf that separates our species from the rest of life on Earth. But the division would be far less clear if those species still existed. What looks like a bright, sharp dividing line is really an artefact of extinction.
The discovery of these extinct species now blurs that line again and shows how the distance between us and other animals was crossed – gradually, over millennia.
Our lineage probably split from the chimpanzees around six million years ago. These first hominins, members of the human line, would barely have seemed human, however. For the first few million years, hominin evolution was slow.
The first big change was walking upright, which let hominins move from forests into more open grassland and bush. But if they walked like us, nothing else suggests the first hominins were any more human than chimps or gorillas. Ardipithecus, the earliest well-known hominin, had a brain that was slightly smaller than a chimp's, and there's no evidence they used tools.
At this point, Homo erectus appeared. Erectus was taller, more like us in stature, and had large brains – several times bigger than a chimp's brain, and up to two-thirds the size of ours. They made sophisticated tools, like stone handaxes. This was a major technological advance. Handaxes needed skill and planning to create, and you probably had to be taught how to make one. It may have been a metatool – used to fashion other tools, like spears and digging sticks.
Like us, Homo erectus had small teeth. That suggests a shift from plant-based diets to eating more meat, probably obtained from hunting.
Some of these species were startlingly like us in their skeletons, and their DNA.
Homo neanderthalensis, the Neanderthals, had brains approaching ours in size, and evolved even larger brains over time until the last Neanderthals had cranial capacities comparable to a modern human's. They might have thought of themselves, even spoke of themselves, as human.
There's so much about Neanderthals we don't know, and never will. But if they were so like us in their skeletons and their behaviours, it's reasonable to guess they may have been like us in other ways that don't leave a record – that they sang and danced, that they feared spirits and worshipped gods, that they wondered at the stars, told stories, laughed with friends, and loved their children. (Read more about the secret lives of Neanderthal children.)
To the extent Neanderthals were like us, they must have been capable of acts of great kindness and empathy, but also cruelty, violence and deceit.
Many archaeologists now believe that Neanderthals were not so different from our own species (Credit: Joe McNally/Getty Images)
Far less is known about other species, like Denisovans, Homo rhodesiensis, and extinct sapiens, but it's reasonable to guess from their large brains and human-looking skulls that they were also very much like us.
I admit this sounds speculative, but for one detail. The DNA of Neanderthals, Denisovans and other hominins is found in us. We met them, and we had children together. (Read more from BBC Future about these sexual liaisons.) That says a lot about how human they were.
It's not impossible that Homo sapiens took Neanderthal women captive, or vice versa. But for Neanderthal genes to enter our populations, we had to not only mate but successfully raise children, who grew up to raise children of their own. That's more likely to happen if these pairings resulted from voluntary intermarriage. Mixing of genes also required their hybrid descendants to become accepted into their groups — to be treated as fully human.
These arguments hold not only for the Neanderthals, I'd argue, but for other species we interbred with, including Denisovans, and unknown hominins in Africa. Which isn't to say that encounters between our species were without prejudice, or entirely peaceful. It is conceivable that our own species may have been responsible for the extinction of these peoples. But there must have been times we looked past our differences to find a shared humanity.
Finally, it's telling that while we did replace these other hominins, this took time. Extinction of Neanderthals, Denisovans, and other species took hundreds of thousands of years. If Neanderthals and Denisovans were really just stupid, grunting brutes, lacking language or complex thought, it's impossible they could have held modern humans off as long as they did.
Why, if they were so like us, did we replace them? It's unclear, which suggests the difference was something that doesn't leave clear marks in fossils or stone tools. Perhaps a spark of creativity – a way with words, a knack for tools, social skills – gave us an edge. Whatever the difference was, it was subtle, or it wouldn't have taken us so long to win out.
Until now, I've dodged an important question, and arguably the most important one. It's all well and good to discuss how our humanity evolved, but what even is humanity? How can we study and recognise it, without defining it?
People tend to assume that there's something that makes us fundamentally different from other animals. Most people, for example, would tend to think that it's okay to sell, cook or eat a cow, but not to do the same to the butcher. This would be, well, inhuman. As a society, we tolerate displaying chimps and gorillas in cages but would be uncomfortable doing this to each other. Similarly, we can go to a store and buy a puppy or a kitten, but not a baby.
The rules are different for us and them. We inherently see ourselves as occupying a different moral and spiritual plane. We might bury our dead pet, but we wouldn't expect the dog's ghost to haunt us, or to find the cat waiting in Heaven. And yet, it's hard to find evidence for this kind of fundamental difference.
The word "humanity" implies taking care of and having compassion for each other, but that's arguably a mammalian quality, not a human one. A mother cat cares for her kittens, and a dog loves his master, perhaps more than any human does. Killer whales and elephants form lifelong family bonds. Orcas appear to grieve for their dead calves, and elephants have been seen visiting the remains of their dead companions. Emotional lives and relationships aren't unique to us.
Perhaps it's awareness that sets us apart. But dogs and cats certainly seem aware of us – they recognise us as individuals, like we recognise them. They understand us well enough to know how to get us to give them food, or let them out the door, or even when we've had a bad day and need company. If that's not awareness, what is?
We might point to our large brains as setting us apart, but does that make us human? Bottlenose dolphins have somewhat larger brains than we do. Elephant brains are three times the size of ours, orcas, four times, and sperm whales, five times. Brain size also varies in humans. Something other than brain size must make us human. Or maybe there's more going on in the minds of other animals, including extinct hominins, than we think.
We could define humanity in terms of higher cognitive abilities such as art, maths, music, language. This creates a curious problem because humans vary in how well we do all these things. I'm less literary than Jane Austen, less musical than Taylor Swift, less articulate than Martin Luther King. In these respects, am I less human than they are?
Humanity has grown to think that there is a huge gulf between our own species and other animals (Credit: Jorge Sanz/Getty Images)
If we can't even define it, how can we really say where it starts, and where it ends – or that we're unique? Why do we insist on treating other species as inherently inferior, if we're not exactly sure what makes us, us?
Neither are we necessarily the logical endpoint of human evolution. We were one of many hominin species, and yes, we won out. But it's possible to imagine another evolutionary course, a different sequence of mutations and historical events leading to Neanderthal archaeologists studying our strange, bubble-like skulls, wondering just how human we were.
The nature of evolution means that living things don't fit into neat categories. Species gradually change from one into another, and every individual in a species is slightly different – that makes evolutionary change possible. But that makes defining humanity hard.
We're both unlike other animals due to natural selection, but like them because of shared ancestry – the same, yet different. And we humans are both like and unlike each other, united by common ancestry with other Homo sapiens, different due to evolution and the unique combination of genes we inherit from our families or even other species, like Neanderthals and Denisovans.
True, in some ways, our species isn't that diverse. Homo sapiens shows less genetic diversity than your average bacterial strain, our bodies show less variation in shape than sponges, or roses, or oak trees. But in our behaviour, humanity is wildly diverse. We are hunters, farmers, mathematicians, soldiers, explorers, carpenters, criminals, artists. There are so many different ways of being human, so many different aspects to the human condition, and each of us has to define and discover what it means to be human. It is, ironically, this inability to define humanity that is one of our most human characteristics.
* Nicholas Longrich is a senior lecturer in paleontology and evolutionary biology at the University of Bath
--
This article originally appeared on The Conversation, and is republished under a Creative Commons licence.
|
The first big change was walking upright, which let hominins move from forests into more open grassland and bush. But if they walked like us, nothing else suggests the first hominins were any more human than chimps or gorillas. Ardipithecus, the earliest well-known hominin, had a brain that was slightly smaller than a chimp's, and there's no evidence they used tools.
At this point, Homo erectus appeared. Erectus was taller, more like us in stature, and had large brains – several times bigger than a chimp's brain, and up to two-thirds the size of ours. They made sophisticated tools, like stone handaxes. This was a major technological advance. Handaxes needed skill and planning to create, and you probably had to be taught how to make one. It may have been a metatool – used to fashion other tools, like spears and digging sticks.
Like us, Homo erectus had small teeth. That suggests a shift from plant-based diets to eating more meat, probably obtained from hunting.
Some of these species were startlingly like us in their skeletons, and their DNA.
Homo neanderthalensis, the Neanderthals, had brains approaching ours in size, and evolved even larger brains over time until the last Neanderthals had cranial capacities comparable to a modern human's. They might have thought of themselves, even spoke of themselves, as human.
There's so much about Neanderthals we don't know, and never will. But if they were so like us in their skeletons and their behaviours, it's reasonable to guess they may have been like us in other ways that don't leave a record – that they sang and danced, that they feared spirits and worshipped gods, that they wondered at the stars, told stories, laughed with friends, and loved their children. (Read more about the secret lives of Neanderthal children.)
|
no
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
yes_statement
|
"neanderthals" had larger "brains" than "modern" "humans".. "modern" "humans" had smaller "brains" than "neanderthals".
|
https://www.nature.com/articles/s42003-018-0125-4
|
Ribcage measurements indicate greater lung capacity in ...
|
Abstract
Our most recent fossil relatives, the Neanderthals, had a large brain and a very heavy body compared to modern humans. This type of body requires high levels of energetic intake. While food (meat and fat consumption) is a source of energy, oxygen via respiration is also necessary for metabolism. We would therefore expect Neanderthals to have large respiratory capacities. Here we estimate the pulmonary capacities of Neanderthals, based on costal measurements and physiological data from a modern human comparative sample. The Kebara 2 male had a lung volume of about 9.04 l; Tabun C1, a female individual, a lung volume of 5.85 l; and a Neanderthal from the El Sidrón site, a lung volume of 9.03 l. These volumes are approximately 20% greater than the corresponding volumes of modern humans of the same body size and sex. These results show that the Neanderthal body was highly sensitive to energy supply.
Introduction
Neanderthals are heavy-bodied hominins with relatively short distal limbs and wide trunks1,2,3,4,5,6,7,8,9 consisting of a wide pelvis10,11,12,13 and a wide central–lower thorax14,15,16,17,18,19,20. Therefore, their body shape was characterized as “short but massive”9, with similar proportions to current cold-adapted populations2 but heavier and more muscular. They also showed larger brains than modern humans did in absolute terms, presenting an average cranial capacity of around 1600 cc for males and 1300 cc for females21,22,23,24,25,26. It has recently been proposed that, even though the Neanderthal cerebellum was smaller than in early Homo sapiens, the differences in Neanderthal cerebrum size compared to modern humans are the result of the larger occipital lobes27.
The evolutionary origin for the Neanderthal body shape in the European hominin lineage can certainly be found in the Sima de Los Huesos site (Burgos, Spain)6,28,29. According to these studies, the short but massive body shape found in Neanderthals was inherited from their Middle Pleistocene ancestors H. heidelbergensis, even though they were probably slightly taller than Neanderthals6,28,29. Some authors have also proposed wide trunks, citing fossil evidence of the European Lower Pleistocene from the Gran Dolina site ATD6 [6,30,31]. However, this morphological evidence should be interpreted with some caution, because the fossil record at the Gran Dolina site is more scarce and fragmentary than that of the Sima de Los Huesos site6. In addition, we must be cautious about the evolutionary significance of the fact that wide and heavy bodies are found in both H. antecessor and Neanderthals: although some authors have proposed H. antecessor as an ancestor of both modern humans and Neanderthals32,33, this is only one of several potential evolutionary scenarios34.
Regardless of whether the short and massive bodies of Neanderthals evolved in the European Middle or Lower Pleistocene, there is agreement about their wide pelvises and central–lower ribcages. A potential explanation for the wide trunk of Neanderthals is based on bioenergetics9,35,36. Neanderthals should have shown greater oxygen consumption than modern humans not only in order to maintain the basic metabolism of a heavier body but also in order to provide oxygen to their large brain and muscles, which require large amounts of oxygen9,35,37. Using estimates of daily energetic expenditure (DEE) in Neanderthals and current human populations, Froehle and Churchill36 found that Neanderthals tended to expend more energy than their modern human counterparts under all climatic conditions (cold, temperate and warm) and in both sexes. However, how would Neanderthals have obtained the large amount of oxygen needed to maintain their high DEE? They would have done so with a large and powerful ribcage, which could explain (at least partially) the wide trunks of Neanderthals.
There is agreement that the Neanderthal thorax was relatively larger than ours in the central–caudal area (but see Chapman et al.38), but it is less clear whether the Neanderthal thorax was large for their mass and stature35. In addition, the Neanderthal thorax was mediolaterally expanded in the central–caudal area compared to that of modern humans15,17,18,19,20. From a functional point of view, it is important to note that the central–lower thorax (ribs 7–12) is where the diaphragm, one of the main muscles responsible for respiration, is attached39,40,41. Since this area of the ribcage is mediolaterally broader and large in Neanderthals, authors have proposed a larger diaphragmatic surface linked to greater diaphragmatic power and excursion during breathing cycles compared to modern humans14,15,16,17,18,19,20. In addition, it has recently been observed that the lower thorax contributes to kinematic thorax size changes more than the upper thorax does, so evolutionary changes in the caudal area would have had a greater functional impact than changes in the upper area42.
These interspecific functional differences in ribcages, probably caused by the greater need for oxygen intake in Neanderthals, should be reflected in their respiratory parameters. Total lung capacity (TLC), understood as the maximum amount of air that lungs can hold in maximum inspiration after a maximal expiration43, has been used to address differences in oxygen intake in modern humans44. Specifically, these authors used maximum anteroposterior and mediolateral thoracic diameters measured in X-ray images to study their correlation with TLC. However, estimating TLC in fossil individuals becomes harder, since there are only isolated elements in the fossil record, and regressions calculated for mediolateral and anteroposterior diameters in anatomically connected thoraces cannot be used. Therefore, TLC estimates for Neanderthals, based on measurements of individual elements of the ribcage, are necessary and have not been performed to date.
The aim of this paper is to fill this gap of knowledge using traditional measurements and three-dimensional (3D) geometric morphometrics of individual ribs of healthy volunteers whose TLC is known in order to calculate regressions of individual rib size on TLC. We use our regressions to estimate the TLC for a female (Tabun 1) and a male (Kebara 2) Neanderthal, using their best-preserved ribs in order to measure their costal size. We also estimate the TLC for another Neanderthal specimen of unknown sex from the El Sidrón site as well as for ATD6 hominins, in light of the scenario which hypothesized that the large TLC for Neanderthals was inherited from their possible ancestors of the Lower Pleistocene in Europe. Finally, we explore whether these species presented large TLC values for their stature (TLC/S ratio) and body mass (TLC/M ratio).
Results
Overview
Raw values of tubercle–ventral arch, calculated as the cumulative distance between semilandmarks (TVA_sml; see Methods) of individual ribs from the modern human sample, as well as TLC, stature and lean body mass associated with the individuals they come from, can be observed in Supplementary Data 1 and 2. Raw values of TVA_sml and estimated stature and lean body mass for fossil specimens can also be observed in Supplementary Data 1 and 2. Results of exponential regression analysis, by level, of rib size on TLC, can be observed in Table 1 and Fig. 1. Even though all the regressions are statistically significant (permutation test; p < 0.0003), the rib TVA_sml was more correlated with TLC in central–caudal ribs. When we studied the correlation of tubercle–ventral chord (TVC) and centroid size (CS) with TLC (linear regressions), we found smaller correlations at every rib level except for the first level, than comparing TVA_sml vs. TLC correlations (Fig. 1). Therefore, we decided to use the TVA_sml approach instead of TVC or CS to calculate TLC. Despite those lower correlations, the formulae of linear regressions for calculation of TLC using TVC and CS are given in Supplementary Tables 1 and 2.
Table 1 Results of exponential regression analysis of TVA_sml by level on TLC
Fig. 1
Variation along the costal sequence (1–10) of the r² of linear regressions of centroid size (CS; grey) and tubercle-ventral chord (TVC; yellow) on TLC, as well as of the exponential regression of tubercle-ventral arch (TVA_sml; blue) on TLC. For every rib except rib 1, correlations were greater using TVA_sml than using TVC or CS
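The estimation step behind Table 1 and Fig. 1 can be sketched as follows: fit TLC = a * exp(b * TVA_sml) by least squares on log-transformed TLC, then apply the fitted curve to a fossil arc length. All numbers below are synthetic stand-ins; the genuine coefficients are those reported in Table 1.

import numpy as np

rng = np.random.default_rng(0)
tva_sml_cm = rng.uniform(22.0, 34.0, 36)                                  # hypothetical rib arc lengths (cm)
tlc_l = 1.2 * np.exp(0.055 * tva_sml_cm) * rng.lognormal(0.0, 0.05, 36)   # hypothetical TLC values (l)

# Exponential model TLC = a * exp(b * TVA_sml), fitted as a straight line on log(TLC).
b, log_a = np.polyfit(tva_sml_cm, np.log(tlc_l), 1)
a = np.exp(log_a)

def estimate_tlc(arc_length_cm):
    return a * np.exp(b * arc_length_cm)

print(f"fit: TLC ~= {a:.2f} * exp({b:.3f} * TVA_sml)")
print(f"arc of 33.0 cm -> estimated TLC {estimate_tlc(33.0):.2f} l")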
Absolute TLC in Neanderthals and Lower Pleistocene hominins
We used ribs 5 and 7 left and 10 right from the Kebara 2 individual; ribs 6–8 left from the Tabun 1 individual; rib 5 right from the El Sidrón site20 and ribs 7 (ADT6–89+206) and 10 left (ADT6–39) from ATD6 hominins to calculate their TLC. These ribs were the only ribs available in the record that could be measured following our measurement protocol (TVC, TVA_sml and CS). Estimations of TLC in fossil specimens using TVA_sml regressions yielded, in the Kebara 2 male, values of 6.42 l, 11.34 l and 9.37 l for ribs 5, 7 and 10, respectively (mean 9.04 l); for the Tabun 1 individual, values of 5.71, 6.03 l and 5.80 l for ribs 6–8, respectively (mean 5.85 l); a value of 9.03 l for the El Sidrón individual and for the ATD6 hominins, values of 5.28 l and 8.70 l for ribs 7 and 10, respectively. The mean TLC value for our male comparative sample was 7.20 l (95% confidence interval (CI): 6.80–7.59) and the mean TLC value for our female comparative sample was 4.85 l (95% CI: 4.62–5.01). Therefore, TLC estimations of the Kebara 2 and Tabun 1 individuals were (on average) larger than the mean of their corresponding modern human samples and also outside the 95% CI, and estimation of TLC using the rib 5 from the El Sidrón site yielded a value (9.03 l) that was outside the corresponding CI for male modern humans. Estimation of TLC of ATD6 using rib 7 was larger than the female average and outside the 95% CI of females, and TLC estimation using rib 10 was larger than the male average and outside the 95% CI of males. TLC estimations and statistics for our modern human comparative sample can be observed in Table 2.
Table 2 Mean, standard deviation (sd.) and 95% confidence interval (95% CI) of total lung capacity (TLC), as well as TLC relative to stature (TLC/S) and lean body mass (TLC/M) for males and females of our comparative sample of modern humans
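For reference, a t-based 95% confidence interval of the mean, of the kind these comparisons rely on, can be computed as below. Whether the authors used a t-based interval or another method is not stated in this excerpt, and the sample values here are invented rather than taken from Supplementary Data 1 and 2.

import numpy as np
from scipy import stats

def mean_with_ci(values, level=0.95):
    v = np.asarray(values, dtype=float)
    m = v.mean()
    half = stats.sem(v) * stats.t.ppf(0.5 + level / 2.0, df=v.size - 1)
    return m, m - half, m + half

male_tlc_l = [6.4, 7.1, 7.8, 6.9, 7.5, 7.2, 6.8, 7.6]   # invented values, not the study data
m, lo, hi = mean_with_ci(male_tlc_l)
print(f"mean {m:.2f} l, 95% CI {lo:.2f}-{hi:.2f} l; an estimate of 9.04 l would lie outside")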
TLC in Neanderthals and Lower Pleistocene hominins relative to stature and body mass
Estimations of TLC/S ratio yielded, in the Kebara 2 individual, values of 0.039, 0.068 and 0.056 for ribs 5, 7 and 10, respectively (mean 0.054). For the Tabun 1 individual, values of 0.037, 0.039 and 0.037 were calculated for ribs 6–8, respectively (mean 0.038). For ATD6 hominins, values of 0.031 and 0.050 were calculated for ribs 7 and 10, respectively. The mean value of TLC/S for our male comparative sample was 0.041 (95% CI: 0.039–0.044) and the mean value for our female comparative sample was 0.030 (95% CI: 0.029–0.031). Therefore, TLC/S estimates of the Kebara 2 and Tabun 1 individuals were (on average) statistically larger than their corresponding modern human samples (Fig. 2). The ATD6 TLC/S ratio, estimated using rib 7, was larger than the female average but at the upper limit of the 95% CI, whereas when estimated using rib 10, it was larger than the male 95% CI (Fig. 2). In the El Sidrón rib, TLC/S could not be calculated since its stature was not available. TLC/S statistics for our modern human sample can be observed in Table 2.
Fig. 2
Bivariate plot showing TLC relation with stature, observing the 95% confidence intervals for modern humans as well as fossil values where stature was known in the literature
Finally, estimations of TLC/M ratio yielded, in the Kebara 2 individual, values of 0.099, 0.174 and 0.144 for the ribs 5, 7 and 10, respectively (mean 0.139). For the Tabun 1 individual, values of 0.120, 0.127 and 0.122 were calculated for ribs 6–8, respectively (mean 0.123). This value could not be calculated for the El Sidrón individual or the ATD6 hominins, because their body mass values are not available in the current literature. The mean value of TLC/M for our male comparative sample was 0.125 (95% CI: 0.114–0.136) and the mean value for our female comparative sample was 0.107 (95% CI: 0.098–0.117). TLC/M estimations for Kebara 2 and Tabun 1 were (on average) larger than their corresponding modern human sample, being outside their 95% CI. TLC/M statistics for our modern human comparative sample can be observed in Table 2.
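The ratios themselves are plain quotients, and back-calculating from the reported figures makes the implied units explicit (stature in cm, lean body mass in kg). The stature and lean mass printed below are therefore implied by the paper's own numbers, not independent measurements.

kebara_mean_tlc_l = 9.04
kebara_tlc_s = 0.054      # mean TLC / stature ratio reported above
kebara_tlc_m = 0.139      # mean TLC / lean body mass ratio reported above

implied_stature_cm = kebara_mean_tlc_l / kebara_tlc_s     # ~167 cm
implied_lean_mass_kg = kebara_mean_tlc_l / kebara_tlc_m   # ~65 kg
print(f"implied stature ~{implied_stature_cm:.0f} cm, implied lean body mass ~{implied_lean_mass_kg:.0f} kg")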
Discussion
TLC as an absolute measurement related to respiratory volume can be used to address the issue of respiratory and energetic demands in modern humans44,45 and, potentially, in fossil hominins as well9,35.
However, although TLC values are obtained through a straightforward technique in hospital subjects (as in Bellemare's44 study), it is more challenging when we deal with the fossil record. This is because we can only infer TLC from variables measured in individual elements of the ribcage such as the ribs and vertebrae. In this regard, our results are pioneering in showing that individual rib size (assessed through TVA_sml, TVC and CS) can be correlated with TLC. We also specify that, although our 3D measurement (CS) is more correlated with TLC than the traditional measurement TVC for ribs 3–10, the tubercle–ventral arch (TVA_sml) is even more informative about TLC. This is possibly caused by the fact that TVA captures information about mediolateral width and lung circumference, while TVC only captures anteroposterior size, which must also influence CS. In addition, we specify that the size of central–lower ribs is more correlated with TLC than is the size of upper ribs (Fig. 1). This is consistent with recent research which shows that lower thorax size is more correlated with functional size, understood as the size increase from maximum expiration to maximum inspiration42.
Our results for Neanderthals show that TLC presents absolute values that are larger than in their corresponding human counterparts (Table 3). Kebara 2, a male Neanderthal from Israel, shows a mean value of 9.04 l of TLC, which is statistically larger than our male human sample (mean = 7.20 l) and the one from Bellemare et al.44 (mean = 6.27 l). Our estimates for Tabun 1, a female Neanderthal from Israel, yielded a mean value of 5.85 l, which is statistically larger than our female sample (mean = 4.85 l) and the mean of Bellemare et al.44 (mean = 4.81 l). It should be noted that the male Kebara 2 TLC was 54% larger than the value for the female Tabun 1. The fact that this percentage is slightly larger in Neanderthals than in our modern human sample (around 48%, see above) could be the result of differences in body composition, because Kebara presents a larger lean mass compared to Tabun 1 than our male modern humans compared to our modern females (Table 3). The El Sidrón SD-1450 rib also provides insights into the Neanderthal TLC, and since it is statistically larger than our male sample, it is likely that it would have belonged to a male individual.
It is important to note that, if we had tried to estimate TLC values of these fossil specimens using other variables (such as stature) from standard human equations, Tabun 1 TLC would have been estimated at 4.67 l, 4.50 l and 4.91 l using the formulae from Crappo et al.46, Roca et al.47 and Quanjer et al.48, respectively. Had we used standard human equations to estimate the Kebara 2 TLC value, we would have obtained values of 6.18 l, 6.20 l and 6.36 l using the formulae from Quanjer et al.48, Cordero et al.49 and Neder et al.50, respectively. Therefore, both Kebara 2 and Tabun 1 present much larger values of TLC using our equations than when using human standard equations. Because different equations are used depending on the sex, we did not calculate this value for the El Sidrón Neanderthal and ATD6 hominins, since their sex was not known.
Recent evidence suggests that the large TLC observed in Neanderthals compared to modern humans was the result of large ribs in the central–lower thorax coupled with a more dorsal orientation of the transverse processes in Neanderthals compared to modern humans, causing mediolateral expansion of the ribcage18,20. This ribcage morphology (Fig. 3), combined with our results of TLC for Neanderthals, is consistent with a large oxygen intake to maintain their expected high DEE proposed by previous authors9,35,51. That large DEE must be caused by their large brains (Fig. 3, Table 3) and large lean body mass (Table 3), but alternative explanations, such as the possibility that Neanderthals had large guts (liver and urinary systems) necessary for processing large amounts of meat, could also be linked with high DEE52.
Fig. 3
a Thorax and lungs' shape in the frontal view in modern humans and Neanderthals and their associated brains in the lateral view. Neanderthal thorax and skull belong to Kebara 2 [5] and Guattari Neanderthals, respectively. Modern human thorax and skull belong to an average of four modern humans [82] and OI-2053, respectively. b Superimposition in the frontal view of the Neanderthal and modern human ribcages. c Superimposition in the caudal view of the Neanderthal and modern human ribcages
Even though there is agreement about the large size of the Neanderthal ribcage14,15,16,17,18,19,20, it is not as clear whether their ribcages were larger for their body mass or their stature35. For example, estimates of the Shanidar 3 Neanderthal respiratory area of rib 8 suggest that it was proportional to his body mass but that the respiratory area of the Kebara 2 rib 8 was relatively larger for his body mass35. In this regard, our results could suggest that both Kebara 2 and Tabun 1 presented a larger TLC/M ratio than our modern human reference samples, which supports Churchill’s35 work. As for whether Neanderthals present larger TLC for their stature compared to modern humans, our results support this assertion since TLC/S values of Kebara 2 and Tabun 1 individuals were (on average) larger than their corresponding modern human samples. The fact that both Neanderthals presented larger TLC/M and TLC/S values compared to our modern human sample must be related to their large DEE.
However, some caution must be taken in the interpretation of these results since TLC/M and TLC/S ratio are based on estimates of stature and lean body mass in Neanderthals9,35,51,53 and this could introduce some error in the ratios. Even when including this potential error, it is clear that Neanderthals’ thoraces were larger for their stature (Fig. 2), which would be consistent with previous research on ribcage/body size ratios based on rib size/humerus length15. It is also important to recall that lean body mass estimates were calculated applying fat-free mass percentages to the total Neanderthal body mass, which were taken from modern Inuit individuals51,54,55. Therefore, it is possible that the percentages for Neanderthals were different than those of Inuits. In addition to differences in fat-free mass percentages, there may also be differences in other tissues, such as brown adipose tissue. Although the role of this tissue in environmental adaptation is speculative, it is the only human tissue dedicated exclusively to heat production56. Body composition in Neanderthals is not the focus of our work and should be addressed in future research.
Regarding the evolutionary origin of the large Neanderthal TLC, H. heidelbergensis (likely potential ancestors of Neanderthals) are also thought to have large thoraces, both in absolute terms and perhaps relative to their stature as well6,57. However, the lack of literature on fossil remains of the costal skeleton makes it difficult to address this issue. Lower Pleistocene hominins from the Gran Dolina site (Burgos, Spain) are hypothetical ancestors of H. heidelbergensis (and thus Neanderthals) and are also thought to have large thoraces because of their long clavicles6,30. Whether H. antecessor is actually a species itself or represents a European branch of H. erectus/ergaster34, recent Bayesian analyses58,59 suggest that H. antecessor belongs to a basal clade of modern humans and Neanderthals, alongside other early Homo species such as H. erectus, ergaster and the recently discovered species named H. naledi60. Therefore, H. antecessor could be used as an approach to test whether large-bodied early Homo species already presented a large TLC.
Our results of estimated TLC based on ribs 7 and 10 yielded values of 5.28 l and 8.70 l for ATD6 hominins, respectively, which were larger than our comparative sample of female and male modern humans, respectively. In this case, we are not certain that these ribs belonged to the same individual, so we hypothesize here that ATD6–39 (the larger value) could represent a male rib, whereas ATD6–89+206 (the smaller value) could represent a female rib. If this is confirmed, we would see in ATD6 hominins the same evolutionary trend that we see in Neanderthals, males and females being larger (on average) than our modern human comparative sample. However, some caution should be taken because of the uncertainty in the composition of the ATD6 sample31. The TLC/M ratio for these hominins could not be calculated since body mass values are not available in the current literature due to the fact that this fossil site did not yield any remains of lower limbs that were well enough preserved to provide evidence of body mass30. Regarding stature, ATD6 hominins presented an average stature of 172.5 cm, which was larger than the average for Neanderthals6,30,57. The TLC/S ratio for ATD6 hominins using rib 7 was larger than the female average and larger than the male average using rib 10. This would support the possibility that ribs ATD6–89+206 and ATD6–39 are female and male ribs, respectively. It would also support what we found in Neanderthals, that is, that the large TLC relative to stature was beginning to be evident in the Lower Pleistocene of Europe, even when considering that ATD6 hominins presented larger statures than Neanderthals6,30.
Therefore, according to the evidence of TLC, if we accept that H. antecessor was in the basal clade of both Neanderthals and modern humans, we suggest here that a large ribcage relative to stature is present in the whole European hominin lineage (represented here by ATD6 hominins and Neanderthals). However, whether it is also present in other European hypothetically intermediate species such as H. heidelbergensis must be addressed in future research. The large ribcage of the European hominin lineage could be linked to the wide trunks proposed by previous authors for those species6, which would show an evolutionary trend towards Neanderthals, based on relative stature reduction and relative thorax size increase.
Regarding the adaptive significance of this evolutionary trend, it should be noted that in the Lower and Middle Pleistocene there is a trend towards large body sizes across most of the mammal clade, with herbivores showing a larger size increase than carnivores61,62,63,64,65. In carnivores, this size increase could be important for facilitating hunting tasks, whereas in herbivores it could be important for avoiding being preyed upon by carnivores. This general ecological rule could also apply to hominins and perhaps underlie the large body mass of Lower and Middle Pleistocene hominins, partially explaining their wide trunks9. Besides this general explanation, other more specific ones have been proposed: the stout (“short but massive”) Neanderthal body could be explained by the eco-geographical rules of Allen and Bergmann66,67, which could cause the shortening of distal limbs and the widening of the trunk observed in Neanderthals1,2,3,4,5,6,7,8,9. However, recent studies on bioenergetics show that Neanderthals inhabiting the same climatic conditions as modern humans present larger DEE than modern humans. This could be the result of the cost of maintaining heavy and highly muscled bodies with large brains (Fig. 3, Table 3) along with the need to exert muscular force in the accomplishment of subsistence tasks9,35,36,51. This larger muscle mass would have provided them with a greater thermogenic capacity and also greater insulation against cold compared to modern humans, which could be understood as an exaptation9,35. Future studies should include more Neanderthal ribs and also other hominin species not included here, such as H. heidelbergensis or H. erectus, in order to expand the evolutionary framework.
Finally, even though physiological function must have been of evolutionary significance, caution should be used in assuming that an enlarged thorax was result of natural selection and was passed down as an adaptation to later European Pleistocene hominins. In particular, enhanced pulmonary function as modelled in modern human populations living at high altitudes shows that developmental processes have an important role in shaping the physiology of respiration and oxygen consumption68,69,70,71,72. Developmental factors also play an important role in determining thorax morphology. Here again, humans living at high altitudes from many different regions provide important data demonstrating this point, but the small sample sizes of hominin fossil assemblages make developmental factors difficult to test. The possibility that developmental processes contributed to the emergence of a large thorax and pulmonary capacity in early Pleistocene hominins of Europe and in later Neanderthals does not alter the results of this study.
Our work is, to our knowledge, the first successful attempt to estimate TLC in fossil hominins. We have found that Neanderthals presented around 20% larger lung capacities than modern humans, both absolutely and relative to their lean mass and stature. This could be caused by the large lean body mass of Neanderthals, coupled with their large brains and gut size (liver and urinary systems), contributing to their high DEE. Assuming that H. antecessor is in the basal clade of Neanderthals (which is still a heated debate), the trend towards large lung capacities could even be observed in the lower Pleistocene of ATD6. Finally, although we used a large sample of current Europeans to create a statistical model (controlled for stature and body mass) to calculate TLC in fossil hominins, future research should include broader samples from different modern human populations. Those that present different limb proportions compared to Europeans and that could parallel Neanderthal body proportions (populations adapted to high altitudes and extreme low temperatures) are especially necessary. In addition, future studies should make an effort to include early H. sapiens such as Cro-Magnon, Skhul or Abri Pataud.
Methods
Material used
We used computed tomography (CT) reconstructions of rib cages that belonged to 36 adult Spanish individuals (17 males and 19 females) who were CT-scanned in maximum inspiration. The data were obtained from hospital subjects who were previously scanned as a healthy control group to be compared with pathological individuals belonging to a different research project at the Hospital Universitario La Paz (Madrid, Spain). In none of the cases could any pathologies affecting skeletal thorax form be observed. The sex composition is detailed in Supplementary Data 1. Consent was given to use these CT data for research and prior to the analyses all CT data were anonymized to comply with the Helsinki declaration73.
Fossil specimens used in this study comprise 3D surface scans of original costal remains from the Neanderthal male Kebara 2 (Mount Carmel, Israel), the Neanderthal female Tabun 1 (Mount Carmel, Israel) and the virtual reconstruction of the Neanderthal rib 5 from El Sidrón site SD-1450 (Asturias, Spain), as well as from high-quality casts of ribs from ATD6 hominins from the Atapuerca site (Burgos, Spain). Only the best-preserved fossils, where the rib shaft was complete from the articular tubercle to the distal end, were studied; this constrained the sample but prevented uncertainty that could be caused by estimates of missing data. Since recent studies have found that variation in lower ribs has a larger impact on functional dynamics than does variation in upper ribs42, in fossil specimens we only studied ribs from the central–lower thorax (from rib 5 onwards). Therefore, ribs 5 and 7 left and 10 right from the Kebara 2 individual; ribs 6–8 left from the Tabun 1 individual; rib 5 right from the El Sidrón site (virtual reconstruction of SD-1450 [20]) and ribs 7 (ADT6–89+206) and 10 left (ADT6–39) from ATD6 [31] hominins were studied.
Measurement of TLC and anthropometric variables and 3D data acquisition and quantification
For the comparative human sample, TLC was measured in litres (l) for each individual through spirometry using standard medical techniques. In order to study TLC in relation to stature, we measured stature in our modern human sample in standardized upright standing position using standard anthropometric techniques. For stature estimations of fossils, we used data from Froehle and Churchill36 for Neanderthals and from Carretero et al.30 for ATD6 hominins. In order to study TLC in relation to mass, we measured kilograms (kg) of fat-free mass (also called lean body mass) through bioimpedance using standard medical techniques in our comparative sample. For Neanderthals, only total body mass estimates were available9,53. In order to calculate lean body mass for Neanderthals Kebara 2 and Tabun 1, we used fat-free mass percentages from current Inuits, which have been successfully used by previous authors as a proxy for calculating this variable in Neanderthals51,54,55.
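The lean body mass step reduces to a single multiplication. The sketch below uses placeholder numbers, since neither the Inuit-derived fat-free percentages nor the Neanderthal total-mass estimates used by the authors are reproduced in this excerpt.

def lean_body_mass_kg(total_mass_kg, fat_free_fraction):
    # Lean (fat-free) body mass = total body mass x fat-free fraction from a reference population.
    return total_mass_kg * fat_free_fraction

# Placeholder inputs only; not the figures used for Kebara 2 or Tabun 1.
print(f"{lean_body_mass_kg(80.0, 0.85):.1f} kg")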
Each thorax of those individuals of which we measured TLC, lean body mass and stature was CT-scanned and segmented through a semiautomatic segmentation protocol of DICOM images using the software Mimics 8.0 (http://biomedical.materialise.com/mimics). In order to reduce the possible error related to left–right laterality, only ribs 1–10 from the left side were segmented from each thorax. The post-processing of the 3D surface models of skeletal elements (cleaning, smoothing and mesh hole-filling) was carried out by the Artec Studio v12 software (www.Artec3D.com) and the final 3D costal models were imported into the Viewbox4 software (www.dhal.com) for digitization. Then landmarks and semilandmarks for sliding74,75,76 were located on rib models following previous published protocols from García-Martínez et al.77. Since some fossil specimens (Tabun 1, Kebara 2 and ATD6 hominins) did not preserve the rib head, the three landmarks which described that structure in the comparative costal sample were excluded. The rib morphology was therefore described by 17 homologous 3D landmarks and sliding semilandmarks on each rib 1–10 (Fig. 4).
Fig. 4
Landmarks and semilandmarks' digitization protocol. On the ribs, landmarks were placed at the most lateral point of the articular tubercle, the most inferior point at the angulus costae at the lower rib border (where the angle is most clearly recognizable) and the most superior and inferior points of the sternal end. In addition, 13 equidistant semilandmarks were located along the lower costal border between the articular tubercle and the inferior sternal end
Once the landmarks were digitized, size data of costal elements were obtained as follows: (1) the TVC was calculated as the distance between the landmark at the tubercle and the landmark at the inferior point of the distal end. (2) The TVA_sml was calculated as the cumulative distance between the semilandmarks from the landmark at the tubercle and the landmark at the inferior point of the distal end. We used the term TVA_sml instead of TVA because our measurements do not exactly fit the definition of TVA from previous authors15,78. (3) Centroid size was obtained by Generalized Procrustes Analysis of the whole landmark data set79,80,81. Then we carried out regression analyses of TLC on the size of each costal level in the comparative sample in order to explore which rib level's size was most strongly correlated with TLC. We observed that the regressions of TLC on CS and of TLC on TVC were linear, whereas the regression of TLC on TVA_sml was exponential. After that, we used these regressions to estimate TLC in fossil specimens using their size values. We explored the relationship between TLC estimates in the fossil specimens and the 95% CIs for the mean of the comparative sample. In addition, we also calculated TLC in relation to stature (TLC/S) and TLC in relation to lean body mass (TLC/M).
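As a rough illustration of this estimation step (an editor's sketch, not the authors' code), the snippet below fits the linear and exponential regression forms described above on a made-up comparative sample and then predicts TLC for a fossil from its rib size; all values, and the use of NumPy, are assumptions.

```python
import numpy as np

# Hypothetical comparative sample for one rib level: rib size measures and spirometric TLC (litres).
tva_sml = np.array([240.0, 252.0, 261.0, 270.0, 281.0, 290.0, 298.0, 310.0])  # cumulative semilandmark distance
cs      = np.array([310.0, 322.0, 330.0, 338.0, 350.0, 360.0, 368.0, 380.0])  # centroid size
tlc     = np.array([4.9,   5.3,   5.6,   6.0,   6.5,   6.9,   7.2,   8.0])

# Linear regression of TLC on centroid size: TLC = c * CS + d.
c, d = np.polyfit(cs, tlc, 1)

# Exponential regression of TLC on TVA_sml: TLC = a * exp(b * TVA_sml),
# fitted as a straight line on log(TLC).
b, log_a = np.polyfit(tva_sml, np.log(tlc), 1)

def tlc_from_tva(tva_fossil):
    """Estimate TLC (litres) for a fossil rib from its TVA_sml value."""
    return float(np.exp(log_a + b * tva_fossil))

def tlc_from_cs(cs_fossil):
    """Estimate TLC (litres) for a fossil rib from its centroid size."""
    return float(c * cs_fossil + d)

print(tlc_from_tva(305.0), tlc_from_cs(372.0))  # hypothetical fossil measurements
```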
We also included a validation study, addressing the accuracy of the estimates using the different costal levels. For this aim, we re-estimated the TLC values of the individuals in the comparative sample, which were already known, using the exponential regressions of TLC on TVA_sml that we obtained. We calculated, for every individual, the difference between the original known value and the estimate, using different rib levels. We observed that the best estimates (difference from the original of −0.01 l on average) were obtained using the seventh rib. This information can be found in Supplementary Table 3 and Supplementary Figure 1.
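The validation step can be sketched in the same spirit: refit the exponential regression for each rib level, re-estimate the already known TLC of every comparative individual, and compare rib levels by their mean error. The rib levels and measurements below are placeholders, not the study's data.

```python
import numpy as np

def mean_reestimation_error(tva_by_rib, tlc_known):
    """For each rib level, fit TLC = a * exp(b * TVA_sml) on the comparative sample,
    re-estimate every individual's TLC and return the mean (known - estimated) difference."""
    errors = {}
    for rib, tva in tva_by_rib.items():
        b, log_a = np.polyfit(tva, np.log(tlc_known), 1)
        estimates = np.exp(log_a + b * tva)
        errors[rib] = float(np.mean(tlc_known - estimates))
    return errors

# Placeholder data: one TVA_sml vector per rib level, plus the measured TLC values.
tlc_known = np.array([4.9, 5.3, 5.6, 6.0, 6.5, 6.9, 7.2, 8.0])
tva_by_rib = {
    5:  np.array([205.0, 213.0, 220.0, 228.0, 236.0, 245.0, 251.0, 262.0]),
    7:  np.array([240.0, 252.0, 261.0, 270.0, 281.0, 290.0, 298.0, 310.0]),
    10: np.array([180.0, 188.0, 195.0, 201.0, 209.0, 216.0, 224.0, 233.0]),
}
print(mean_reestimation_error(tva_by_rib, tlc_known))  # the level closest to zero would be preferred
```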
Data availability
The data sets generated during and/or analysed during the current study, besides that given in Supplementary Data 1 and 2, are available from the corresponding author on reasonable request.
Acknowledgements
This research is funded by CGL-2012–37279, CGL-2015–63648-P, CGL2016–75109-P (Ministry of Economy, Industry and Competitiveness, Spain), PI10/02089 (Fondo de Investigación Sanitaria, Ministry of Health, Social Services and Equality Spain) and the Leakey Foundation. The authors acknowledge Mario Modesto for his discussion on statistics. We acknowledge Dr. Antonio García Tabernero for providing pictures of the Neanderthal brain and skulls for Fig. 3, as well as Dr. Luca Bondioli and Dr. Roberto Macchiarelli for providing the CT scans.
Electronic supplementary material
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
|
Abstract
Our most recent fossil relatives, the Neanderthals, had a large brain and a very heavy body compared to modern humans. This type of body requires high levels of energetic intake. While food (meat and fat consumption) is a source of energy, oxygen via respiration is also necessary for metabolism. We would therefore expect Neanderthals to have large respiratory capacities. Here we estimate the pulmonary capacities of Neanderthals, based on costal measurements and physiological data from a modern human comparative sample. The Kebara 2 male had a lung volume of about 9.04 l; Tabun C1, a female individual, a lung volume of 5.85 l; and a Neanderthal from the El Sidrón site, a lung volume of 9.03 l. These volumes are approximately 20% greater than the corresponding volumes of modern humans of the same body size and sex. These results show that the Neanderthal body was highly sensitive to energy supply.
Introduction
Neanderthals are heavy-bodied hominins with relatively short distal limbs and wide trunks1,2,3,4,5,6,7,8,9 consisting of a wide pelvis10,11,12,13 and a wide central–lower thorax14,15,16,17,18,19,20. Therefore, their body shape was characterized as “short but massive”9, with similar proportions to current cold-adapted populations2 but heavier and more muscular. They also showed larger brains than modern humans did in absolute terms, presenting an average cranial capacity of around 1600 cc for males and 1300 cc for females21,22,23,24,25,26.
|
yes
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
yes_statement
|
"neanderthals" had larger "brains" than "modern" "humans".. "modern" "humans" had smaller "brains" than "neanderthals".
|
https://indianapublicmedia.org/amomentofscience/bigger-brains-smarter.php
|
Do Bigger Brains Make Us Smarter? | A Moment of Science ...
|
Do Bigger Brains Make Us Smarter?
By
Jeremy Shere
Posted May 12, 2016
Dear A Moment of Science, do bigger brains make us smarter? I know that primitive humans, like Homo Erectus, had smaller brains than modern humans. But Neanderthals had slightly larger brains than modern humans, and there's no evidence that Neanderthals were smarter than we are.
--Big Brained Thinker
The short answer is that we don't really know. The Neanderthal example is a good one. Clearly, the raw size and weight of a brain doesn't necessarily translate to significantly greater intelligence.
Now, to be clear, there is some evidence that brain size correlates with increased intelligence. But the correlation is pretty weak. According to the latest research, brain size accounts for only between 9 and 16 percent of overall variability in human intelligence.
But on the other hand, men on average have larger brains than women, and have billions more neurons in parts of the brain that control perception, memory, language, and reasoning. Yet there is no difference between the average IQs of men and women.
Plus, some animals, such as elephants and whales, have brains that are much larger than ours. And while it's true that our brain-to-body ratio is much greater than elephants or whales, that doesn't solve the puzzle, either. Because shrews and some birds have a greater brain-to-body ratio than we do, and it's safe to assume that the average human will beat the average bird on an IQ test any day.
So while brain size does matter somewhat, it's not the only or most definitive factor to consider when pondering what makes some people, and some species, smarter than others.
|
Do Bigger Brains Make Us Smarter?
By
Jeremy Shere
Posted May 12, 2016
Dear A Moment of Science, do bigger brains make us smarter? I know that primitive humans, like Homo Erectus, had smaller brains than modern humans. But Neanderthals had slightly larger brains than modern humans, and there's no evidence that Neanderthals were smarter than we are.
--Big Brained Thinker
The short answer is that we don't really know. The Neanderthal example is a good one. Clearly, the raw size and weight of a brain doesn't necessarily translate to significantly greater intelligence.
Now, to be clear, there is some evidence that brain size correlates with increased intelligence. But the correlation is pretty weak. According to the latest research, brain size accounts for only between 9 and 16 percent of overall variability in human intelligence.
But on the other hand, men on average have larger brains than women, and have billions more neurons in parts of the brain that control perception, memory, language, and reasoning. Yet there is no difference between the average IQs of men and women.
Plus, some animals, such as elephants and whales, have brains that are much larger than ours. And while it's true that our brain-to-body ratio is much greater than elephants or whales, that doesn't solve the puzzle, either. Because shrews and some birds have a greater brain-to-body ratio than we do, and it's safe to assume that the average human will beat the average bird on an IQ test any day.
So while brain size does matter somewhat, it's not the only or most definitive factor to consider when pondering what makes some people, and some species, smarter than others.
|
yes
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
no_statement
|
"neanderthals" did not have "bigger" "brains" than "modern" "humans".. "modern" "humans" did not have smaller "brains" than "neanderthals".
|
https://www.history.com/topics/pre-history/neanderthals
|
Neanderthals
|
Neanderthals are an extinct species of hominids that were the closest relatives to modern human beings. They lived throughout Europe and parts of Asia from about 400,000 until about 40,000 years ago, and they were adept at hunting large Ice Age animals. There’s some evidence that Neanderthals interbred with modern humans—in fact, many humans today share a small portion of Neanderthal DNA. Theories about why Neanderthals went extinct abound, but their disappearance continues to puzzle scientists who study human evolution.
Scientists estimate that humans and Neanderthals (Homo neanderthalensis) shared a common ancestor that lived 800,000 years ago in Africa.
Fossil evidence suggests that a Neanderthal ancestor may have traveled out of Africa into Europe and Asia. There, the Neanderthal ancestor evolved into Homo neanderthalensis some 400,000 to 500,000 years ago.
The human ancestor remained in Africa, evolving into our own species—Homo sapiens. The two groups may not have crossed paths again until modern humans exited Africa some 50,000 years ago.
Neanderthal Skull Discovered
In 1829, part of the skull of a Neanderthal child was found in a cave near Engis, Belgium. It was the first Neanderthal fossil ever found, though the skull wasn’t recognized as belonging to a Neanderthal until decades later.
Quarry workers cutting limestone in the Feldhofer Cave in Neandertal, a small valley of the Düssel River near the German city of Düsseldorf, uncovered the first identified Neanderthal bones in 1856.
Anatomists puzzled over the bones: Included among them was a piece of a skull which looked human, but not quite. The Neanderthal skull included a prominent, bony brow ridge and large, wide nostrils. The Neanderthal body was also stockier and shorter than ours.
In an 1857 paper, German anatomist Hermann Schaaffhausen posited that the Neanderthal fossil belonged to a “savage and barbarous race of ancient human.” Seven years later, Irish geologist William King concluded that the Neanderthal fossil was not human and that it belonged to a separate species he named Homo neanderthalensis.
Neanderthal vs. Homo Sapiens
Fossil evidence suggests that Neanderthals, like early humans, made an assortment of sophisticated tools from stone and bones. These included small blades, hand axes and scrapers used to remove flesh and fat from animal skin.
Neanderthals were skilled hunters who used spears to kill large Ice Age mammals such as mammoths and woolly rhinos.
Little is known about Neanderthal culture and customs, though there’s some evidence that Neanderthals might have made symbolic or ornamental objects, created artwork, used fire and intentionally buried their dead.
Genetic analysis shows that Neanderthals lived in small, isolated groups that had little contact with each other.
Neanderthals had bigger brains than humans, though that doesn’t mean they were smarter. One recent study found that a large portion of the Neanderthal brain was devoted to vision and motor control.
This would have come in handy for hunting and coordinating movement of their stocky bodies, yet left relatively little brain space compared to modern humans for areas that controlled thinking and social interactions.
Neanderthal DNA
Most researchers agree that modern humans and Neanderthals interbred, though many believe that sex between the two species occurred rarely.
These matings introduced a small amount of Neanderthal DNA into the human gene pool. Today, most people living outside of Africa have trace amounts of Neanderthal DNA in their genomes.
People of European and Asian descent have an estimated 2 percent Neanderthal DNA. Indigenous Africans may have little or no Neanderthal DNA. That’s because the two species did not meet—and mate—until after modern humans had migrated out of Africa.
Some of the Neanderthal genes that persist in humans today may influence traits having to do with sun exposure. These include hair color, skin tone and sleeping patterns.
Neanderthals had been living in Europe and Asia for hundreds of thousands of years when modern humans arrived. Neanderthals were already adapted to the climate of Eurasia, and some experts think Neanderthal DNA may have conveyed some advantage to modern humans as they exited Africa and colonized points north.
Neanderthal Extinction
Neanderthals went extinct in Europe around 40,000 years ago, roughly 5,000 to 10,000 years after first meeting Homo sapiens. There are several theories for their extinction.
Around 40,000 years ago, the climate grew colder, transforming much of Europe and Asia into a vast, treeless steppe. Fossil evidence shows that Neanderthal prey, including woolly mammoths, may have shifted their range further south, leaving Neanderthals without their preferred foods.
Humans, who had a more diverse diet than Neanderthals and long-distance trade networks, may have been better suited to find food and survive the harsh, new climate.
Some scientists believe that Neanderthals gradually disappeared through interbreeding with humans. Over many generations of interbreeding, Neanderthals—and small amounts of their DNA—may have been absorbed into the human race.
Other theories suggest that modern humans brought some kind of disease with them from Africa for which Neanderthals had no immunity—or, modern humans violently exterminated Neanderthals when they crossed paths, though there’s no archeological evidence that humans killed off Neanderthals.
|
Seven years later, Irish geologist William King concluded that the Neanderthal fossil was not human and that it belonged to a separate species he named Homo neanderthalensis.
Neanderthal vs. Homo Sapiens
Fossil evidence suggests that Neanderthals, like early humans, made an assortment of sophisticated tools from stone and bones. These included small blades, hand axes and scrapers used to remove flesh and fat from animal skin.
Neanderthals were skilled hunters who used spears to kill large Ice Age mammals such as mammoths and woolly rhinos.
Little is known about Neanderthal culture and customs, though there’s some evidence that Neanderthals might have made symbolic or ornamental objects, created artwork, used fire and intentionally buried their dead.
Genetic analysis shows that Neanderthals lived in small, isolated groups that had little contact with each other.
Neanderthals had bigger brains than humans, though that doesn’t mean they were smarter. One recent study found that a large portion of the Neanderthal brain was devoted to vision and motor control.
This would have come in handy for hunting and coordinating movement of their stocky bodies, yet left relatively little brain space compared to modern humans for areas that controlled thinking and social interactions.
Neanderthal DNA
Most researchers agree that modern humans and Neanderthals interbred, though many believe that sex between the two species occurred rarely.
These matings introduced a small amount of Neanderthal DNA into the human gene pool. Today, most people living outside of Africa have trace amounts of Neanderthal DNA in their genomes.
People of European and Asian descent have an estimated 2 percent Neanderthal DNA. Indigenous Africans may have little or no Neanderthal DNA. That’s because the two species did not meet—and mate—until after modern humans had migrated out of Africa.
Some of the Neanderthal genes that persist in humans today may influence traits having to do with sun exposure. These include hair color, skin tone and sleeping patterns.
|
yes
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
no_statement
|
"neanderthals" did not have "bigger" "brains" than "modern" "humans".. "modern" "humans" did not have smaller "brains" than "neanderthals".
|
https://www.yourgenome.org/stories/evolution-of-the-human-brain/
|
Evolution of the human brain – YourGenome
|
Evolution of the human brain
The human brain, in all its staggering complexity, is the product of millions of years of evolution.
Evolution of the human brain
The brain has undergone some remarkable changes through its evolution. The most primitive brains are little more than clusters of cells bunched together at the front of an organism. These cells process information received from sense organs also located at the head.
Over time, brains have evolved. The brains of vertebrate animals have developed in both size and sophistication. Humans have the largest brain in proportion to their body size of any living creatures, but also the most complex. Different regions of the brain have become specialised with distinctive structures and functions. For example, the cerebellum is involved in movement and coordination, whereas the cerebral cortex is involved in memory, language and consciousness.
By understanding how the human brain evolved, researchers hope to identify the biological basis of the behaviours that set humans apart from other animals. Behaviour can influence the success of a species, so it is reasonable to assume that human behaviours have been shaped by evolution. Understanding the biology of the brain may also shed some light on many conditions linked to human behaviour, such as depression, autism and schizophrenia.
Brain size and intelligence
If you were to put a mouse brain, a chimp brain and a human brain next to each other and compare them it might seem obvious why the species have different intellectual abilities. The human brain is around four times bigger than the chimp’s and around 15 times larger than the mouse’s. Even allowing for differences in body size, humans have unusually large brains.
Bigger isn’t always better
But size isn’t the whole story. Studies have shown that there is not a particularly strong relationship between brain size and intelligence in humans. This is further strengthened when we compare the human brain to the Neanderthal brain. Because no Neanderthal brains exist today scientists have to study the inside of fossil skulls to understand the brains that were inside. The Neanderthal brain was just as big as ours, in fact probably bigger.
The skulls of modern humans, while generally larger than those of our earlier ancestors, are also different in shape. This suggests that the modern brain is less of a fixed shape than that of earlier humans and can be influenced over its lifetime by environmental or genetic factors (this is called plasticity).
There are some interesting differences when we compare the pattern of brain growth in humans to chimpanzees, our closest living relatives. Both brains grow steadily in the first few years, but the shape of the human brain changes significantly during the first year of life. During this period, the developing brain will be picking up information from its environment providing an opportunity for the outside world to shape the growing neural circuits.
Prehistoric skulls.
Image credit: Grant Museum, Wellcome Images
An analysis of a Neanderthal child’s skull has shown that their growth patterns were more similar to the chimpanzee than to modern humans. This suggests that although the brains of modern humans and Neanderthals reached a similar size by adulthood, this was achieved through different patterns of growth in different regions of the brain.
A major constraint on human brain size is the pelvic girdle, which (in females) has to contend with the demands of delivering a large-headed baby. Humans have evolved to extend the period when the brain grows to include the period after birth. This subtle difference in early development might have had big implications for our survival.
Language and brain development
Language is probably the key characteristic that distinguishes us from other animals. Thanks to our sophisticated language skills, we can convey information rapidly and efficiently to other members of our species. We can coordinate what we do and plan actions, things that would have provided a great advantage early on in our evolution.
Language is complex and we are only just beginning to understand its various components. For example, we have to consider the sensory aspects of language. To understand what someone is saying we need to detect their speech and transmit this information to the brain. The brain then has to process these signals to make sense of them. Parts of our brain have to deal with syntax (how the order of words affects meaning) and semantics (what the words actually mean).
Memory is also very important as we need to remember what words mean. Then there is the entire vocalisation system which is involved in working out what we want to say and making sure we say it clearly by coordinating muscles to make the right noises.
Studying language by comparing different species is difficult because no other animals come close to our language abilities. Some birds are talented mimics but you couldn’t have a conversation with a Mynah bird! Even when our closest relatives, chimpanzees, are raised in human families they never gain verbal language skills. Although chimpanzees can learn to understand our language and use ‘graphical’ symbols, they show little inclination to communicate anything other than basic information, such as requests for food. Humans, by contrast, seem to be compulsive communicators.
A master gene for language?
Perhaps the greatest insight into the evolution of language has come from work on the FOXP2 gene. This gene plays a key role in language and vocalization and allows us to explore the changes underpinning the evolution of complex language.
The FOXP2 gene was first discovered by Simon Fisher, Anthony Monaco and colleagues at the University of Oxford in 2001. They came across the gene through their studies of DNA samples from a family with distinctive speech and language difficulties. Around 15 members of the family, across three generations, were able to understand spoken words perfectly, but struggled to string words together in order to form a response. The pattern in which this condition was inherited, suggested that it was a dominant single-gene condition (one copy of the altered gene was enough to disrupt their overall language abilities). The researchers identified the area of the genome likely to contain the affected gene but were unable to identify the specific gene mutation within this region.
They then had a stroke of luck, in the form of another unrelated child with very similar symptoms. Looking at this child’s DNA they identified a chromosome rearrangement that sliced through a gene in the region of DNA where they suspected the mutated gene was. This gene was FOXP2. After sequencing the FOXP2 gene in the family they found a specific mutation in the gene that was shared by all the affected family members. This confirmed the importance of FOXP2 in human language.
Simon and his colleagues went on to characterise FOXP2 as a ‘master controller’, regulating the activity of many different genes in several areas of the brain. One key role is in the growth of nerve cells and the connections they make with other nerve cells during learning and development. Mutations in the FOXP2 gene interfere with the part of the brain responsible for language development, leading to the language problems seen in this family.
The evolution of FOXP2
The FOXP2 gene is highly conserved between species. This means that the gene has a very similar DNA sequence in different species, suggesting it has not evolved much over time. The FOXP2 protein in the mouse only differs from the human version by three amino acids. The chimpanzee version only differs from the human version by two amino acids. These two changes in amino acids may be key steps in the evolution of language in humans.
What difference do these small changes in sequence make to the functionality of the FOXP2 protein? Studies with mice show that changing the mouse version of the FOXP2 gene to be the same sequence as the human version only has subtle effects. Remarkably, the resulting mouse pups are essentially normal but show subtle changes in the frequency of their high-pitched vocalisations. They also show distinctive changes to wiring in certain parts of their brain.
From these studies scientists have concluded that FOXP2 is involved in the brain’s ability to learn sequences of movements. In humans this has translated into the complex muscle movements needed to produce the sounds for speech, whereas in other species it may have a different role, coordinating other movements.
FOXP2 regulates many other genes in the body and evolution seems to have favoured a subset of these as well, particularly in Europeans. FOXP2 regulated genes are important not only in brain development, but they also play important roles in human reproduction and immunity.
FOXP2 and the Neanderthals
Neanderthals have generally been characterised as a large, brutish species with little or no intellectual, social or cultural development. However, the fact that they had the same FOXP2 gene as modern humans suggests that Neanderthals may have had some capacity for speech and communication.
Various strands of evidence have helped to establish a picture of how Neanderthals might have lived and communicated. Archaeological records suggest that they probably lived in small groups and due to their high energy needs, spent most of their time hunting.
Neanderthals are unlikely to have developed social groups bound together by effective communication. This is probably because they lacked the key mental abilities needed to establish and maintain social groups. Recursive thinking (thinking about thinking), theory of mind (appreciating what is going on in someone else’s head) and inhibition of impulsive reactions (being able to control impulses) are all important elements to successful social interactions. Interestingly brain injury and developmental disorders, such as autism, can interfere with these abilities and social skills in humans.
This evidence suggests that the Neanderthal brain may not have been wired to support effective communication and diplomatic skills. They would have been extremely difficult to get along with! The Neanderthal brain was probably better adapted to maximise their visual abilities. They would have used their oversized eyes and large brains to survive and hunt in the lower-light levels in Europe. This would limit the space available in the brain to develop the systems needed for communication and social interactions. However, their smaller social brain regions could have enabled them to establish smaller social networks which may have improved their chances of survival in the harsh European environment.
|
Neanderthal brains exist today scientists have to study the inside of fossil skulls to understand the brains that were inside. The Neanderthal brain was just as big as ours, in fact probably bigger.
The skulls of modern humans, while generally larger than those of our earlier ancestors, are also different in shape. This suggests that the modern brain is less of a fixed shape than that of earlier humans and can be influenced over its lifetime by environmental or genetic factors (this is called plasticity).
There are some interesting differences when we compare the pattern of brain growth in humans to chimpanzees, our closest living relatives. Both brains grow steadily in the first few years, but the shape of the human brain changes significantly during the first year of life. During this period, the developing brain will be picking up information from its environment providing an opportunity for the outside world to shape the growing neural circuits.
Prehistoric skulls.
Image credit: Grant Museum, Wellcome Images
An analysis of a Neanderthal child’s skull has shown that their growth patterns were more similar to the chimpanzee than to modern humans. This suggests that although the brains of modern humans and Neanderthals reached a similar size by adulthood, this was achieved through different patterns of growth in different regions of the brain.
A major constraint on human brain size is the pelvic girdle, which (in females) has to contend with the demands of delivering a large-headed baby. Humans have evolved to extend the period when the brain grows to include the period after birth. This subtle difference in early development might have had big implications for our survival.
Language and brain development
Language is probably the key characteristic that distinguishes us from other animals. Thanks to our sophisticated language skills, we can convey information rapidly and efficiently to other members of our species. We can coordinate what we do and plan actions, things that would have provided a great advantage early on in our evolution.
Language is complex and we are only just beginning to understand its various components. For example, we have to consider the sensory aspects of language. To understand what someone is saying we need to detect their speech and transmit this information to the brain.
|
yes
|
Anthropometry
|
Did Neanderthals have bigger brains than modern humans?
|
no_statement
|
"neanderthals" did not have "bigger" "brains" than "modern" "humans".. "modern" "humans" did not have smaller "brains" than "neanderthals".
|
https://www.nbcnews.com/id/wbna26625212
|
Neanderthals beat mammoths, so why not us?
|
Neanderthals beat mammoths, so why not us?
A virtual reconstruction of a Neanderthal whose skeleton suggests their brains were comparable to — or even larger than — those of modern humans. National Academy of Sciences
Sept. 9, 2008, 5:58 PM UTC / Source: Discovery Channel
By Jennifer Viegas
They may have been stronger, but Neanderthals looked, ate and may have even thought much like modern humans do, suggest several new studies that could help explain new evidence that the early residents of prehistoric Europe and Asia engaged in head-to-head combat with woolly mammoths.
Together, the findings call into question how such a sophisticated group apparently disappeared off the face of the earth around 30,000 years ago.
The new evidence displays the strengths and weaknesses of Neanderthals, suggesting they were skilled hunters but not as brainy and efficient as modern humans, who eventually took over Neanderthal territories.
Most notable among the new studies is what researchers say is the first ever direct evidence that a woolly mammoth was brought down by Neanderthal weapons.
Margherita Mussi and Paola Villa made the connection after studying a 60,000 to 40,000-year-old mammoth skeleton unearthed near Neanderthal stone tool artifacts at a site called Asolo in northeastern Italy. The discoveries are described in this month's Journal of Archaeological Science.
Villa, a curator of paleontology at the University of Colorado Museum of Natural History, told Discovery News that other evidence suggests Neanderthals hunted the giant mammals, but not as directly. At the English Channel Islands, for example, 18 woolly mammoths and five woolly rhinoceroses dating to 150,000 years ago "were driven off a cliff and died by falling into a ravine about 30 meters (over 98 feet) deep. They were then butchered."
Villa, however, pointed out that "there were no stone points or other possible weapons" found at the British site.
"At Asolo, instead there was a stone point that was very probably mounted on a wooden spear and used to kill the animal," she added.
Several arrowheads were excavated at the Italian site, but the one of greatest interest is fractured at the tip, indicating that it "impacted bone or the thick skin of the mammoth."
Other studies on stone points suggest that if such a weapon were rammed into a large beast, it would be likely to fracture the same way.
What's for dinner?
There is no question that Neanderthals craved meat and ate a lot of it.
A study in this month's issue of the journal Antiquity by German anthropologists Michael Richard and Ralf Schmitz found that Neanderthals went for red meat, not of the woolly mammoth variety, but from red deer, roe deer, and reindeer.
The scientists came to that conclusion after grinding up bone samples taken from the remains of Neanderthals found in Germany and then analyzing the isotopes within. These forms of chemical elements -- in this case, carbon and nitrogen -- reveal if the individual being tested lived on meat, fish or plants, since each food group has its own carbon and nitrogen signature.
Richard and Schmitz conclude that the Neanderthals subsisted primarily on meat from deer, which they probably stalked in organized groups.
The researchers say their findings "reinforce the idea that Neanderthals were sophisticated hunters with an advanced ability to organize and communicate."
Villa agrees.
"Neanderthals are no longer considered inferior hunters," she said. "Neanderthals were capable of hunting a wide range of prey, from dangerous animals such as brown bears, mammoths and rhinos, to large, medium and small-size ungulates such as bison, aurochs, horse, red deer, reindeer, roe deer and wild goats."
Enter Homo sapiens
Fossils suggest that Neanderthals and modern humans coexisted in Western Europe for at least 10,000 years. While there is a smattering of evidence that the two species interbred, most anthropologists believe the commingling was infrequent or not enough to substantially affect the Homo sapiens gene pool.
New evidence supports that notion, while also revealing that the world's first anatomically modern humans retained a few Neanderthal-like characteristics.
Several papers in the current Journal of Human Evolution describe the world's first known people, which shared bone, hand and ankle features with Neanderthals and possibly also Homo erectus.
John Fleagle, professor of anatomical sciences at Stony Brook University, who worked on the early human research, told Discovery News that the shared characteristics "are just primitive features retained from a common ancestor."
It's known that Neanderthals had more robust skeletons than modern humans, with particularly strong arms and hands, but were the two groups evenly matched in brainpower?
A new study in this week's Proceedings of the National Academy of Sciences provides some intriguing clues.
Marcia Ponce de Leon of the University of Zurich's Anthropological Institute and Museum and her colleagues virtually reconstructed brain size and growth of three Neanderthal infant skeletons found in Syria and Russia.
"Neanderthal brain size at birth was similar to that in recent Homo sapiens and most likely subject to similar obstetric constraints," Ponce de Leon and her team concluded, although they added that "Neanderthal brain growth rates during early infancy were higher" than those experienced by modern humans.
It appears, therefore, that while Neanderthal brains grew at about the same rate as ours, they had a small size advantage.
Brainpower trade-offs
But bigger is not always better in terms of brain function. Modern humans evolved smaller, but more efficient, brains.
Ponce de Leon and her colleagues suggest, "It could be argued that growing smaller — but similarly efficient — brains required less energy investment and might ultimately have led to higher net reproduction rates."
On the down side for people, however, brainpower efficiency doesn't come without a cost.
"Our new research suggests that schizophrenia is a byproduct of the increased metabolic demands brought about during human brain evolution," explained Philipp Khaitovich of the Max Planck Institute for Evolutionary Anthropology and the Shanghai branch of the Chinese Academy of Sciences.
Weighing the pros and cons of each species, Neanderthals and modern humans may have been evenly matched when they shared European land, with more and more scientists puzzling over how such an advanced, human-like being became extinct.
University of Exeter archaeologist Metin Eren hopes the latest findings will not only change the image of Neanderthals, but also the direction that future research on these prehistoric hominids will take.
"It is time for archaeologists to start searching for other reasons why Neanderthals became extinct while our ancestors survived," Eren said.
"When we think of Neanderthals, we need to stop thinking in terms of stupid or less advanced and more in terms of different," he added.
|
"
Enter Homo sapiens
Fossils suggest that Neanderthals and modern humans coexisted in Western Europe for at least 10,000 years. While there is a smattering of evidence that the two species interbred, most anthropologists believe the commingling was infrequent or not enough to substantially affect the Homo sapiens gene pool.
New evidence supports that notion, while also revealing that the world's first anatomically modern humans retained a few Neanderthal-like characteristics.
Several papers in the current Journal of Human Evolution describe the world's first known people, which shared bone, hand and ankle features with Neanderthals and possibly also Homo erectus.
John Fleagle, professor of anatomical sciences at Stony Brook University, who worked on the early human research, told Discovery News that the shared characteristics "are just primitive features retained from a common ancestor. "
It's known that Neanderthals had more robust skeletons than modern humans, with particularly strong arms and hands, but were the two groups evenly matched in brainpower?
A new study in this week's Proceedings of the National Academy of Sciences provides some intriguing clues.
Marcia Ponce de Leon of the University of Zurich's Anthropological Institute and Museum and her colleagues virtually reconstructed brain size and growth of three Neanderthal infant skeletons found in Syria and Russia.
"Neanderthal brain size at birth was similar to that in recent Homo sapiens and most likely subject to similar obstetric constraints," Ponce de Leon and her team concluded, although they added that "Neanderthal brain growth rates during early infancy were higher" than those experienced by modern humans.
It appears, therefore, that while Neanderthal brains grew at about the same rate as ours, they had a small size advantage.
Brainpower trade-offs
But bigger is not always better in terms of brain function. Modern humans evolved smaller, but more efficient,
|
yes
|
Anthropology
|
Did Neanderthals interbreed with modern humans?
|
yes_statement
|
"neanderthals" interbred with "modern" "humans".. "modern" "humans" interbred with "neanderthals".
|
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6309227/
|
Multiple episodes of interbreeding between Neanderthals and ...
|
No novel datasets were generated or analysed during the current study.
Abstract
Neandertals and anatomically modern humans overlapped geographically for a period of over 30,000 years following human migration out of Africa. During this period, Neandertals and humans interbred, as evidenced by Neandertal portions of the genome carried by non-African individuals today. A key observation is that the proportion of Neandertal ancestry is ~12–20% higher in East Asian individuals relative to European individuals. Here, we explore various demographic models that could explain this observation. These include distinguishing between a single admixture event and multiple Neandertal contributions to either population, and the hypothesis that reduced Neandertal ancestry in modern Europeans resulted from more recent admixture with a ghost population that lacked a Neandertal ancestry component (the “dilution” hypothesis). In order to summarize the asymmetric pattern of Neandertal allele frequencies, we compile the joint fragment frequency spectrum (FFS) of European and East Asian Neandertal fragments and compare it to both analytical theory and data simulated under various models of admixture. Using maximum likelihood and machine learning, we found that a simple model of a single admixture does not fit the empirical data and instead favor a model of multiple episodes of gene flow into both European and East Asian populations. These findings indicate more long-term, complex interaction between humans and Neandertals than previously appreciated.
2. Introduction
When anatomically modern humans dispersed out of Africa, they encountered and hybridized with Neandertals [6]. The Neandertal component of the modern human genome is ubiquitous in non-African populations, and yet is quantitatively small, representing on average only ~2% of those genomes [6, 22]. This pattern of Neandertal ancestry in modern human genomes was initially interpreted as evidence of a single period of admixture, occurring shortly after the out-of-Africa bottleneck [6, 26]. However, subsequent research showed that Neandertal ancestry is higher by ~12–20% in modern East Asian individuals relative to modern European individuals [22, 18, 36].
Neandertals occupied a vast area of Asia and Europe at the time AMH dispersed outside of Africa (~75,000 BP [12]) and, later, into Europe and Asia (~47–55,000 BP [20, 32]). Moreover, the breakdown of Neandertal segments in modern human genomes is indicative of a time-frame for admixture of 50,000–60,000 BP [26, 32] prior to the diversification of East Asian and European lineages. The genome of Ust’-Ishim, an ancient individual of equidistant relation to modern East Asians and Europeans, has similar levels of Neandertal ancestry to modern Eurasians, but in longer haplotypes, consistent with an admixture episode occurring ~52,000–58,000 BP [5]. Given the extensive support for a single, shared admixture among Eurasians, there is considerable debate surrounding the observation of increased Neandertal ancestry in East Asians.
There are several hypotheses that may explain the discrepancy in Neandertal ancestry between Europeans and East Asians. It is possible that admixture occurred in a single episode, or ‘pulse’, of gene flow, but demographic and/or selective forces shifted the remaining Neandertal alleles into the frequencies we see in modern populations. Among these explanations are differential strength of purifying selection across Eurasia [27] and that modern Europeans lost part of their Neandertal ancestry through ‘dilution’ by a ghost population which was unadmixed [34, 16]. It is also possible that admixture occurred multiple times; the first pulse of Neandertal gene flow into the population ancestral to East Asians and Europeans was supplemented by additional pulses after both populations had diverged [34, 35].
Sankararaman et al. [27] proposed that differences in the level of Neandertal ancestry in East Asian individuals could be explained by their lower ancestral effective population size relative to Europeans, which would reduce the efficacy of purifying selection against deleterious Neandertal alleles [7]. However, Kim and Lohmueller [14] found that differences in the strength of purifying selection and population size are unlikely to explain the enrichment of Neandertal ancestry in East Asian individuals. This conclusion was further strengthened by Juric et al. [10].
Another hypothesis consistent with a single episode of gene flow is that Neandertal ancestry in modern Europeans was diluted by one of the populations that mixed to create modern Europeans [15, 16]. This population, dubbed ‘Basal Eurasian’, possibly migrated out of Africa separately from the population receiving the pulse of Neandertal gene flow, and thus had little to no Neandertal ancestry.
On the other hand, admixture may have occurred multiple times; the first pulse of Neandertal gene flow into the population ancestral to East Asians and Europeans was supplemented by additional episodes after both populations had diverged [34, 35]. The finding of an individual from Peștera cu Oase, Romania with a recent Neandertal ancestor provides direct evidence of additional episodes of interbreeding, although this individual is unlikely to have contributed to modern-day diversity [5]. However, Neandertal ancestry has remained relatively constant across tens of thousands of years of Eurasian history [19], suggesting that any additional admixture events must have been smaller scale than the initial episodes of interbreeding.
Here, we study the asymmetry in the pattern of Neandertal introgression in modern human genomes between individuals of East Asian and European ancestry. We summarize the asymmetric distribution of Neandertal ancestry tracts in the East Asian and European individuals in the 1000 Genomes Project panel in a joint fragment frequency spectrum (FFS) matrix. We first fit analytical models using maximum likelihood to explain the distribution of fragments in European and Asian individuals marginally. We then compare the joint FFS to the output of genomic data simulated under specific models of admixture between Neandertals and AMH to achieve a higher resolution picture of the interplay of different demographic forces. Our results support a complex model of admixture, with early admixture occurring before the diversification of European and East Asian lineages, and secondary episodes of gene flow into both populations independently.
3. Results
We constructed the joint fragment frequency spectrum by analyzing published datasets of Neandertal fragment calls in 1000 Genomes Project individuals [27, 33]. To avoid complications due to partially overlapping fragments and difficulties calling the edges of fragments, we computed fragment frequencies by sampling a single site every 100kb and asking how many haplotypes were introgressed with confidence above a certain cutoff (Figure 1). Our main results make use of fragments called with posterior probability of 0.45 in the Steinrücken dataset, although we verified robustness across a range of cutoffs and between datasets (Supplementary Material). The observed average proportion of Neandertal ancestry in European individuals was 0.0137 and 0.0164 in East Asian individuals, corresponding to an average enrichment of 19.6% in East Asian individuals (Supplementary Figure 5 shows how this quantity changes across cutoffs).
Graphic representation of the process to make an FFS based on the posterior probability calls from Steinrücken et al. [33]. At a single position every 100 kb, introgression is assigned for individuals presenting a posterior probability above the global cut-off. The count of individuals between East Asian and European samples determines the cell in the FFS where that site is counted
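To make the construction above concrete, here is a minimal sketch (not the authors' pipeline) of how a joint FFS could be assembled, assuming per-haplotype posterior probabilities of Neandertal ancestry are available at every genotyped site:

```python
import numpy as np

def joint_ffs(post_eas, post_eur, positions, cutoff=0.45, step=100_000):
    """Build a joint fragment frequency spectrum.

    post_eas, post_eur : arrays of shape (n_haplotypes, n_sites) holding the posterior
                         probability that each haplotype is introgressed at each site
                         (an assumed input format).
    positions          : sorted 1D array of genomic coordinates for the sites.
    """
    n_eas, n_eur = post_eas.shape[0], post_eur.shape[0]
    ffs = np.zeros((n_eas + 1, n_eur + 1), dtype=int)
    # Thin to one site every `step` bp so overlapping fragments are not double-counted.
    sampled_idx = np.searchsorted(positions, np.arange(positions[0], positions[-1], step))
    for idx in sampled_idx:
        i = int((post_eas[:, idx] > cutoff).sum())  # introgressed East Asian haplotypes
        j = int((post_eur[:, idx] > cutoff).sum())  # introgressed European haplotypes
        ffs[i, j] += 1
    return ffs
```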
We first developed analytic theory to understand what the FFS would look like in each population separately under different demographic models. To our surprise, we found that when looking only at the marginal distribution of introgressed fragment frequencies, the one pulse model and the dilution model are not statistically identifiable (Supplementary Material). On the other hand, the two pulse model is identifiable. Moreover, the analytic theory reveals that population size history only impacts the FFS within each population as a function of effective population size; intuitively, this arises because once fragments enter the population, their frequency dynamics only depend on the effective population size, rather than the specifics of the population size history. With this in mind, we developed a maximum likelihood procedure to fit a one pulse and a two pulse model to the European and East Asian marginal spectra (Methods), and found strong support for the two pulse model in both cases (Λ = 193.91 in East Asians, nominal p = 7 × 10⁻⁴³; Λ = 212.64 in Europeans, nominal p = 6 × 10⁻⁴⁷; Figure 2a and b). A subsequent goodness-of-fit test strongly rejected the fit of the one pulse model (p = 2 × 10⁻²⁶ in East Asians, p = 0.0 in Europeans; χ² goodness of fit test) but could not reject the fit of the two pulse model in either population (p = 1 in East Asians, p = 0.95 in Europe; χ² goodness of fit test); see also Supplementary Figure 1, which shows the residuals of each fit. Thus, we concluded from analyzing each population in isolation that the history of admixture was complex, and involved multiple matings with Neandertals.
Individual and joint Fragment Frequency Spectra. a,b) Marginal FFS for the East Asian and European populations (first 20 bins, excluding the 0 bin). The lines represent the best fitted one pulse and two pulse models for each population. c) FFS of the Steinrücken et al. [33] introgression data. d) FFS of the Steinrücken et al. [33] introgression data projected down to 64×64 bins, as used to train the FCNN.
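The nested comparison of the one pulse and two pulse fits can be illustrated with a generic likelihood-ratio test; the log-likelihoods and degrees of freedom below are hypothetical stand-ins, not the fitted values reported above:

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_one_pulse, loglik_two_pulse, extra_params):
    """Nominal LRT for nested admixture models: Lambda = 2 * (ll_complex - ll_simple),
    compared against a chi-squared distribution with `extra_params` degrees of freedom."""
    lam = 2.0 * (loglik_two_pulse - loglik_one_pulse)
    return lam, chi2.sf(lam, df=extra_params)

# Hypothetical fitted log-likelihoods for one population:
lam, p = likelihood_ratio_test(-10250.0, -10181.5, extra_params=2)
print(f"Lambda = {lam:.2f}, nominal p = {p:.2e}")
```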
Nonetheless, looking at each population individually, we did not have power to estimate the relative contribution of dilution and multiple admixtures in shaping the patterns of Neandertal fragments seen between Europe and Asia. To gain a more global picture of the history of human-Neandertal interbreeding, we developed a supervised machine learning approach. A difficulty when simulating Neandertal admixture is the large number of free parameters associated with modeling multiple populations from which we have incomplete demographic information. Supervised machine learning applied to genomic datasets is becoming a popular solution for inference [for examples see: 28, 25, 30]. Of particular interest to this study, supervised machine learning has demonstrated the capacity for optimizing the predictive accuracy of an algorithm in datasets that cannot be adequately modeled with a reasonable number of parameters [29]. In practice, this results in the ability to describe natural processes even based on incomplete or imprecise models [29]. Supervised machine learning implementing hidden layers, or deep learning, is particularly effective in population genetic inference and learning informative features of data [30]. A definitive advantage of deep learning is how it makes full use of datasets to learn the mapping of data to parameters, allowing inference from sparse data sets [29]. Comparable likelihood-free inference methods, such as ABC, typically use a rejection algorithm, resulting in most simulations being thrown away. This necessitates a very large number of simulations for accurate inference [30, 1]. Deep learning methods also have the potential to generalize in non-local ways, allowing them to make predictions for data not covered by the training set [1, 29].
We simulated Neandertal admixture by specifying five demographic models with different numbers of admixture events (Figure 3), and produced FFS under a wide range of parameters. We used the simulated FFS to train a fully-connected neural network (FCNN). The trained network classified models successfully ~58% of the time, well above the 20% expected by chance, and was not overfit to the training data (Supplementary Figure 2). We then examined how the precision of the prediction changed when we required different levels of support for the chosen model (Figure 4a). Crucially, we see that when the classifier has high confidence in a prediction, it is very often correct, and that multiple pulse models are not often confused with the dilution model (Supplementary Figure 3).
Representation of the five different demographic models simulated in msprime. Lines within each population phylogeny indicate different paths by which alleles could enter modern populations, with red lines indicating Neandertal alleles and blue lines indicating non-Neandertal alleles. NEAN: Neandertal, ASN: East Asians, EUR: Europeans, BE: Basal Eurasians
Results from the FCNN classifier. (a) Posterior probability that a chosen model is correct (precision), for all models under different levels of support for the chosen model. The x axis shows the probability cutoff that we used to classify models, and the y axis shows the precision. Each line corresponds to a different model. Simulated datasets in which no model surpassed the cutoff were deemed unclassified. (b) Posterior probability of the empirical introgression data matching each of the five demographic models, determined by the FCNN classifier.
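As a minimal sketch of the classification step, assuming the simulated and empirical spectra have already been projected to 64×64 matrices, a fully connected classifier could be trained roughly as follows; the scikit-learn MLP, its layer sizes and the normalisation are stand-ins for the network actually used:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_ffs_classifier(simulated_ffs, model_labels):
    """Train a fully connected classifier on flattened, normalised 64x64 FFS matrices.
    `simulated_ffs` is a list of 64x64 arrays; `model_labels` gives the demographic
    model (0-4) that generated each one."""
    X = np.array([m.flatten() / m.sum() for m in simulated_ffs])
    clf = MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=500)
    clf.fit(X, model_labels)
    return clf

def classify_empirical(clf, empirical_ffs):
    """Return per-model posterior-like probabilities for the observed spectrum."""
    x = empirical_ffs.flatten() / empirical_ffs.sum()
    return clf.predict_proba(x[None, :])[0]
```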
Finally, we applied the trained FCNN to our empirical joint FFS (Figure 4b). Strikingly, we found that the FCNN supported our two most complicated demographic models, favoring a model with 3 pulses of admixture (Posterior probability ~ 0.55), and with a lower probability, a model with 3 pulses of admixture and dilution (Posterior probability ~ 0.44). These results are consistent across a range of cutoffs for calling introgressed fragments (Supplementary Figure 4), are robust to errors in fragment calling (Supplementary Figures 9 and 10), and dovetail with our maximum likelihood results showing that the best fit model must include multiple episodes of human-Neandertal interbreeding.
4. Discussion
Despite initial indications of a simple history of admixture between humans and Neandertals, more detailed analyses suggested that there might be additional, population specific episodes of admixture. By analyzing the joint fragment frequency spectrum of introgressed Neandertal haplotypes in modern Europeans and Asians, we found strong support for a model of multiple admixture events. Specifically, our results support a model in which the original pulse of introgression into the ancestral Eurasian population is supplemented with additional pulses to both European and East Asian populations after those populations diverge, resulting in elevated Neandertal ancestry in East Asians relative to Europeans. This is similar to a model recently proposed by Vernot et al. [35] for explaining differential levels of Neandertal ancestry across Europe, Asia, and Melanesia. Importantly, our results exclude a demographic model where the difference in Neandertal ancestry between Europeans and East Asians is driven primarily through dilution of Neandertal ancestry in Europe due to recent admixture with Basal Eurasians, a population lacking Neandertal ancestry. Nonetheless, we cannot exclude dilution as playing a role in the differences in Neandertal ancestry between Europe and East Asia; a model which includes multiple pulses of Neandertal introgression and dilution through Basal Eurasians was the second likeliest model in the five model comparison. Given the evidence that Basal Eurasians contributed to the modern European gene pool [16], we suspect that dilution does play a role in shaping the pattern of Neandertal ancestry across Eurasia. However, a large amount of dilution would be necessary if it were the only factor explaining the ~19.6% difference in Neandertal ancestry between Europe and East Asia, in contrast with recent work that inferred a smaller (~9.4%) contribution of Basal Eurasians to modern European individuals [11].
Several confounding factors could impact our inference. Although it is unlikely that differential purifying selection is responsible for the discrepancy between European and East Asian Neandertal ancestry [14, 34, 10], some Neandertal ancestry was likely deleterious [7, 10] and our models assume neutrality. However, the strength of selection against introgressed fragments is likely to be small compared to the demographic forces at work; moreover, there is relatively little evidence of strong differences in the strength of selection between different non-African populations [31, 4]. To explore the impact of selection, we obtained the FFS from simulations of deleterious Neandertal ancestry by Petr et al. [19] and asked if we classified their scenarios with selection as a two pulse model using maximum likelihood. We found that we rejected a one pulse model at the 5% level in only 1 out of 15 different simulations with selection, suggesting that we are not likely to misclassify selection against Neandertal ancestry as a two pulse model.
Of additional concern is the power to detect fragments in each population. To address this, we implemented a model of fragment calling errors (Supplementary Material). Based on simulations done by Steinrücken et al. [33], we expect false positive rates of approximately 0.1% and false negative rates of approximately 1%; such rates do not cause substantial shifts in the FFS (Supplementary Figure 6). Moreover, after extensive simulations, we found that the neural network trained with errors is robust to false positive fragment calls at a rate of 0.2%, and produces consistent results when applied to the real data (Supplementary Material). Finally, in an attempt to see inside the “black-box” of the fully connected network, we examined how the weights propagate from each entry of the JFFS to the final assignments (Supplementary Material). In doing so, we found that moderate frequency haplotypes are most important for distinguishing between models (Supplementary Figure 7), whereas errors in calling fragments are most likely to impact low frequency haplotypes. This, combined with the fact that our results are robust across two different datasets and a range of cutoffs for determining archaic ancestry, convinces us that our results are robust to errors in fragment calling.
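To make the error model concrete, the following minimal Python sketch (our illustration, with assumed function and variable names, not the authors' implementation) perturbs a sites-by-haplotypes matrix of 0/1 introgression calls with the false positive and false negative rates quoted above and recomputes the marginal FFS:

import numpy as np

def perturb_calls(calls, fpr=0.001, fnr=0.01, rng=None):
    # Flip introgression calls in a (sites x haplotypes) boolean matrix:
    # non-introgressed entries become false positives with probability fpr,
    # introgressed entries are dropped as false negatives with probability fnr.
    rng = np.random.default_rng() if rng is None else rng
    calls = np.asarray(calls, dtype=bool)
    false_pos = (~calls) & (rng.random(calls.shape) < fpr)
    false_neg = calls & (rng.random(calls.shape) < fnr)
    return (calls | false_pos) & ~false_neg

def marginal_ffs(calls):
    # Count, for each k, how many sampled sites are introgressed in k haplotypes.
    n = calls.shape[1]
    return np.bincount(calls.sum(axis=1), minlength=n + 1)

rng = np.random.default_rng(42)
toy_calls = rng.random((10_000, 170)) < 0.02   # toy data: ~2% introgression rate
print(marginal_ffs(toy_calls)[:5])
print(marginal_ffs(perturb_calls(toy_calls, rng=rng))[:5])

In such a toy comparison the extra mass lands mostly in the lowest-frequency classes, in line with the observation above that calling errors mostly affect low frequency haplotypes.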
In addition, it is possible that some of the Neandertal ancestry in East Asia has been misclassified, and in fact originated from Denisovan introgression. The misclassified archaic fragments could then mimic the signal of additional pulses of Neandertal introgression. To address this concern, we inferred the position of Denisovan fragments based on data from Browning et al. [2] and masked 1.6% of positions across the genome (Methods). The masking removed 0.49% of the sites called as Neandertal introgression in the Steinrücken et al. [33] data from the introgression data we used in all analyses. Though we do not believe we removed all misclassified Denisova ancestry, we think it is unlikely that a substantial enough proportion remains to mimic the signal of additional Neandertal pulses. These problems are likely to be further resolved as our ability to make accurate introgression calls for the various ancient human populations improves in the future.
Our work provides additional evidence for the ubiquity of archaic admixture in recent human history, consistent with recent work showing that humans interbred with Denisovans multiple times [2]. Though we find that additional pulses of admixture in both East Asians and Europeans are necessary to explain the distribution of Neandertal ancestry in Eurasia, we are unable to settle why East Asians have elevated Neandertal ancestry. Interestingly, in contrast to Denisovans, there does not seem to be evidence of Neandertal population structure within introgressed fragments [2]. Combined with our results, this indicates that the Neandertal population or populations that admixed with Eurasians must have been relatively closely related. This is consistent with the established inference of a long-term small effective size across Neandertals [21, 22], which has held up to scrutiny despite some claims of a larger Neandertal effective size [23, 24, 17]. Thus, we believe that a likely explanation for our results is that gene flow between humans and Neandertals was intermittent and ongoing, but in a somewhat geographically restricted region. Differential levels of admixture between different Eurasian groups may primarily reflect how long those populations coexisted with Neandertals in that region.
5. Methods
5.1. Data
We obtained the joint fragment frequency spectrum by first downloading publicly available Neandertal introgression calls from two sources [27, 33]. The Sankararaman data consists of the location of introgressed fragments along the genome in each phased haplotype for the 1000 Genomes Project populations. The Steinrücken data consists of the probability of Neandertal origin in 500 bp windows of the genome across each phased haplotype for the Central European (CEU), Han Chinese (CHB), and Southern Han Chinese (CHS) populations from the 1000 Genomes Project. We computed fragment frequencies by sampling a single site every 100kb from both sources of data. To compute the joint fragment frequency spectrum (FFS), we counted how many haplotypes were introgressed at each position. For the Steinrücken data, we called a site introgressed if it had a posterior probability of being introgressed above 45%. We then applied the 1000 Genomes accessibility mask (downloaded from ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/supporting/accessible_genome_masks/20141020.pilot_mask.whole_genome.bed). In the Supplement, we show that our results are robust to the cutoff and consistent between both datasets. We also masked Denisova fragments that were falsely called as Neandertal by downloading the S’ fragment calls from Browning et al. [2] and masking any fragment that matched Denisova > 35% and Neandertal < 25%, resulting in removal of 1.6% of the genome overall and 0.49% of Neandertal fragments. Given that the Denisovan and Neandertal populations diverged relatively early following the divergence of the Neandertal/Denisova lineage and the modern human lineage, it is unsurprising that a relatively small fraction of Denisova haplotypes were falsely assigned as Neandertal introgression. Finally, we masked the (0,0), (0,1), (1,0), and (1,1) positions of the FFS matrix, in order to reduce the impact of false negative and false positive fragment calls.
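For concreteness, a minimal sketch of this bookkeeping is given below; the 45% posterior cutoff and the masking of the (0,0), (0,1), (1,0), and (1,1) cells follow the description above, while the array layout and names are our own assumptions rather than the code used in the study:

import numpy as np

def joint_ffs(eur_calls, asn_calls, mask_corners=True):
    # eur_calls, asn_calls: (sites x haplotypes) 0/1 arrays of Neandertal
    # introgression calls at the same thinned positions (one site per 100 kb).
    # Returns an (n_eur + 1) x (n_asn + 1) matrix whose (j, k) entry counts the
    # sites introgressed in j European and k East Asian haplotypes.
    n_eur, n_asn = eur_calls.shape[1], asn_calls.shape[1]
    j = eur_calls.sum(axis=1)
    k = asn_calls.sum(axis=1)
    ffs = np.zeros((n_eur + 1, n_asn + 1), dtype=int)
    np.add.at(ffs, (j, k), 1)
    if mask_corners:
        # Zero out the cells most affected by false positive/negative calls.
        ffs[0, 0] = ffs[0, 1] = ffs[1, 0] = ffs[1, 1] = 0
    return ffs

# Turning posterior probabilities (as in the Steinruecken calls) into 0/1 calls:
# calls = (posterior > 0.45).astype(int)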
5.2. Analytical Model
We model introgression of intensity f as injection of alleles at frequency f into the population at the time of introgression. In the one pulse model, this results in an exact expression for the expected fragment frequency spectrum under the Wright-Fisher diffusion model (Supplementary Material). Multiple pulse models could be solved analytically using a dynamic programming algorithm as in Kamm et al. [11], but we instead approximate the expected frequency spectrum by assuming that the probability of sampling k introgressed haplotypes in a sample of size n + 1 is the same as sampling k haplotypes in a sample of size n for large n (cf. Jouganous et al. [9]). This results in closed-form expressions for the expected frequency spectrum under both the two pulse and the dilution model (Supplementary Material). With an expected frequency spectrum p_{n,k}(θ, M) = ℙ(k out of n haplotypes are introgressed | θ, M), given parameters θ and model M, we compute the likelihood

L(θ, M) = ∑_{k=0}^{n} x_k log p_{n,k}(θ, M)
where x_k is the number of fragments found in k out of n individuals. We optimized the likelihood using scipy, and compared models using the likelihood ratio statistic
Λ = 2 (L(θ_2, M_2) − L(θ_1, M_1))
where θ_i and M_i correspond to the i-pulse model. Under the null, Λ should be χ²-distributed with 2 degrees of freedom (since there are 2 additional parameters in the two pulse model). Simulations in the Supplementary Material suggest that p-values under this model are well calibrated, despite the impact of linkage; thus, we opted to report nominal p-values from our likelihood ratio test.
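A minimal sketch of this fitting procedure is shown below; the expected spectrum p_{n,k}(θ, M) is treated as a user-supplied callable (its closed forms are in the Supplementary Material), and the optimizer settings and function names are our assumptions rather than the exact code used:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def log_likelihood(theta, model, x):
    # Sum of x_k * log p_{n,k}(theta, model) over haplotype counts k,
    # where x[k] is the number of fragments found in k of n haplotypes.
    n = len(x) - 1
    p = np.clip(model(theta, n), 1e-300, None)   # model returns the expected FFS
    return float(np.sum(x * np.log(p)))

def fit(model, x, theta0, bounds=None):
    # Maximize the likelihood by minimizing its negative with scipy.
    res = minimize(lambda t: -log_likelihood(t, model, x), theta0,
                   bounds=bounds, method="L-BFGS-B")
    return -res.fun, res.x

def likelihood_ratio_test(model1, theta0_1, model2, theta0_2, x, df=2):
    # Lambda = 2 (L2 - L1), compared against a chi-squared distribution with
    # df degrees of freedom (2 extra parameters in the two pulse model).
    L1, _ = fit(model1, x, theta0_1)
    L2, _ = fit(model2, x, theta0_2)
    lam = 2.0 * (L2 - L1)
    return lam, chi2.sf(lam, df)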
5.3. Simulations
We used msprime [13] to simulate Neandertal introgression into two modern populations with multiple potential admixture episodes and dilution from Basal Eurasians (Figure 3). For each replicate, we simulated the complete genomes for 170 European individuals and 394 East Asian individuals, matching the sampling available from the 1000 Genomes Project panel. We used the human recombination map (downloaded from http://www.well.ox.ac.uk/~anjali/AAmap/; Hinch et al. [8]). In each simulation we mimicked our sampling scheme on the real data by sampling 1 site every 100kb and calling a Neandertal fragment by asking which individuals coalesced with the Neandertal sample more recently than the human-Neandertal population split time.
For each simulation, we drew demographic parameters, including effective population sizes and divergence times, from uniform distributions. For effective population sizes we used 500–5000 individuals for Neandertals, 5000–50000 for Eurasians, and 5000–100000 for the European and East Asian populations. For divergence times, we used 12000–26000 generations for Neandertals and humans, and 1300–2000 generations for the Eurasian split. The divergence between Basal Eurasians and Eurasians was fixed at 3000 generations. Lastly, we drew introgression times between 1500–3000 generations for gene flow into Eurasians and 800–2000 generations for gene flow into the European and East Asian populations. The time for the introgression event between Basal Eurasians and Europeans (dilution) was drawn from a uniform distribution of 200–2000 generations.
In order to ensure that our simulations focused on the correct parameter space, we constrained the resulting amount of Neandertal introgression in the modern European and East Asian genomes. The average Neandertal ancestry a was drawn from a uniform distribution between 0.01 and 0.03, and the difference in ancestry d between the East Asian and European populations was drawn from a uniform distribution between 0 and 0.01. We then determined the introgression intensity given a and d (Supplementary Material).
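As an illustration of how such a replicate might be set up in msprime, the sketch below uses assumed values drawn from within the ranges above; the population names, sizes, times, and the single-pulse structure are simplifications for illustration, not the exact simulation code used in the study:

import msprime

def simulate_one_pulse(seed, n_eur=170, n_asn=394,
                       N_nean=2_500, N_eurasia=20_000, N_modern=30_000,
                       t_split_eurasia=1_600, t_pulse=2_000, t_split_nean=16_000,
                       f_pulse=0.02, sequence_length=10_000_000, recomb_rate=1e-8):
    # Single pulse of Neandertal gene flow into ancestral Eurasians, followed by
    # the European / East Asian split (all times in generations).
    dem = msprime.Demography()
    dem.add_population(name="EUR", initial_size=N_modern)
    dem.add_population(name="ASN", initial_size=N_modern)
    dem.add_population(name="EURASIA", initial_size=N_eurasia)
    dem.add_population(name="NEAN", initial_size=N_nean)
    dem.add_population(name="ANC", initial_size=N_eurasia)
    dem.add_population_split(time=t_split_eurasia, derived=["EUR", "ASN"], ancestral="EURASIA")
    # Backwards in time, a fraction f_pulse of EURASIA lineages moves into NEAN.
    dem.add_mass_migration(time=t_pulse, source="EURASIA", dest="NEAN", proportion=f_pulse)
    dem.add_population_split(time=t_split_nean, derived=["EURASIA", "NEAN"], ancestral="ANC")
    return msprime.sim_ancestry(samples={"EUR": n_eur, "ASN": n_asn, "NEAN": 1},
                                demography=dem, sequence_length=sequence_length,
                                recombination_rate=recomb_rate, random_seed=seed)

def is_introgressed(ts, position, sample_node, nean_node, t_split_nean=16_000):
    # Call a haplotype Neandertal-introgressed at `position` if it coalesces with
    # the sampled Neandertal more recently than the human-Neandertal split.
    # sample_node / nean_node are tree sequence node IDs, e.g. from ts.samples().
    tree = ts.at(position)
    return tree.tmrca(sample_node, nean_node) < t_split_nean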
5.4. Machine Learning (FCNN)
Using the resulting joint FFS, we trained a simple fully-connected neural network (FCNN) to categorize a joint FFS into one of five demographic models. The network was implemented in Keras [3] using a TensorFlow back-end. The network used a simple architecture of three Dense layers (from 1024 nodes, to 512 nodes, to 64 nodes), each followed by a dropout layer (0.20).
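A sketch of such a network in Keras is shown below; the 1024/512/64 layer widths and the 0.20 dropout follow the description above, while the ReLU activations, softmax output layer, optimizer, and loss are our assumptions:

from tensorflow import keras
from tensorflow.keras import layers

def build_fcnn(input_dim, n_models=5, dropout=0.20):
    # Fully connected classifier over flattened joint FFS matrices.
    model = keras.Sequential([
        keras.Input(shape=(input_dim,)),
        layers.Dense(1024, activation="relu"),
        layers.Dropout(dropout),
        layers.Dense(512, activation="relu"),
        layers.Dropout(dropout),
        layers.Dense(64, activation="relu"),
        layers.Dropout(dropout),
        layers.Dense(n_models, activation="softmax"),  # class probabilities for the 5 models
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage sketch: X holds one flattened joint FFS per simulated replicate and
# y the integer label (0-4) of the demographic model that generated it.
# model = build_fcnn(input_dim=X.shape[1])
# model.fit(X, y, epochs=20, validation_split=0.1)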
5.5. Data availability
No novel datasets were generated or analysed during the current study.
Supplementary Material
6. Acknowledgments
We are grateful to Sara Mathieson and Jeff Spence for several useful discussions about neural network architecture and appropriate methods for training neural networks. We also wish to thank Jeff Spence and Matthias Steinrücken for extensive discussions on errors in fragment calling. Iain Mathieson, Sara Mathieson, and Jeff Spence provided invaluable feedback on an early draft of this manuscript that helped improve its clarity. Kelley Harris provided invaluable discussions during the conception and work of this manuscript. We are grateful to Martin Petr and Benjamin Vernot for sharing processed simulation data with us, and for discussions about the impact of selection on Neandertal ancestry. The comments of two anonymous reviewers substantially improved the rigor of this manuscript, particularly regarding fragment calling errors. JGS and FAV were supported by NIH grant R35 GM124745. This research was supported in part by the National Science Foundation through major research instrumentation grant number 1625061 for the Owl’s Nest high performance cluster at Temple University.
[17] Mafessoni, Fabrizio and Prüfer, Kay. Better support for a small effective population size of Neandertals and a long shared history of Neandertals and Denisovans. Proceedings of the National Academy of Sciences, page 201716918, 2017.
|
No novel datasets were generated or analysed during the current study.
Abstract
Neandertals and anatomically modern humans overlapped geographically for a period of over 30,000 years following human migration out of Africa. During this period, Neandertals and humans interbred, as evidenced by Neandertal portions of the genome carried by non-African individuals today. A key observation is that the proportion of Neandertal ancestry is ~12–20% higher in East Asian individuals relative to European individuals. Here, we explore various demographic models that could explain this observation. These include distinguishing between a single admixture event and multiple Neandertal contributions to either population, and the hypothesis that reduced Neandertal ancestry in modern Europeans resulted from more recent admixture with a ghost population that lacked a Neandertal ancestry component (the “dilution” hypothesis). In order to summarize the asymmetric pattern of Neandertal allele frequencies, we compile the joint fragment frequency spectrum (FFS) of European and East Asian Neandertal fragments and compare it to both analytical theory and data simulated under various models of admixture. Using maximum likelihood and machine learning, we found that a simple model of a single admixture does not fit the empirical data and instead favor a model of multiple episodes of gene flow into both European and East Asian populations. These findings indicate more long-term, complex interaction between humans and Neandertals than previously appreciated.
2. Introduction
When anatomically modern humans dispersed out of Africa, they encountered and hybridized with Neandertals [6]. The Neandertal component of the modern human genome is ubiquitous in non-African populations, and yet is quantitatively small, representing on average only ~2% of those genomes [6, 22].
|
yes
|
Anthropology
|
Did Neanderthals interbreed with modern humans?
|
yes_statement
|
"neanderthals" interbred with "modern" "humans".. "modern" "humans" interbred with "neanderthals".
|
https://www.bbc.com/future/article/20210112-heres-what-sex-with-neanderthals-was-like
|
Here's what we know sex with Neanderthals was like - BBC Future
|
He cleared his throat, looked her up and down, and – in an absurdly high-pitched, nasal voice – deployed his best chat-up line. She stared back blankly. Luckily for him, they didn’t speak the same language. They had an awkward laugh and, well, we can all guess what happened next.
Of course, it could have been far less like a scene from a steamy romance novel. Perhaps the woman was actually the Neanderthal and the man belonged to our own species. Maybe their relationship was of the casual, pragmatic kind, because there just weren’t many people around at the time. It’s even been suggested, too, that such hook-ups weren’t consensual.
While we will never know what really happened in this encounter – or others like it – what we can be sure of is that such a couple did get together. Around 37,000-42,000 years later, in February 2002, two explorers made an extraordinary discovery in an underground cave system in the southwestern Carpathian mountains, near the Romanian town of Anina.
Even getting there was no easy task. First they waded neck-deep in an underground river for 200m (656ft). Then came a scuba dive for 30m (98ft) along an underwater passage, followed by a 300-metre (984ft) ascent up to the poarta, or “mouse hole” – an opening through which they entered a previously unknown chamber.
Inside the Peştera cu Oase, or "Cave with Bones", they found thousands of mammalian bones. Over its long history, it’s thought to have primarily been inhabited by male cave bears – extinct relatives of the brown bear – to which they largely belong. Resting on the surface among them was a human jawbone, which radiocarbon dating revealed to be from one of the oldest known early modern humans in Europe.
Evidence for one tryst between early modern humans and Neanderthals has been found in the Southern Carpathian mountains (Credit: NPL/Alamy)
The remains are thought to have washed inside the cave naturally and lain undisturbed ever since. At the time, scientists noticed that, while the jawbone was unmistakeably modern in its appearance, it also contained some unusual, Neanderthal-like features. Years later, this hunch was confirmed.
When scientists analysed DNA extracted from the find in 2015, they found that the individual was male, and likely to have been 6-9% Neanderthal. This is the highest concentration ever encountered in an early modern human, and around three times the amount found in present-day Europeans and Asians, whose genetic makeup is roughly 1-3% Neanderthal.
Because the genome contained large stretches of uninterrupted Neanderthal sequences, the authors calculated that the jaw’s owner is likely to have had a Neanderthal ancestor as recently as four to six generations ago – equivalent to a great-great-grandparent, great-great-great-grandparent or great-great-great-great grandparent. They determined that the liaison probably occurred fewer than 200 years before the time he lived.
In addition to the jawbone, the team found skull fragments from another individual at Peştera cu Oase, who possessed a similar mixture of features. Scientists have not yet been able to extract DNA from these remains, but like the jawbone, it’s thought that they may have belonged to someone who had recent Neanderthal ancestry.
Neanderthal DNA can be found in everyone alive today, including people of African descent, whose ancestors aren’t thought to have come into contact with this group directly
In fact, Neanderthal DNA can be found in everyone alive today, including people of African descent, whose ancestors aren’t thought to have come into contact with this group directly. And the transfer also happened the other way around. In 2016, scientists discovered that Neanderthals from the Altai mountains in Siberia may have shared 1-7% of their genetics with the ancestors of modern humans, who lived roughly 100,000 years ago.
Crucially, though you might think the salacious details of these ancient liaisons have been lost to pre-history, there are still clues around today as to what they might have been like. Here’s everything you’ve ever wanted to know about this titillating episode in human history.
Kissing
In 2017, Laura Weyrich – an anthropologist at Pennsylvania State University – discovered the ghostly signature of a microscopic 48,000-year-old hitchhiker clinging to a prehistoric tooth.
“I look at ancient microbes as a way of learning more about the past, and dental calculus is really the only reliable way to reconstruct the microorganisms that lived within ancient humans,” says Weyrich. She was particularly interested in what Neanderthals were eating and how they interacted with their environment. To find out, she sequenced DNA from the dental plaque on teeth found in three different caves.
Two of the samples were taken from among 13 Neanderthals found at El Sidrón in north-west Spain. The site was recently beset by intrigue when it was revealed that many of these individuals seem to have suffered from congenital abnormalities, such as misshapen kneecaps and vertebrae, and baby teeth which had remained long after childhood. The group is suspected to have been composed of close relatives, who had accumulated recessive genes after a long history of inbreeding. The family met an unfortunate end – their bones are etched with tell-tale signs that they were cannibalised. It’s thought that they were among the last Neanderthals to walk the Earth.
To Weyrich’s surprise, one of the teeth from El Sidrón contained the genetic signature of a bacteria-like microorganism, Methanobrevibacter oralis, which is still found in our mouths to this day. By comparing the Neanderthal version with the modern human version, she was able to estimate that the two had drifted apart around 120,000 years ago.
If Neanderthals and present-day humans had always shared the same oral companions, you would expect this to have happened much, much earlier – at least 450,000 years ago, when the two subspecies took different paths. “What this means is that the microorganism has been transferred since then,” says Weyrich.
It’s impossible to know for sure how this happened, but it could be linked to something else that occurred 120,000 years ago. “For me what’s fascinating is this is also one of the first time periods where we have described interbreeding between humans and Neanderthals,” says Weyrich. “So it's wonderful to see a microbe sort of being wrapped up in that interaction.”
Weyrich explains that one possible route for the transfer is kissing. “When you kiss someone, oral microbes will go back and forth between your mouths,” she says. “It could have happened once but then sort of been somehow magically propagated, if it happened that the group of people who were infected went on to be very successful. But it could also be something that occurred more regularly.”
Is our microbiome working correctly because we picked up microorganisms from Neanderthals? - Laura Weyrich
Another way to transfer your oral microbes is by sharing food. And although there is no direct evidence of a Neanderthal preparing a meal for an early modern human, a romantic meal could have been an alternative source of M. oralis.
For Weyrich, the discovery is exciting because it suggests that our interactions with other types of humans long ago have shaped the communities of microorganisms that we carry around today.
This raises a question for Weyrich: “Is our microbiome working correctly because we picked up microorganisms from Neanderthals?”
For example, while M. oralis tends to be associated with gum disease in modern humans, Weyrich says that it’s been found in lots of prehistoric individuals who had perfectly healthy teeth. In the future, she envisages using the insights gleaned from ancient dental plaque to reconstruct healthier oral microbiomes for people living in the modern world.
Male or female Neanderthals
It’s impossible to say for certain whether it was mostly female Neanderthals scoring with early modern human males, or the other way around – but there are some clues.
In 2008, archaeologists discovered a broken finger bone and single molar tooth in the Denisova Cave in Russia’s Altai Mountains, from which a brand new subspecies of human was revealed. For years, the “Denisovans” were known only from the handful of samples unearthed at this site, along with their DNA, from which scientists discovered that their legacy continues to this day in the genomes of people of East Asian and Melanesian descent.
Around 130,000 years ago, a Neanderthal in what is now Croatia sliced the talon off the toe of an eagle – possibly to make jewellery (Credit: STR/AFP/Getty Images)
Denisovans were a lot more closely related to Neanderthals than present-day humans; the two subspecies may have had ranges that overlapped in Asia for hundreds of thousands of years. This became particularly apparent in 2018, with the discovery of a bone fragment which belonged to a young girl – nicknamed Denny – who had a Neanderthal mother and Denisovan father.
Consequently, it would make sense if the male sex chromosomes of Neanderthals looked similar to those of Denisovans. But when scientists sequenced the DNA from three Neanderthals, who lived 38,000-53,000 years ago, they were surprised to discover that their Y chromosomes had more in common with those of present-day humans.
The researchers say this is evidence of "strong gene flow” between Neanderthals and early modern humans – they were interbreeding rather a lot. So often, in fact, that as Neanderthal numbers dwindled towards the end of their existence, their Y chromosomes may have gone extinct, and been replaced entirely with our own. This suggests that a substantial number of ancestral human men were having sex with female Neanderthals.
But the story doesn’t end there. Other research has shown that almost exactly the same fate befell Neanderthal mitochondria – cellular machinery that help to turn sugars into useable energy. These are exclusively passed down from mothers to their children, so when early modern human mitochondria were found in Neanderthal remains in 2017, it hinted that our ancestors were also having sex with male Neanderthals. This time, the interbreeding is likely to have happened between 270,000 and 100,000 years ago, when humans were mostly confined to Africa.
Sexually transmitted diseases
A few years ago, Ville Pimenoff was studying the sexually transmitted infection human papillomavirus (HPV) when he noticed something odd.
These sexual encounters must have been rather typical in Eurasia, in areas where both human populations were present – Ville Pimenoff
But there is a clear divide globally between where certain variants of this virus are found. Across the majority of the planet, it’s most likely you’ll encounter type A, while in sub-Saharan Africa most people are infected with types B and C. Intriguingly, the pattern exactly matches the distribution of Neanderthal DNA worldwide – not only do people in sub-Saharan Africa carry unusual strains of HPV, but they carry relatively little Neanderthal genetic material.
To find out what was going on, Pimenoff used the genetic diversity among type A today to calculate that it first emerged roughly 60,000 to 120,000 years ago. This makes it much younger than the other kinds of HPV-16 – and crucially, this happens to be around the time that early modern humans emerged from Africa and came into contact with Neanderthals. Though it’s hard to prove definitively, Pimenoff believes they immediately began swapping sexually transmitted diseases – and that the split in the variants of HPV-16 reflects the fact that we acquired type A from their antecedents.
“I tested it thousands of times using computational techniques, and the result was always the same – that this is the most plausible scenario,” says Pimenoff. Based on the way HPV viruses are spread today, he suspects that the virus wasn’t just transferred to humans once, but on many separate occasions.
“It is very unlikely that it just happened once, because then it would be more probable that transmission would not survive further,” says Pimenoff. “These sexual encounters must have been rather typical in Eurasia, in areas where both human populations were present.”
Intriguingly, Pimenoff also believes the acquisition of type A from Neanderthals explains why it’s so cancerous in humans – because we first encountered it relatively recently, our immune systems haven’t yet evolved to be able to clear the infection.
Neanderthals (right) had distinctive facial features, but some skulls have been found with a mixture of traits (Credit: Sabena Jane Blackbird/Alamy)
In fact, sex with Neanderthals might have left us with a number of other viruses, including an ancient relative of HIV. But there’s no need to feel resentful towards our long-lost relatives, because there’s also evidence that we gave them STDs – including herpes.
Sexual organs
Though it might seem crass to wonder what Neanderthal penises and vaginas were like, the genitals of different organisms have been the subject of a vast body of scientific research; at the time of writing, searching for “penis evolution” on Google Scholar returned 98,000 results, while “vagina evolution” yielded 87,000.
It turns out an animal’s sexual organs can reveal a surprising amount about their lifestyle, mating strategy and evolutionary history – so asking questions about their equipment is just another route to understanding them.
The animal kingdom contains a kaleidoscopic array of imaginative designs. These include the argonaut octopus and its worm-like detachable penis, which can swim off alone to mate with females – a practical feature thought to have evolved because the males are only around 10% of the size of their lovers – and the triple vaginas of kangaroos, which make it possible for females to be perpetually pregnant.
One way in which human penises are unusual is that they are smooth. Our closest living relatives, common and bonobo chimpanzees – with whom we share around 99% of our DNA – have “penile spines”. These tiny barbs, which are made from the same substance as skin and hair (keratin), are thought to have evolved to clear out the sperm of competing males, or to lightly chafe the female’s vagina and put her off having sex again for a while.
Penis spines are thought to be most useful in promiscuous species, where they may help males to maximise their chances of reproducing
Back in 2013, scientists discovered that the genetic code for penile spines is lacking from Neanderthal and Denisovan genomes, just as it is from modern humans, suggesting that it vanished from our collective ancestors at least 800,000 years ago. This is significant, because penis spines are thought to be most useful in promiscuous species, where they may help males to compete with others and maximise the chances of reproducing. This has led to speculation that – like us – Neanderthals and Denisovans were mostly monogamous.
Sleeping around
However, there’s some evidence to suggest that Neanderthals did sleep around more than modern humans.
Studies in foetuses have shown that the presence of androgens such as testosterone in the womb can affect a person’s “digit ratio” as an adult – a measure of how the lengths of their index finger and ring fingers compare, calculated by dividing the first by the second. In a higher-testosterone environment, people tend to end up with lower ratios. This is true regardless of biological sex.
Both male and female Neanderthals are known to have interbred with our ancestors (Credit: Lambert/Ullstein Bild/Getty Images)
Back in 2010, a team of scientists noticed a pattern among the closest relatives of humans, too. It turns out chimpanzees, gorillas and orangutans – which are generally more promiscuous – have lower digit ratios on average, while an early modern human found in an Israeli cave and present-day humans had higher ratios (0.935 and 0.957, respectively).
Humans are broadly monogamous, so the researchers suggested that there might be a link between a species’ digit ratio and sexual strategy. If they are right, Neanderthals – who had ratios in between the two groups (0.928) – were slightly less monogamous than both early modern and present-day humans.
Walking off into the sunset
Once a Neanderthal-early-modern-human couple had found each other, they may have settled down near where the man lived, with each generation following the same pattern. Genetic evidence from Neanderthals suggests that households were composed of related men, their partners and children. Women seemed to leave their family home when they found a partner.
Another insight into happily-ever-after scenarios between early modern humans and Neanderthals comes from a study of the genes they left behind in Icelandic people today. Last year, an analysis of the genomes of 27,566 such individuals revealed the ages that Neanderthals tended to have children: while the women were usually older than their early modern human counterparts, the men were generally young fathers.
It’s now thought that the Neanderthals’ extinction roughly 40,000 years ago may have been partly driven by our mutual attraction, as well as factors such as sudden climate change and inbreeding.
One emerging theory is that diseases carried by the two subspecies – such as HPV and herpes – initially formed an invisible barrier, which prevented either from expanding their territory and potentially coming into contact. In the few areas where they did overlap, they interbred and early modern humans acquired useful immunity genes which suddenly made it possible to venture further.
But Neanderthals had no such luck – modelling suggests that if they had a higher burden of disease to begin with, they may have remained vulnerable to these exotic new strains for longer, regardless of interbreeding – and this means they were stuck. Eventually, the ancestors of present-day humans made it to their territories, and wiped them out.
|
Consequently, it would make sense if the male sex chromosomes of Neanderthals looked similar to those of Denisovans. But when scientists sequenced the DNA from three Neanderthals, who lived 38,000-53,000 years ago, they were surprised to discover that their Y chromosomes had more in common with those of present-day humans.
The researchers say this is evidence of "strong gene flow” between Neanderthals and early modern humans – they were interbreeding rather a lot. So often, in fact, that as Neanderthal numbers dwindled towards the end of their existence, their Y chromosomes may have gone extinct, and been replaced entirely with our own. This suggests that a substantial number of ancestral human men were having sex with female Neanderthals.
But the story doesn’t end there. Other research has shown that almost exactly the same fate befell Neanderthal mitochondria – cellular machinery that help to turn sugars into useable energy. These are exclusively passed down from mothers to their children, so when early modern human mitochondria were found in Neanderthal remains in 2017, it hinted that our ancestors were also having sex with male Neanderthals. This time, the interbreeding is likely to have happened between 270,000 and 100,000 years ago, when humans were mostly confined to Africa.
Sexually transmitted diseases
A few years ago, Ville Pimenoff was studying the sexually transmitted infection human papillomavirus (HPV) when he noticed something odd.
These sexual encounters must have been rather typical in Eurasia, in areas where both human populations were present – Ville Pimenoff
But there is a clear divide globally between where certain variants of this virus are found.
|
yes
|
Anthropology
|
Did Neanderthals interbreed with modern humans?
|
yes_statement
|
"neanderthals" interbred with "modern" "humans".. "modern" "humans" interbred with "neanderthals".
|
https://en.wikipedia.org/wiki/Interbreeding_between_archaic_and_modern_humans
|
Interbreeding between archaic and modern humans - Wikipedia
|
In Eurasia, interbreeding between Neanderthals and Denisovans with modern humans took place several times. The introgression events into modern humans are estimated to have happened about 47,000–65,000 years ago with Neanderthals and about 44,000–54,000 years ago with Denisovans.
Neanderthal-derived DNA has been found in the genomes of most or possibly all contemporary populations, varying noticeably by region. It accounts for 1–4% of modern genomes for people outside Sub-Saharan Africa, although estimates vary, and either none or possibly up to 0.3% — according to recent research[3] — for those in Africa. It is highest in East Asians, intermediate in Europeans, and lower in Southeast Asians.[4] According to some research, it is also lower in Melanesians compared to both East Asians and Europeans.[4] However, other research finds higher Neanderthal admixture in Melanesians, as well as in Native Americans, than in Europeans (though not higher than in East Asians).[5]
Denisovan-derived ancestry is largely absent from modern populations in Africa and Western Eurasia. The highest rates, by far, of Denisovan admixture have been found in Oceanian and some Southeast Asian populations. An estimated 4–6% of the genome of modern Melanesians is derived from Denisovans, but the highest amounts detected thus far are found in the Negrito populations of the Philippines. While some Southeast Asian Negrito populations carry Denisovan admixture, others have none, such as the Andamanese. In addition, low traces of Denisovan-derived ancestry have been found in mainland Asia, with an elevated Denisovan ancestry in South Asian populations compared to other mainland populations.[6]
In Africa, archaic alleles consistent with several independent admixture events in the subcontinent have been found. It is currently unknown who these archaic African hominins were.[4]
A 2016 paper in the journal Evolutionary Biology argued that introgression of DNA from other lineages enabled humanity to migrate to, and succeed in, numerous new environments, with the resulting hybridization being an essential force in the emergence of modern humans.[7]
On 7 May 2010, following the genome sequencing of three Vindija Neanderthals, a draft sequence of the Neanderthal genome was published and revealed that Neanderthals shared more alleles with Eurasian populations (e.g. French, Han Chinese, and Papua New Guinean) than with sub-Saharan African populations (e.g. Yoruba and San).[8] According to the authors Green et al. (2010), the observed excess of genetic similarity is best explained by recent gene flow from Neanderthals to modern humans after the migration out of Africa.[8] They estimated the proportion of Neanderthal-derived ancestry to be 1–4% of the Eurasian genome.[8] Prüfer et al. (2013) estimated the proportion to be 1.5–2.1% for non-Africans,[9] while Lohse and Frantz (2014) inferred a higher rate of 3.4–7.3% in Eurasia.[10] In 2017, Prüfer et al. revised their estimate to 1.8–2.6% for non-Africans outside Oceania.[11]
According to a later study by Chen et al. (2020), Africans (specifically, the 1000 Genomes African populations) also have Neanderthal admixture,[12] with this Neanderthal admixture in African individuals accounting for 17 megabases,[12] which is 0.3% of their genome.[3] According to the authors, Africans gained their Neanderthal admixture predominantly from a back-migration by peoples (modern humans carrying Neanderthal admixture) that had diverged from ancestral Europeans (postdating the split between East Asians and Europeans).[12] This back-migration is proposed to have happened about 20,000 years ago.[3] However, some scientists, such as geneticist David Reich, have doubts about how extensive the flow of DNA back to Africa would have been, finding the signal of Neanderthal admixture "really weak".[13]
About 20% of the Neanderthal genome has been found introgressed or assimilated in the modern human population (by analyzing East Asians and Europeans),[14] but the figure has also been estimated at about a third.[15]
A higher Neanderthal admixture was found in East Asians than in Europeans,[14][16][17][18][19] which is estimated to be about 20% more introgression into East Asians.[14][16][19] This could possibly be explained by the occurrence of further admixture events in the early ancestors of East Asians after the separation of Europeans and East Asians,[4][14][16][17][19] dilution of Neanderthal ancestry in Europeans by populations with low Neanderthal ancestry from later migrations,[4][16][19] or natural selection that may have been relatively lower in East Asians than in Europeans.[4][18][19] Studies simulating admixture models indicate that a reduced efficacy of purifying selection against Neanderthal alleles in East Asians could not account for the greater proportion of Neanderthal ancestry of East Asians, thus favoring more-complex models involving additional pulses of Neanderthal introgression into East Asians.[20][21] Such models show a pulse to ancestral Eurasians, followed by separation and an additional pulse to ancestral East Asians.[4] It is observed that there is a small but significant variation of Neanderthal admixture rates within European populations, but no significant variation within East Asian populations.[14] Prüfer et al. (2017) remarked that East Asians carry more Neanderthal DNA (2.3–2.6%) than Western Eurasians (1.8–2.4%).[11]
It was later determined by Chen et al. (2020) that East Asians have 8% more Neanderthal ancestry, revised from the previous reports of 20% more Neanderthal ancestry, compared to Europeans.[12] This stems from the fact that Neanderthal ancestry shared with Africans had been masked, because Africans were thought to have no Neanderthal admixture and were therefore used as reference samples.[12] Thus, any overlap in Neanderthal admixture with Africans resulted in an underestimation of Neanderthal admixture in non-Africans and especially in Europeans.[12] The authors give a single pulse of Neanderthal admixture after the out-of-Africa dispersal as the most parsimonious explanation for the enrichment in East Asians, but they add that variation in Neanderthal ancestry may also be attributed to dilution to account for the now-more-modest differences found.[12] As a proportion of the total amount of Neanderthal sequence for each population, 7.2% of the sequence in Europeans is shared exclusively with Africans, while 2% of the sequence in East Asians is shared exclusively with Africans.[12]
Genomic analysis suggests that there is a global division in Neanderthal introgression between sub-Saharan African populations and other modern human groups (including North Africans) rather than between African and non-African populations.[22] North African groups share a similar excess of derived alleles with Neanderthals as do non-African populations, whereas sub-Saharan African groups are the only modern human populations that generally did not experience Neanderthal admixture.[23] The Neanderthal genetic signal among North African populations was found to vary depending on the relative quantity of autochthonous North African, European, Near Eastern and sub-Saharan ancestry. Using f4 ancestry ratio statistical analysis, the inferred Neanderthal admixture was observed to be: highest among the North African populations with maximal autochthonous North African ancestry such as Tunisian Berbers, where it was at the same level or even higher than that of Eurasian populations (100–138%); high among North African populations carrying greater European or Near Eastern admixture, such as groups in North Morocco and Egypt (~60–70%); and lowest among North African populations with greater Sub-Saharan admixture, such as in South Morocco (20%).[24] Quinto et al. (2012) therefore postulate that the presence of this Neanderthal genetic signal in Africa is not due to recent gene flow from Near Eastern or European populations since it is higher among populations bearing indigenous pre-Neolithic North African ancestry.[25] Low but significant rates of Neanderthal admixture have also been observed for the Maasai of East Africa.[26] After identifying African and non-African ancestry among the Maasai, it can be concluded that recent non-African modern human (post-Neanderthal) gene flow was the source of the contribution since around an estimated 30% of the Maasai genome can be traced to non-African introgression from about 100 generations ago.[17]
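For readers unfamiliar with the statistic, the general forms of the f4 statistic and of the f4-ratio estimate of an admixture proportion (after Patterson et al. 2012) are, roughly, as follows; the specific population configuration used in the study cited above may differ:

f4(A, B; C, D) = E[(p_A − p_B)(p_C − p_D)]

α = f4(A, O; X, C) / f4(A, O; B, C)

where p_P denotes the allele frequency in population P averaged over sites, X is the test population modelled as a mixture of sources related to B and C, A is an unadmixed population closer to the B-related source, O is an outgroup, and α estimates the proportion of X's ancestry derived from the B-related source.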
Based on a high-quality genome sequence of a female Altai Neanderthal, it has been found that the Neanderthal component in non-African modern humans is more closely related to the Mezmaiskaya Neanderthal (North Caucasus) than to the Altai Neanderthal (Siberia) or the Vindija Neanderthals (Croatia).[9] High-coverage sequencing of the genome of a 50,000-year-old female Vindija Neanderthal later showed that the Vindija and Mezmaiskaya Neanderthals did not seem to differ in the extent of their allele-sharing with modern humans.[11] In this case, it was also found that the Neanderthal component in non-African modern humans is more closely related to the Vindija and Mezmaiskaya Neanderthals than to the Altai Neanderthal.[11] These results suggest that a majority of the admixture into modern humans came from Neanderthal populations that had diverged (about 80–100kya) from the Vindija and Mezmaiskaya Neanderthal lineages before the latter two diverged from each other.[11]
Analysis of chromosome 21 of the Altai, El Sidrón (Spain), and Vindija Neanderthals indicates that of these three lineages, only the El Sidrón and Vindija Neanderthals display significant rates of gene flow (0.3–2.6%) into modern humans, suggesting that the El Sidrón and Vindija Neanderthals are more closely related than the Altai Neanderthal to the Neanderthals that interbred with modern humans about 47,000–65,000 years ago.[28] Conversely, significant rates of modern human gene flow into Neanderthals occurred—of the three examined lineages—for only the Altai Neanderthal (0.1–2.1%), suggesting that modern human gene flow into Neanderthals mainly took place after the separation of the Altai Neanderthals from the El Sidrón and Vindija Neanderthals that occurred roughly 110,000 years ago.[28] The findings show that the source of modern human gene flow into Neanderthals originated from a population of early modern humans from about 100,000 years ago, predating the out-of-Africa migration of the modern human ancestors of present-day non-Africans.[28]
No evidence of Neanderthal mitochondrial DNA has been found in modern humans.[29][30][31] This suggests that successful Neanderthal admixture happened in pairings with Neanderthal males and modern human females.[32][33] Possible hypotheses are that Neanderthal mitochondrial DNA had detrimental mutations that led to the extinction of carriers, that the hybrid offspring of Neanderthal mothers were raised in Neanderthal groups and became extinct with them, or that female Neanderthals and male Sapiens did not produce fertile offspring.[32] However, the hypothesized incompatibility between Neanderthals and modern humans is contested by findings that suggest that the Y chromosome of Neanderthals was replaced by an extinct lineage of the modern human Y chromosome, which introgressed into Neanderthals between 100,000 and 370,000 years ago.[34] Furthermore, the study concludes that the replacement of the Y chromosomes and mitochondrial DNA in Neanderthals after gene flow from modern humans is highly plausible given the increased genetic load in Neanderthals relative to modern humans.[34]
As shown in an interbreeding model produced by Neves and Serva (2012), the Neanderthal admixture in modern humans may have been caused by a very low rate of interbreeding between modern humans and Neanderthals, with the exchange of one pair of individuals between the two populations in about every 77 generations.[35] This low rate of interbreeding would account for the absence of Neanderthal mitochondrial DNA from the modern human gene pool as found in earlier studies, as the model estimates a probability of only 7% for a Neanderthal origin of both mitochondrial DNA and Y chromosome in modern humans.[35]
There are large genomic regions with strongly reduced Neanderthal contribution in modern humans due to negative selection,[14][18] partly caused by hybrid male infertility.[18] These regions were most-pronounced on the X chromosome, with fivefold lower Neanderthal ancestry compared to autosomes.[4][18] They also contained relatively high numbers of genes specific to testes.[18] This means that modern humans have relatively few Neanderthal genes that are located on the X chromosome or expressed in the testes, suggesting male infertility as a probable cause.[18] It may be partly affected by hemizygosity of X chromosome genes in males.[4]
Deserts of Neanderthal sequences may also be caused by genetic drift involving intense bottlenecks in the modern human population and background selection as a result of strong selection against deleterious Neanderthal alleles.[4] The overlap of many deserts of Neanderthal and Denisovan sequences suggests that repeated loss of archaic DNA occurred at specific loci.[4]
It has also been shown that Neanderthal ancestry has been selected against in conserved biological pathways, such as RNA processing.[18]
Consistent with the hypothesis that purifying selection has reduced Neanderthal contribution in present-day modern human genomes, Upper Paleolithic Eurasian modern humans (such as the Tianyuan modern human) carry more Neanderthal DNA (about 4–5%) than present-day Eurasian modern humans (about 1–2%).[36]
Rates of selection against Neanderthal sequences varied for European and Asian populations.[4]
In Eurasia, modern humans have adaptive sequences introgressed from archaic humans, which provided a source of advantageous genetic variants that are adapted to local environments and a reservoir for additional genetic variation.[4] Adaptive introgression from Neanderthals has targeted genes involved with keratin filaments, sugar metabolism, muscle contraction, body fat distribution, enamel thickness, and oocyte meiosis, as well as brain size and functioning.[37] There are signals of positive selection, as the result of adaptation to diverse habitats, in genes involved with variation in skin pigmentation and hair morphology.[37] In the immune system, introgressed variants have heavily contributed to the diversity of immune genes, with an enrichment of introgressed alleles that suggests strong positive selection.[37]
Researchers found Neanderthal introgression of 18 genes—several of which are related to UV-light adaptation—within the chromosome 3p21.31 region (HYAL region) of East Asians.[38] The introgressive haplotypes were positively selected only in East Asian populations, rising steadily from 45,000 years BP until a sudden increase of growth rate around 5,000 to 3,500 years BP.[38] They occur at very high frequencies among East Asian populations in contrast to other Eurasian populations (e.g. European and South Asian populations).[38] The findings also suggest that this Neanderthal introgression occurred within the ancestral population shared by East Asians and Native Americans.[38]
Evans et al. (2006) had previously suggested that a group of alleles collectively known as haplogroup D of microcephalin, a critical regulatory gene for brain volume, originated from an archaic human population.[39] The results show that haplogroup D introgressed 37,000 years ago (based on the coalescence age of derived D alleles) into modern humans from an archaic human population that separated 1.1 million years ago (based on the separation time between D and non-D alleles), consistent with the period when Neanderthals and modern humans co-existed and diverged respectively.[39] The high frequency of the D haplogroup (70%) suggests that it was positively selected for in modern humans.[39] The distribution of the D allele of microcephalin is high outside Africa but low in sub-Saharan Africa, which further suggests that the admixture event happened in archaic Eurasian populations.[39] This distribution difference between Africa and Eurasia suggests that the D allele originated from Neanderthals according to Lari et al. (2010), but they found that a Neanderthal individual from the Mezzena Rockshelter (Monti Lessini, Italy) was homozygous for an ancestral allele of microcephalin, thus providing no support that Neanderthals contributed the D allele to modern humans and also not excluding the possibility of a Neanderthal origin of the D allele.[40] Green et al. (2010), having analyzed the Vindija Neanderthals, also could not confirm a Neanderthal origin of haplogroup D of the microcephalin gene.[8]
It has been found that HLA-A*02, A*26/*66, B*07, B*51, C*07:02, and C*16:02 of the immune system were contributed from Neanderthals to modern humans.[41] After migrating out of Africa, modern humans encountered and interbred with archaic humans, which was advantageous for modern humans in rapidly restoring HLA diversity and acquiring new HLA variants that are better adapted to local pathogens.[41]
It is found that introgressed Neanderthal genes exhibit cis-regulatory effects in modern humans, contributing to the genomic complexity and phenotype variation of modern humans.[42] Looking at heterozygous individuals (carrying both Neanderthal and modern human versions of a gene), the allele-specific expression of introgressed Neanderthal alleles was found to be significantly lower in the brain and testes relative to other tissues.[4][42] In the brain, this was most pronounced in the cerebellum and basal ganglia.[42] This downregulation suggests that modern humans and Neanderthals possibly experienced a relatively higher rate of divergence in these specific tissues.[42]
Furthermore, correlating the genotypes of introgressed Neanderthal alleles with the expression of nearby genes, it is found that archaic alleles contribute proportionally more to variation in expression than nonarchaic alleles.[4] Neanderthal alleles affect expression of the immune genes OAS1/2/3 and TLR1/6/10, which can be specific to cell type and is influenced by environmental stimuli.[4]
When European modern humans are examined against the high-coverage Altai Neanderthal genome, the results show that Neanderthal admixture is associated with several changes in cranial and underlying brain morphology, suggesting changes in neurological function through Neanderthal-derived genetic variation.[43] Neanderthal admixture is associated with an expansion of the posterolateral area of the modern human skull, extending from the occipital and inferior parietal bones to bilateral temporal locales.[43] With regard to modern human brain morphology, Neanderthal admixture is positively correlated with an increase in sulcal depth for the right intraparietal sulcus and an increase in cortical complexity for the early visual cortex of the left hemisphere.[43] Neanderthal admixture is also positively correlated with an increase in white and gray matter volume localized to the right parietal region adjacent to the right intraparietal sulcus.[43] In the area overlapping the primary visual cortex gyrification in the left hemisphere, Neanderthal admixture is positively correlated with gray matter volume.[43] The results also show evidence for a negative correlation between Neanderthal admixture and white matter volume in the orbitofrontal cortex.[43]
In Papuans, assimilated Neanderthal inheritance is found in highest frequency in genes expressed in the brain, whereas Denisovan DNA has the highest frequency in genes expressed in bones and other tissues.[44]
Although less parsimonious than recent gene flow, the observation could instead have been due to ancient population sub-structure in Africa: if genetic homogenization within modern humans was incomplete when Neanderthals diverged, the early ancestors of Eurasians could have remained more closely related to Neanderthals than the ancestors of Africans were.[8] On the basis of allele frequency spectrum, it was shown that the recent admixture model had the best fit to the results while the ancient population sub-structure model had no fit—demonstrating that the best model was a recent admixture event that was preceded by a bottleneck event among modern humans – thus confirming recent admixture as the most parsimonious and plausible explanation for the observed excess of genetic similarities between modern non-African humans and Neanderthals.[45] On the basis of linkage disequilibrium patterns, a recent admixture event is likewise confirmed by the data.[46] From the extent of linkage disequilibrium, it was estimated that the last Neanderthal gene flow into early ancestors of Europeans occurred 47,000–65,000 years BP.[46] In conjunction with archaeological and fossil evidence, the gene flow is thought likely to have occurred somewhere in Western Eurasia, possibly the Middle East.[46] Through another approach—using one genome each of a Neanderthal, Eurasian, African, and chimpanzee (outgroup), and dividing it into non-recombining short sequence blocks—to estimate genome-wide maximum-likelihood under different models, an ancient population sub-structure in Africa was ruled out and a Neanderthal admixture event was confirmed.[10]
The early Upper Paleolithic burial remains of a modern human child from Abrigo do Lagar Velho (Portugal) features traits that indicate Neanderthal interbreeding with modern humans dispersing into Iberia.[47] Considering the dating of the burial remains (24,500 years BP) and the persistence of Neanderthal traits long after the transitional period from a Neanderthal to a modern human population in Iberia (28,000–30,000 years BP), the child may have been a descendant of an already heavily admixed population.[47]
The modern human Oase 2 skull (cast depicted), found in Peştera cu Oase, displays archaic traits due to possible hybridization with Neanderthals.[49]
The early modern human Oase 1 mandible from Peștera cu Oase (Romania) of 34,000–36,000 14C years BP presents a mosaic of modern, archaic, and possible Neanderthal features.[50] It displays a lingual bridging of the mandibular foramen, not present in earlier humans except Neanderthals of the late Middle and Late Pleistocene, thus suggesting affinity with Neanderthals.[50] Concluding from the Oase 1 mandible, there was apparently a significant craniofacial change of early modern humans from at least Europe, possibly due to some degree of admixture with Neanderthals.[50]
The earliest (before about 33 ka BP) European modern humans and the subsequent (Middle Upper Paleolithic) Gravettians, falling anatomically largely in line with the earliest (Middle Paleolithic) African modern humans, also show traits that are distinctively Neanderthal, suggesting that a solely Middle Paleolithic modern human ancestry was unlikely for European early modern humans.[51]
Manot 1, a partial calvarium of a modern human that was recently discovered at the Manot Cave (Western Galilee, Israel) and dated to 54.7±5.5 kyr BP, represents the first fossil evidence from the period when modern humans successfully migrated out of Africa and colonized Eurasia.[52] It also provides the first fossil evidence that modern humans inhabited the southern Levant during the Middle to Upper Palaeolithic interface, contemporaneously with the Neanderthals and close to the probable interbreeding event.[52] The morphological features suggest that the Manot population may be closely related to or may have given rise to the first modern humans who later successfully colonized Europe to establish early Upper Palaeolithic populations.[52]
The interbreeding has been discussed ever since the discovery of Neanderthal remains in the 19th century, though earlier writers believed that Neanderthals were a direct ancestor of modern humans. Thomas Huxley suggested that many Europeans bore traces of Neanderthal ancestry, but associated Neanderthal characteristics with primitivism, writing that since they "belong to a stage in the development of the human species, antecedent to the differentiation of any of the existing races, we may expect to find them in the lowest of these races, all over the world, and in the early stages of all races".[53]
Until the early 1950s, most scholars thought Neanderthals were not in the ancestry of living humans.[54][55] Nevertheless, Hans Peder Steensby proposed interbreeding in 1907 in the article Race studies in Denmark. He strongly emphasised that all living humans are of mixed origins.[56] He held that this would best fit observations, and challenged the widespread idea that Neanderthals were ape-like or inferior. Basing his argument primarily on cranial data, he noted that the Danes, like the Frisians and the Dutch, exhibit some Neanderthaloid characteristics, and felt it was reasonable to "assume something was inherited" and that Neanderthals "are among our ancestors."
Carleton Stevens Coon in 1962 found it likely, based upon evidence from cranial data and material culture, that Neanderthal and Upper Paleolithic peoples either interbred or that the newcomers reworked Neanderthal implements "into their own kind of tools."[57]
It has been shown that Melanesians (e.g. Papua New Guinean and Bougainville Islander) share relatively more alleles with Denisovans when compared to other Eurasian-derived populations and Africans.[64] It is estimated that 4% to 6% of the genome in Melanesians derives from Denisovans, while no Eurasians or Africans displayed contributions of the Denisovan genes.[64] It has been observed that Denisovans contributed genes to Melanesians but not to East Asians, indicating that there was interaction between the early ancestors of Melanesians and Denisovans, but that this interaction did not take place in the regions near southern Siberia, where, as of yet, the only Denisovan remains have been found.[64] In addition, Aboriginal Australians also show relatively increased allele sharing with Denisovans, compared to Eurasian and African populations, consistent with the hypothesis of increased admixture between Denisovans and Melanesians.[65]
Reich et al. (2011) produced evidence that the highest presence of Denisovan admixture is in Oceanian populations, followed by many Southeast Asian populations, and none in East Asian populations.[66] There is significant Denisovan genetic material in eastern Southeast Asian and Oceanian populations (e.g. Aboriginal Australians, Near Oceanians, Polynesians, Fijians, eastern Indonesians, Philippine Mamanwa and Manobo), but not in certain western and continental Southeast Asian populations (e.g. western Indonesians, Malaysian Jehai, Andaman Onge, and mainland Asians), indicating that the Denisovan admixture event happened in Southeast Asia itself rather than mainland Eurasia.[66] The observation of high Denisovan admixture in Oceania and the lack thereof in mainland Asia suggests that early modern humans and Denisovans had interbred east of the Wallace Line that divides Southeast Asia according to Cooper and Stringer (2013).[67]
Skoglund and Jakobsson (2011) observed that Oceanians in particular, followed by Southeast Asian populations, have high Denisovan admixture relative to other populations.[68] Furthermore, they found possible low traces of Denisovan admixture in East Asians and no Denisovan admixture in Native Americans.[68] In contrast, Prüfer et al. (2013) found that mainland Asian and Native American populations may have a 0.2% Denisovan contribution, which is about twenty-five times lower than in Oceanian populations.[9] The manner of gene flow to these populations remains unknown.[9] However, Wall et al. (2013) stated that they found no evidence for Denisovan admixture in East Asians.[17]
Findings indicate that the Denisovan gene flow event happened to the common ancestors of Aboriginal Filipinos, Aboriginal Australians, and New Guineans.[66][69] New Guineans and Australians have similar rates of Denisovan admixture, indicating that interbreeding took place prior to their common ancestors' entry into Sahul (Pleistocene New Guinea and Australia), at least 44,000 years ago.[66] It has also been observed that the fraction of Near Oceanian ancestry in Southeast Asians is proportional to the Denisovan admixture, except in the Philippines, where there is a higher proportion of Denisovan admixture relative to Near Oceanian ancestry.[66] Reich et al. (2011) suggested a possible model: an early eastward migration wave of modern humans, some of whom were the common ancestors of Philippine, New Guinean, and Australian populations, interbred with Denisovans; the Philippine early ancestors then diverged; the New Guinean and Australian early ancestors interbred with a part of the same early-migration population that had not experienced Denisovan gene flow; and the Philippine early ancestors later interbred with a part of the population from a much later eastward migration wave (the other part of that migrating population would become East Asians).[66]
Finding components of Denisovan introgression with differing relatedness to the sequenced Denisovan, Browning et al. (2018) suggested that at least two separate episodes of Denisovan admixture have occurred.[70] Specifically, introgression from two distinct Denisovan populations is observed in East Asians (e.g. Japanese and Han Chinese), whereas South Asians (e.g. Telugu and Punjabi) and Oceanians (e.g. Papuans) display introgression from one Denisovan population.[70]
Exploring derived alleles from Denisovans, Sankararaman et al. (2016) estimated that the date of Denisovan admixture was 44,000–54,000 years ago.[5] They also determined that the Denisovan admixture was greatest in Oceanian populations compared to other populations with observed Denisovan ancestry (i.e. America, Central Asia, East Asia, and South Asia).[5] The researchers also made the surprising finding that South Asian populations display elevated Denisovan admixture (when compared to other non-Oceanian populations with Denisovan ancestry), although the highest estimate (found in Sherpas) is still ten times lower than in Papuans.[5] They suggest two possible explanations: either there was a single Denisovan introgression event followed by dilution to different extents, or at least three distinct pulses of Denisovan introgression occurred.[5]
A study in 2021 analyzing archaic ancestry in 118 Philippine ethnic groups discovered an independent admixture event into Philippine Negritos from Denisovans. The Ayta Magbukon in particular were found to possess the highest level of Denisovan ancestry in the world, with ~30%–40% more than even that found in Australians and Papuans (Australo-Melanesians), suggesting that distinct Islander Denisovan populations existed in the Philippines which admixed with modern humans after their arrival.[71]
It has been shown that Eurasians have some but significantly lesser archaic-derived genetic material that overlaps with Denisovans, stemming from the fact that Denisovans are related to Neanderthals—who contributed to the Eurasian gene pool—rather than from interbreeding of Denisovans with the early ancestors of those Eurasians.[16][64]
The skeletal remains of an early modern human from the Tianyuan cave (near Zhoukoudian, China) of 40,000 years BP showed a Neanderthal contribution within the range of today's Eurasian modern humans, but it had no discernible Denisovan contribution.[72] It is a distant relative to the ancestors of many Asian and Native American populations, but post-dated the divergence between Asians and Europeans.[72] The lack of a Denisovan component in the Tianyuan individual suggests that the genetic contribution had been always scarce in the mainland.[9]
There are large genomic regions devoid of Denisovan-derived ancestry, partly explained by infertility of male hybrids, as suggested by the lower proportion of Denisovan-derived ancestry on X chromosomes and in genes that are expressed in the testes of modern humans.[5]
Exploring the immune system's HLA alleles, it has been suggested that HLA-B*73 introgressed from Denisovans into modern humans in western Asia due to the distribution pattern and divergence of HLA-B*73 from other HLA alleles.[41] Even though HLA-B*73 is not present in the sequenced Denisovan genome, HLA-B*73 was shown to be closely associated to the Denisovan-derived HLA-C*15:05 from the linkage disequilibrium.[41] From phylogenetic analysis, however, it has been concluded that it is highly likely that HLA-B*73 was ancestral.[37]
The Denisovan's two HLA-A (A*02 and A*11) and two HLA-C (C*15 and C*12:02) allotypes correspond to common alleles in modern humans, whereas one of the Denisovan's HLA-B allotype corresponds to a rare recombinant allele and the other is absent in modern humans.[41] It is thought that these must have been contributed from Denisovans to modern humans, because it is unlikely to have been preserved independently in both for so long due to HLA alleles' high mutation rate.[41]
Tibetan people received an advantageous EGLN1 and EPAS1 gene variant, associated with hemoglobin concentration and response to hypoxia, for life at high altitudes from the Denisovans.[37] The ancestral variant of EPAS1 upregulates hemoglobin levels to compensate for low oxygen levels—such as at high altitudes—but this also has the maladaptation of increasing blood viscosity.[73] The Denisovan-derived variant, on the other hand, limits this increase in hemoglobin levels, thus resulting in better altitude adaptation.[73] The Denisovan-derived EPAS1 gene variant is common in Tibetans and was positively selected in their ancestors after they colonized the Tibetan plateau.[73]
Rapid decay of fossils in Sub-Saharan African environments makes it currently unfeasible to compare modern human admixture with reference samples of archaic Sub-Saharan African hominins.[4][74]
Ancient DNA data from a ~4,500 BP Ethiopian highland individual,[75] and from Southern (~2,300–1,300 BP) and Eastern and South-Central Africa (~8,100–400 BP), have clarified that some West African populations carry small amounts of excess alleles best explained by an archaic source that is not present in pre-agricultural Eastern African hunter-gatherers, Southern African hunter-gatherer populations, or the genetic gradation between them. The West African groups carrying the archaic DNA include the Yoruba of coastal Nigeria and the Mende of Sierra Leone, indicating that the ancient DNA was acquired long before the spread of agriculture and likely well before the Holocene (starting 11,600 BP). Such an archaic lineage must have separated before the divergence of San ancestors, which is estimated to have begun on the order of 200,000–300,000 years ago.[76][77]
The hypothesis of an archaic lineage in the ancestry of present-day Africans that diverged before the San, Pygmies, and East African hunter-gatherers (and the Eurasians) is supported by evidence independent of the Skoglund findings: long haplotypes with deep divergences from other human haplotypes, reported by Lachance et al. (2012),[74] Hammer et al. (2011),[78] and Plagnol and Wall (2006).[79]
In the archaic DNA differences found by Hammer et al., the Pygmies (of Central Africa) are grouped with the San (of Southern Africa) in contrast to the Yoruba (of West Africa). Further clarification of the presence of archaic DNA in current West African populations came with the extraction and sequencing of DNA from four fossils found at Shum Laka in Cameroon, dating from 8,000 to 3,000 BP. These individuals were found to derive most of their DNA from Central African hunter-gatherers (Pygmy ancestors) and did not share the archaic DNA found in the Yoruba and Mende.[80] This confirmed the pattern of differences, first found by Hammer, between Eastern, Central, and Southern hunter-gatherers on the one hand and the West African groups on the other. In a second study, Lipson et al. (2020) analyzed DNA extracted from six additional Eastern and South-Central African fossils from the last 18,000 years. It was determined that their genetic origins could be accounted for by DNA contributions from Southern, Central, and Eastern hunter-gatherers, and that none of them had the archaic DNA found in the Yoruba.[81]
According to a study published in 2020, there are indications that 2% to 19% (or about 6.6–7.0%) of the DNA of four West African populations may have come from an unknown archaic hominin which split from the ancestor of humans and Neanderthals between 360 kya and 1.02 mya. However, in contrast to the studies of Skoglund and Lipson with ancient African DNA, the study also finds that at least part of this proposed archaic admixture is also present in Eurasians/non-Africans, and that the admixture event or events range from 0 to 124 ka BP, which includes the period before the Out-of-Africa migration and prior to the African/Eurasian split (thus affecting in part the common ancestors of both Africans and Eurasians/non-Africans).[82][83][84] Another recent study, which discovered substantial amounts of previously undescribed human genetic variation, also found ancestral genetic variation in Africans that predates modern humans and was lost in most non-Africans.[85]
^Barras, Colin (2017). "Who are you? How the story of human origins is being rewritten". New Scientist. Archived from the original on 25 August 2017. Retrieved 25 August 2017. Most of us alive today carry inside our cells at least some DNA from a species that last saw the light of day tens of thousands of years ago. And we all carry different bits—to the extent that if you could add them all up, Krause says you could reconstitute something like one-third of the Neanderthal genome and 90 per cent of the Denisovan genome.
^Sánchez-Quinto, F.; Botigué, L.R.; Civit, S.; Arenas, C.; Ávila-Arcos, M.C.; Bustamante, C.D.; et al. (2012). "North African Populations Carry the Signature of Admixture with Neandertals". PLOS ONE. 7 (10): e47765. Bibcode:2012PLoSO...747765S. doi:10.1371/journal.pone.0047765. PMC3474783. PMID23082212. We found that North African populations have a significant excess of derived alleles shared with Neandertals, when compared to sub-Saharan Africans. This excess is similar to that found in non-African humans, a fact that can be interpreted as a sign of Neandertal admixture. Furthermore, the Neandertal's genetic signal is higher in populations with a local, pre-Neolithic North African ancestry. Therefore, the detected ancient admixture is not due to recent Near Eastern or European migrations. Sub-Saharan populations are the only ones not affected by the admixture event with Neandertals.
^Sánchez-Quinto, F.; Botigué, L.R.; Civit, S.; Arenas, C.; Ávila-Arcos, M.C.; Bustamante, C.D.; et al. (2012). "North African Populations Carry the Signature of Admixture with Neandertals". PLOS ONE. 7 (10): e47765. Bibcode:2012PLoSO...747765S. doi:10.1371/journal.pone.0047765. PMC3474783. PMID23082212. North African populations have a complex genetic background. In addition to an autochthonous genetic component, they exhibit signals of European, sub-Saharan and Near Eastern admixture as previously described[...] Tunisian Berbers and Saharawi are those populations with highest autochthonous North African component[...] The results of the f4 ancestry ratio test (Table 2 and Table S1) show that North African populations vary in the percentage of Neandertal inferred admixture, primarily depending on the amount of European or Near Eastern ancestry they present (Table 1). Populations like North Morocco and Egypt, with the highest European and Near Eastern component (~40%), have also the highest amount of Neandertal ancestry (~60–70%) (Figure 3). On the contrary, South Morocco that exhibits the highest Sub-Saharan component (~60%), shows the lowest Neandertal signal (20%). Interestingly, the analysis of the Tunisian and N-TUN populations shows a higher Neandertal ancestry component than any other North African population and at least the same (or even higher) as other Eurasian populations (100–138%) (Figure 3).
^Arun Durvasula; Sriram Sankararaman (2020). "Recovering signals of ghost archaic introgression in African populations". Science Advances. 6 (7): eaax5097. Bibcode:2020SciA....6.5097D. doi:10.1126/sciadv.aax5097. PMC7015685. PMID32095519. "Non-African populations (Han Chinese in Beijing and Utah residents with northern and western European ancestry) also show analogous patterns in the CSFS, suggesting that a component of archaic ancestry was shared before the split of African and non-African populations...One interpretation of the recent time of introgression that we document is that archaic forms persisted in Africa until fairly recently. Alternately, the archaic population could have introgressed earlier into a modern human population, which then subsequently interbred with the ancestors of the populations that we have analyzed here. The models that we have explored here are not mutually exclusive, and it is plausible that the history of African populations includes genetic contributions from multiple divergent populations, as evidenced by the large effective population size associated with the introgressing archaic population...Given the uncertainty in our estimates of the time of introgression, we wondered whether jointly analyzing the CSFS from both the CEU (Utah residents with Northern and Western European ancestry) and YRI genomes could provide additional resolution. Under model C, we simulated introgression before and after the split between African and non-African populations and observed qualitative differences between the two models in the high-frequency–derived allele bins of the CSFS in African and non-African populations (fig. S40). Using ABC to jointly fit the high-frequency–derived allele bins of the CSFS in CEU and YRI (defined as greater than 50% frequency), we find that the lower limit on the 95% credible interval of the introgression time is older than the simulated split between CEU and YRI (2800 versus 2155 generations B.P.), indicating that at least part of the archaic lineages seen in the YRI are also shared with the CEU..."
^ Supplementary Materials for "Recovering signals of ghost archaic introgression in African populations" (Archived 7 December 2020 at the Wayback Machine), section S8.2: "We simulated data using the same priors in Section S5.2, but computed the spectrum for both YRI [West African Yoruba] and CEU [a population of European origin]. We found that the best fitting parameters were an archaic split time of 27,000 generations ago (95% HPD: 26,000–28,000), admixture fraction of 0.09 (95% HPD: 0.04–0.17), admixture time of 3,000 generations ago (95% HPD: 2,800–3,400), and an effective population size of 19,700 individuals (95% HPD: 19,300–20,200). We find that the lower bound of the admixture time is further back than the simulated split between CEU and YRI (2155 generations ago), providing some evidence in favor of a pre-Out-of-Africa event. This model suggests that many populations outside of Africa should also contain haplotypes from this introgression event, though detection is difficult because many methods use unadmixed outgroups to detect introgressed haplotypes [Browning et al., 2018, Skov et al., 2018, Durvasula and Sankararaman, 2019] (5, 53, 22). It is also possible that some of these haplotypes were lost during the Out-of-Africa bottleneck."
^Bergström, A; McCarthy, S; Hui, R; Almarri, M; Ayub, Q (2020). "Insights into human genetic variation and population history from 929 diverse genomes". Science. 367 (6484): eaay5012. doi:10.1126/science.aay5012. PMC7115999. PMID32193295. "An analysis of archaic sequences in modern populations identifies ancestral genetic variation in African populations that likely predates modern humans and has been lost in most non-African populations...We found small amounts of Neanderthal ancestry in West African genomes, most likely reflecting Eurasian admixture. Despite their very low levels or absence of archaic ancestry, African populations share many Neanderthal and Denisovan variants that are absent from Eurasia, reflecting how a larger proportion of the ancestral human variation has been maintained in Africa."
|
Conversely, significant rates of modern human gene flow into Neanderthals occurred—of the three examined lineages—for only the Altai Neanderthal (0.1–2.1%), suggesting that modern human gene flow into Neanderthals mainly took place after the separation of the Altai Neanderthals from the El Sidrón and Vindija Neanderthals that occurred roughly 110,000 years ago.[28] The findings show that the source of modern human gene flow into Neanderthals originated from a population of early modern humans from about 100,000 years ago, predating the out-of-Africa migration of the modern human ancestors of present-day non-Africans.[28]
No evidence of Neanderthal mitochondrial DNA has been found in modern humans.[29][30][31] This suggests that successful Neanderthal admixture happened in pairings with Neanderthal males and modern human females.[32][33] Possible hypotheses are that Neanderthal mitochondrial DNA had detrimental mutations that led to the extinction of carriers, that the hybrid offspring of Neanderthal mothers were raised in Neanderthal groups and became extinct with them, or that female Neanderthals and male Sapiens did not produce fertile offspring.[32] However, the hypothesized incompatibility between Neanderthals and modern humans is contested by findings that suggest that the Y chromosome of Neanderthals was replaced by an extinct lineage of the modern human Y chromosome, which introgressed into Neanderthals between 100,000 and 370,000 years ago. [
|
yes
|
Anthropology
|
Did Neanderthals interbreed with modern humans?
|
yes_statement
|
"neanderthals" interbred with "modern" "humans".. "modern" "humans" interbred with "neanderthals".
|
https://www.nhm.ac.uk/discover/news/2016/february/earliest-evidence-humans-breeding-neanderthals.html
|
Earliest evidence of modern humans breeding with Neanderthals ...
|
Modern human DNA has been found in the genes of a Neanderthal woman from the Altai Mountains in Siberia.
The research, published in the journal Nature, is the first evidence of DNA from Homo sapiens entering a Neanderthal population instead of the other way around.
Until now, scientists thought the earliest interbreeding between the two happened after 60,000 years ago. But the new findings suggest the woman's Neanderthal ancestors bred with earlier modern humans much earlier - about 100,000 years ago.
Prof Stringer says: 'So far there has been no evidence suggesting modern humans and Neanderthals met and interbred before 60,000 years ago.
'Some human fossils have been found in Israel from about 100,000 years ago, but it is generally agreed that this was a failed dispersal from Africa, and it went no further.
'But this study shows the increasing power of genetics, and it means the search is on for further traces of these mysterious early moderns, and their Neanderthal relatives in Asia.'
Migration from Africa
Comparison of Neanderthal and modern human DNA suggests they diverged from a common ancestor at least 430,000 years ago.
Neanderthals, one of our closest extinct relatives, continued to evolve in Europe and Asia, while our ancestors evolved in Africa. It is generally agreed that it wasn't until about 60,000 years ago that modern humans migrated across Eurasia, breeding with their Neanderthal cousins as they did so.
The older human fossils from the Israeli sites of Skhul and Qafzeh, dated to around 100,000 years ago, are usually regarded as representing a failed dispersal of a small group of modern humans from Africa, which did not establish itself further afield.
But the new findings provide a surprising source of evidence for the presence of modern humans in Asia before 60,000 years ago.
Together with the recent dating of 47 modern human teeth in southern China to at least 80,000 years ago, the finding suggests that modern humans did successfully migrate beyond Israel into Asia before the 60,000-year-old dispersal.
Earlier breeding
Geneticists at the Max Planck Institute for Evolutionary Anthropology and their colleagues compared the genes of a Neanderthal from the Altai Mountains in Siberia with those of two Neanderthals from Spain and Croatia.
They found that a population of modern humans contributed genetically to the ancestors of the Siberian Neanderthal roughly 100,000 years ago.
But no modern human DNA has yet been identified in the European Neanderthals.
A model of an early modern human (left) and a Neanderthal (right).
The findings suggest that as well as later interbreeding, the ancestors of Neanderthals from the Altai Mountains and early modern humans met and interbred, somewhere in southern Asia, many thousands of years earlier than previously thought.
Anywhere in southern Asia could have been the location of this early interbreeding, since scientists don't know how widespread Neanderthals and early modern humans might have been in the regions between Arabia and China at this time.
Prof Stringer says: 'At the moment we simply don't know how these matings happened and the possibilities range from relatively peaceful exchanges of partners, to one group raiding another and stealing females, through to adopting abandoned or orphaned babies.
'Eventually, geneticists should be able to show if the transfer of DNA in either direction was mainly via males, females, or about equal, but it will need a lot more data before that becomes possible.
'We must also hope for discoveries of significant human fossils from the many 'empty' areas of southern Asia, in order to properly map the people who were there before modern humans made their successful dispersal across the region about 55,000 years ago.'
|
Modern human DNA has been found in the genes of a Neanderthal woman from the Altai Mountains in Siberia.
The research, published in the journal Nature, is the first evidence of DNA from Homo sapiens entering a Neanderthal population instead of the other way around.
Until now, scientists thought the earliest interbreeding between the two happened after 60,000 years ago. But the new findings suggest the woman's Neanderthal ancestors bred with earlier modern humans much earlier - about 100,000 years ago.
Prof Stringer says: 'So far there has been no evidence suggesting modern humans and Neanderthals met and interbred before 60,000 years ago.
'Some human fossils have been found in Israel from about 100,000 years ago, but it is generally agreed that this was a failed dispersal from Africa, and it went no further.
'But this study shows the increasing power of genetics, and it means the search is on for further traces of these mysterious early moderns, and their Neanderthal relatives in Asia.'
Migration from Africa
Comparison of Neanderthal and modern human DNA suggests they diverged from a common ancestor at least 430,000 years ago.
Neanderthals, one of our closest extinct relatives, continued to evolve in Europe and Asia, while our ancestors evolved in Africa. It is generally agreed that it wasn't until about 60,000 years ago that modern humans migrated across Eurasia, breeding with their Neanderthal cousins as they did so.
The older human fossils from the Israeli sites of Skhul and Qafzeh, dated to around 100,000 years ago, are usually regarded as representing a failed dispersal of a small group of modern humans from Africa, which did not establish itself further afield.
|
yes
|
Anthropology
|
Did Neanderthals interbreed with modern humans?
|
yes_statement
|
"neanderthals" interbred with "modern" "humans".. "modern" "humans" interbred with "neanderthals".
|
http://humanorigins.si.edu/evidence/genetics/ancient-dna-and-neanderthals
|
Ancient DNA and Neanderthals | The Smithsonian Institution's ...
|
DNA (deoxyribonucleic acid) is arguably one of the most useful tools that scientists can use to understand living organisms. Our genetic code can tell us a lot about who we are, where we come from, and even what diseases we may be predisposed to acquiring. When studying evolution, DNA is especially important in its application to identifying and separating organisms into species. However, DNA is a fragile molecule, and it degrades over time. For most fossil species, there is essentially no hope of ever acquiring DNA from their fossils, so answers to questions about their appearance, physiology, population structure, and more may never be fully answerable. For more recently extinct species, scientists have extracted, and continue to extract, ancient DNA (aDNA), which they use to reconstruct the genomes of long-gone ancestors and relatives. One such species is Neanderthals, Homo neanderthalensis.
Neanderthals were the first species of fossil hominins discovered and have secured their place in our collective imagination ever since. The first Neanderthal fossils were found in Engis, Belgium in 1829, but not identified as belonging to Neanderthals until almost 100 years later. The first fossils to be called Neanderthals were found in 1856 in Germany, at a site in the Neander Valley (which is where Neanderthals get their name). Neanderthals diverged from modern humans around 500,000 years ago, likely evolving outside of Africa. Most ancestors of Homo sapiens remained in Africa until around 100,000 years ago, when modern humans began migrating outwards. In that time, Neanderthals evolved many unique adaptations that helped them survive in the cold environments of Europe and Asia. Their short limbs and torso helped conserve heat, and their wide noses helped warm and humidify air as they breathed it in. Despite these differences, modern humans and Neanderthals are very closely related and looked similar. We even overlapped with each other, living in the same place at roughly the same time in both the Middle East and Europe. If this is the case, why did Neanderthals go extinct while we survived? We can use DNA to help answer this question and others, including:
What was the relationship between Neanderthals and anatomically modern humans?
Did Neanderthals and modern humans interbreed? If so, where and to what degree?
Did Neanderthals contribute to the modern human genome? How much?
What do the Neanderthal genes in the human genome actually do?
Are there any other species like Neanderthals that we have DNA evidence for?
Scientists answer these questions by comparing genomes as whole, as well as specific genes, between humans and Neanderthals. Before getting into the specifics of Neanderthal DNA, it is important to appreciate the structure of DNA itself, why it is so important, and why aDNA can be so difficult to work with.
Fast Facts
DNA degrades over time, so is only available for recently extinct species
Neanderthals and modern humans shared habitats in Europe, Asia, and the Middle East
We can study Neanderthal and modern human DNA to see if they interbred
DNA: The Language of Life
DNA structure and function
You may recognize the basic structure of DNA: two strands arranged in a double-helix pattern with individual bases forming rungs, like a twisting ladder. These bases are adenine (A), thymine (T), guanine (G), and cytosine (C). They form complementary pairs on opposite ends of each ladder rung: adenine across from thymine and cytosine across from guanine. For example, if one side of the twisting ladder reads AATG, the opposing side will read TTAC. It is the sequence of these individual base pairs that makes up our genetic code, or our genome. Errors can occur when DNA is unwound to be replicated, with one or more bases being deleted, substituted for others, or newly added. Such errors are called mutations and range from being essentially harmless to deadly.
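As a concrete illustration of the base-pairing rule just described, the short sketch below builds the complementary strand for the AATG example and shows a substitution mutation. The helper names are assumptions made for this example, not code from the source.

```python
# Sketch of the base-pairing rule: given one strand, the opposite strand
# is its complement (A<->T, C<->G).

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    return "".join(COMPLEMENT[base] for base in strand)

print(complement("AATG"))   # -> TTAC, matching the example in the text

# A substitution mutation: one base copied incorrectly during replication.
original = "AATG"
mutated = original[:3] + "C"   # last G replaced by C -> "AATC"
print(original, "->", mutated)
```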
The main function of DNA is to control the production, timing, and quantity of proteins produced by each cell. This process is called protein synthesis and comes in two main stages: transcription and translation. When the cell needs to produce a protein, an enzyme called RNA polymerase ‘unzips’ the DNA double-helix and aids in pairing RNA (ribonucleic acid, a molecule related to DNA) bases to the complementary DNA sequence. This first step is called transcription, the product of which is a single-sided strand of RNA that exits the cell. This messenger RNA, or mRNA, goes into the cell’s cytoplasm to locate an organelle called a ribosome where the genetic information in the mRNA can be translated into a protein. The process of translation involves another kind of RNA, transfer RNA or tRNA, binding to the base sequences on the mRNA. tRNA is carrying amino acids, molecules that will make up the final protein, binding in sequence to create an amino acid chain. This amino acid chain will then twist and fold into the final protein.
Base pairs are arranged in groups of three, or codons, on the mRNA and tRNA. Each codon codes for a single amino acid. Each individual amino acid can be coded for by more than one codon. For example, both AAA and AAG code for the same amino acid, lysine. Therefore, a mutation changing the last A to a G will be functionally meaningless. This is known as a silent, or synonymous, change. If that last A in the codon mutated to a C, however, the codon AAC codes for asparagine, a different amino acid. This new amino acid could lead to the formation of a completely new protein or make the amino acid chain unable to form a protein at all. This is known as a nonsynonymous change. Nonsynonymous changes are the basis for diversity within a gene pool on which evolution acts.
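The synonymous versus nonsynonymous distinction can be made concrete with a tiny slice of the codon table. The sketch below uses only the three codons named above (AAA, AAG, AAC); the `classify_mutation` helper is a hypothetical illustration, not code from the source.

```python
# Sketch of codon translation, restricted to the three codons in the text.

CODON_TABLE = {
    "AAA": "Lys",  # lysine
    "AAG": "Lys",  # lysine (synonymous with AAA)
    "AAC": "Asn",  # asparagine (nonsynonymous relative to AAA)
}

def classify_mutation(codon_before: str, codon_after: str) -> str:
    before = CODON_TABLE[codon_before]
    after = CODON_TABLE[codon_after]
    return "synonymous" if before == after else f"nonsynonymous ({before} -> {after})"

print(classify_mutation("AAA", "AAG"))  # synonymous: still lysine
print(classify_mutation("AAA", "AAC"))  # nonsynonymous: lysine -> asparagine
```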
The total DNA sequence is made up of base pairs, but not all sequences of base pairs serve the same function. Not all parts of the DNA sequence directly code for protein. Base pair sequences within DNA can be split into exons, sequences that directly code for proteins, and introns, sequences that do not directly code for a specific protein. The exon portion of our genome is collectively called the exome, and accounts for only about 1% of our total DNA. Exons and introns together form genes, sequences that code for a protein. On average there are 8.8 exons and 7.8 introns in each gene. The noncoding, or intron, parts of DNA used to be called “junk DNA,” random or repeating sequences that did not seem to code for anything. Recent research has shown that the majority of the genome does serve a function even if not coding for protein synthesis. These intron sequences can help regulate when genes are turned on or off, control how DNA winds itself to form chromosomes, be remnant clues of an organism’s evolutionary history, or serve other noncoding functions.
Types of DNA
Most of our total genome is made up of nuclear DNA, or the genetic material located in the nucleus of a cell. This DNA forms chromosomes, X-shaped bundles of DNA that separate during cell division. Homo sapiens have 23 pairs of chromosomes. Nuclear DNA is directly inherited from both parents, with 50% coming from each of an organism's biological parents. Therefore, both parents' lineages are represented by nuclear DNA, with one exception. One of those pairs is the sex chromosomes. Everyone gets some combination of X (inherited from either parent) and Y (inherited only from the male parent) chromosomes, which determine an organism's biological sex. These combinations can come in a variety of possible alternatives outside of XX and XY, including XXY, X, and others. Because the Y chromosome is only inherited from a biological male parent, the sequence of the Y chromosome can be used to trace patrilineal ancestry.
DNA is also found in the mitochondria, an organelle colloquially referred to as the "powerhouse of the cell." This mitochondrial DNA, or mtDNA, is much smaller than the nuclear genome, comprising only about 37 genes. mtDNA is only inherited from an organism's biological female parent and can be used to trace matrilineal ancestry. Because both Y-chromosome DNA and mtDNA are smaller and inherited from only one parent, and thus less subject to mutations and changes, they are more useful in tracing lineages through deep time. However, they pale in comparison to the entire nuclear genome in terms of size and available base sequences to analyze.
Fast Facts
Our genome is made up of sequences of base pairs within DNA, organized into introns and exons
DNA codes for the synthesis of proteins that control almost all aspects of our biology
DNA can be found in 23 chromosome pairs in the nucleus, or in the mitochondria
Both nuclear and mitochondrial DNA have specific uses in genetic analyses
aDNA: A Window into Our Genetic Past
Challenges in Extracting Ancient DNA
Recall that DNA is made up of base pair sequences that are chemically bonded to the sides of the double-helix structure forming a sort of twisting ladder. As an organic molecule, the component parts of that twisting ladder are subject to degradation over time. Without the functioning cells of a living organism to fix these issues and make new DNA, DNA can degrade into meaningless components somewhat rapidly. While DNA is abundant and readily extracted in living organisms (you can even do your own at-home experiment to extract DNA! https://learn.genetics.utah.edu/content/labs/extraction/howto/) finding useable DNA in extinct organisms gets harder and harder the further back in time that organism died.
The record for the oldest DNA extracted used to go to an ancient horse, dating to around 500,000-700,000 years old (Miller and Lambert 2013). However, in 2021 this was blown out of the water with the announcement of mammoth DNA extracted from specimens over 1 million years old found in eastern Siberian permafrost, permanently frozen ground (van der Valk et al., 2021). These cases of extreme DNA preservation are rare and share a few important factors in common: the specimens are found in very cold, very dry environments, typically buried in permafrost or frozen in caves. The oldest hominin DNA recovered comes from a Neanderthal around 400,000 years old (Meyer et al. 2016), near the beginnings of the Neanderthal species. Finding older DNA in other hominins is unlikely as for most of our evolutionary history hominins lived in the warm, sometimes wet, tropics and subtropics of Africa and Asia where DNA does not preserve well.
When scientists are lucky enough to find a specimen that may preserve aDNA, they must take the utmost care to extract it in such a way as to preserve it and prevent contamination. Just because aDNA is preserved does not mean it is preserved perfectly; it still decomposes and degrades over time, just at a slower rate in cool, dry environments. Because of this, there is always going to be much less DNA from the old organism than there is in even the loose hair and skin cells from the scientists excavating it. Because of this, there are stringent guidelines in place for managing aDNA extraction in the field that scientists must follow (Gilbert et al., 2005 for example). In hominins, this is even more important since human and Neanderthal DNA are so similar that most sequences will be indistinguishable from each other.
Challenges in Sequencing aDNA
When aDNA does preserve, it is often highly fragmented and degraded, and has undergone substantial changes from how the DNA appeared in the living organism. In order to sequence the DNA, or read the base pair coding, these damages and changes have to be taken into account and fixed wherever possible. aDNA comes in tattered, fragmented strands that are difficult to read and analyze. One way scientists deal with this is to amplify the aDNA that is preserved so that it is more readily accessible, via a process known as polymerase chain reaction (PCR). PCR essentially forces the DNA to self-replicate exponentially so that there are many more copies of the same sequence to compare. Due to this exponential duplication, it is especially important that there be no contamination of modern DNA in the sample. The amplified sequences can then be compared and aligned to create longer sequences, up to and including entire genes and genomes.
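A rough sketch of why PCR makes contamination so dangerous: under idealized doubling, copy number grows exponentially with the number of cycles, so a single stray modern molecule is amplified just as strongly as the ancient target. The function below is a simplified model for illustration, not a description of any particular lab protocol.

```python
# Sketch of idealized PCR amplification: each cycle roughly doubles the
# number of copies, so copy number grows as (1 + efficiency) ** cycles.

def pcr_copies(starting_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Idealized copy count after a given number of PCR cycles."""
    return starting_copies * (1 + efficiency) ** cycles

print(f"{pcr_copies(10, 30):.2e}")  # ~1e10 copies from only 10 starting molecules
# A single contaminating modern DNA molecule would be amplified just as
# exponentially, which is why contamination control matters so much for aDNA.
```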
The component parts of DNA also degrade over time. One example is deamination, when cytosine bases degrade into a thymine molecule and guanine bases degrade into an adenine. This could potentially lead to misidentification of sequences, but scientists have developed chemical methods to reverse these changes. Comparison between closely related genomes, such as humans and Neanderthals, can also identify where deamination may have occurred in sequences that do not vary between the two species. Deamination can actually be useful because it is an excellent indicator that the sample you are looking at is genuine aDNA and not DNA from a contaminated source.
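A minimal sketch of how deamination damage can be flagged, assuming a trusted reference sequence is available for comparison: C-to-T and G-to-A mismatches between the reference and an ancient read are candidate damage sites rather than true evolutionary substitutions. The sequences and the `candidate_damage_sites` helper below are illustrative assumptions, not code or data from the source.

```python
# Sketch: flagging positions consistent with deamination damage.
# Cytosine (C) deaminates and is read as thymine (T); on the complementary
# strand this appears as guanine (G) read as adenine (A).

DEAMINATION_PATTERNS = {("C", "T"), ("G", "A")}  # (reference base, ancient read base)

def candidate_damage_sites(reference: str, ancient_read: str):
    return [
        (i, ref, obs)
        for i, (ref, obs) in enumerate(zip(reference, ancient_read))
        if ref != obs and (ref, obs) in DEAMINATION_PATTERNS
    ]

# Toy sequences for illustration only.
reference = "ACGTCCGTA"
ancient_read = "ACGTTCATA"
print(candidate_damage_sites(reference, ancient_read))
# -> [(4, 'C', 'T'), (6, 'G', 'A')]: both consistent with deamination damage
```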
aDNA extraction and sequencing is inherently destructive and requires destroying at least part of the fossil sample you are attempting to extract DNA from. That is something that paleoanthropologists want to avoid whenever possible! To justify destroying a fossil to extract aDNA, it is common practice to first test the technique on other non-hominin fossils from the same site to confirm that DNA is accessible and in reasonable quantity and quality. Testing aDNA from other sources, such as a cave bear at a Neanderthal site, can also identify any potential sources of contamination more easily, since cave bears and humans are more distantly related.
Fast Facts:
DNA preserves best in cold, dry environments
The oldest DNA recovered is over 1 million years old, but the oldest hominin DNA is only ~400,000 years old
aDNA must be destructively sampled, amplified, and analyzed prior to looking at the sequence
Decoding the Neanderthal Genome
Neanderthal skull La Ferrassie 1 from La Ferrassie, France
(Copyright Smithsonian Institution)
Neanderthal mtDNA
The first analysis of any Neanderthal DNA was mitochondrial DNA (mtDNA), published in 1997. The sample was taken from the first Neanderthal fossil discovered, found in Feldhofer Cave in the Neander Valley in Germany. A small sample of bone was ground up to extract mtDNA, which was then replicated and analyzed.
Researchers compared the Neanderthal mtDNA to modern human and chimpanzee mtDNA sequences and found that the Neanderthal mtDNA sequences were substantially different from both (Krings et al. 1997, 1999). Most human sequences differ from each other by an average of 8.0 substitutions, while the human and chimpanzee sequences differ by about 55.0 substitutions. The Neanderthal and modern human sequences differed by approximately 27.2 substitutions. Using this mtDNA information, the last common ancestor of Neanderthals and modern humans dates to approximately 550,000 to 690,000 years ago, which is about four times older than the modern human mtDNA pool. Since this study was completed, many more samples of Neanderthal mtDNA have been replicated and studied.
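The substitution counts quoted above come from pairwise comparisons of aligned sequences. A minimal sketch of that kind of count is shown below; the short fragments are made up for illustration and are not real mtDNA.

```python
# Sketch of counting pairwise substitutions between aligned sequences.

def count_substitutions(seq_a: str, seq_b: str) -> int:
    """Count differing positions between two aligned, equal-length sequences."""
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

# Made-up fragments, much shorter than a real mtDNA genome.
human_a = "ACGTACGTACGTACGT"
human_b = "ACGTACGTACGTACTT"   # few differences: within-human comparison
neanderthal = "ACGAACGTTCGTACTA"   # more differences: human vs. Neanderthal

print(count_substitutions(human_a, human_b))      # small count
print(count_substitutions(human_a, neanderthal))  # larger count
```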
Sequencing the Complete Neanderthal Mitochondrial Genome
After successfully sequencing large amounts of mtDNA, a team led by Svante Pääbo from the Max Planck Institute reported the first complete mitochondrial DNA (mtDNA) sequence for a Neanderthal (Green et al. 2008). The sample was taken from a 38,000 year old Neanderthal from Vindija Cave, Croatia. The complete mtDNA sequence allowed researchers to compare this Neanderthal mtDNA to modern human mtDNA to see if any modern humans carried the mtDNA from a related group to the Neanderthal group.
Later, Svante Pääbo's lab sequenced the entire mitochondrial genome of five more Neanderthals (Briggs et al. 2009). Sequences came from two individuals from the Neander Valley in Germany and one each from Mezmaiskaya Cave in Russia, El Sidrón Cave in Spain, and Vindija Cave in Croatia. Though the Neanderthal samples came from a wide geographic area, the Neanderthal mtDNA sequences were not particularly genetically diverse. The most divergent Neanderthal sequence came from the Mezmaiskaya Cave Neanderthal from Russia, which is the oldest and easternmost specimen. Further analysis and sampling of more individuals has led researchers to believe that this divergence was more closely related to age than to population-wide variance (Briggs et al. 2009). On average, Neanderthal mtDNA genomes differ from each other by 20.4 bases and are only 1/3 as diverse as modern humans (Briggs et al. 2009). The low diversity might signal a small population size.
There is evidence that some other hominin contributed to the Neanderthal mtDNA gene pool around 270,000 years ago (Posth et al., 2017). A femur discovered in Germany had its mtDNA genotyped and it was found that there was introgression from a non-Neanderthal African hominin, either Homo sapiens or closely related to us, around 270,000 years ago. This mitochondrial genome is also highly divergent from the Neanderthal average discussed previously, indicating that Neanderthals may have been much more genetically diverse in their more distant past.
As for Neanderthal introgression into the modern human mtDNA genome, it is possible that the evidence of such admixture is obscured for a variety of reasons (Wang et al 2013). Primary among these reasons is sample size: There are to date only a dozen or so Neanderthal mtDNA sequences that have been sampled. Because the current sample of Neanderthal mtDNA is so small, it is possible that researchers simply have not yet found the mtDNA in Neanderthals that corresponds to that of modern humans.
Map of Neanderthal extent througout Eurasia.
Neanderthal Nuclear DNA
There have been many efforts to sequence Neanderthal nuclear genes, with an eventual goal to sequence as much of the Neanderthal genome as possible. In 2014, the complete genome of a Neanderthal from the Altai Mountains in Siberia was published (Prufer et al., 2014). This female individual’s genome showed that her parents were likely half siblings and that her genetic line showed evidence of high rates of incestuous pairings. It is unclear whether this is due to her living in a small and isolated population or if other factors may have influenced the lineage’s inbreeding. Their analysis also showed that this individual was closely related to both modern humans and the Denisovans, another ancient human population. By their analysis, there was only a very small margin by which Neanderthal and Denisovan DNA differed exclusively from modern humans.
Fast Facts:
Neanderthals are genetically distinct from modern humans, but are more closely related to us than chimpanzees are
The Neanderthal and modern human lineages diverged about 550,000 years ago
So far, we have no evidence of Neanderthal mtDNA lineages in modern humans
Neanderthals were not as genetically diverse as modern humans were at the same period, indicating that Neanderthals had a smaller population size
Neanderthal nuclear DNA shows further evidence of small population sizes, including genetic evidence of incest
As technology improves, researchers are able to detect and analyze older and more fragmentary samples of DNA
Another Lost Relative: Identifying the Denisovans
Who Were the Denisovans?
Scientists have also found DNA from another extinct hominin population: the Denisovans. The first remains of the species found were a single fragment of a phalanx (finger bone) and two teeth, all of which date back to about 40,000 years ago (Reich et al., 2010). Since then, a Denisovan mandible, or lower jaw, has been found in Tibet (Chen et al., 2019) and a Denisovan molar has been found in Laos (Demeter et al., 2022). Other fossil hominins, such as the Homo longi remains from northern China (Ji et al., 2021) and the Dali cranium from northwestern China may belong to the Denisovans, but without comparable fossils and genetics it is difficult to know for sure.
This species is the first fossil hominin identified as a new species based on its DNA alone. Denisovans are close relatives of both modern humans and Neanderthals, and likely diverged from these lineages around 300,000 to 400,000 years ago; they are more closely related to Neanderthals than to modern humans. You might be wondering: If we have the DNA of Denisovans, why can’t we compare them to modern humans like we do Neanderthals? Why isn’t this article about them too? The answer is simply that we don’t have enough DNA and fossils to make a comparison. The single-digit specimen pool of Denisovans found to date is statistically far too small a data set to derive any meaningful comparisons. Until we find more Denisovan material, we cannot begin to understand their full genome in the way that we can study Neanderthals. The lack of more (and more morphologically diagnostic) Denisovan fossils is the reason why scientists have not yet given them a species name.
Fast Facts:
Denisovans are known form only a few isolated fossils, but are the first hominins identified as a new species on a genetic basis
There are not enough Denisovan fossils and genomes to have as clear a picture of their species as we do Neanderthals
Evidence for Interbreeding
Shared DNA: What Does It Mean?
Homo sapiens and Homo neanderthalensis are different species, yet you are reading this webpage about them potentially interbreeding with each other. So, what does that mean, exactly? Modern humans and Neanderthals lived in separate regions evolving along separate evolutionary lineages for hundreds of thousands of years. Even so, Neanderthals are still our closest currently known relative. Because of that evolutionary proximity, despite being recognized as different species, it is still possible that members of our two species exchanged genetic information. This exchange of DNA is called introgression, or interbreeding.
When looking for evidence of interbreeding, scientists do not search billions and billions of base pairs. Instead, there are specific regions of the genome that are known to be highly variable in modern humans, along with several million single nucleotide polymorphisms (SNPs), where the given base at a single location can vary among people. The difference between the total genome and these specific regions and sites can lead to some confusion. In terms of the total genome, humans and chimpanzees are 98-99% similar, yet it is possible for individuals to have up to 4% Neanderthal DNA. The apparent contradiction is resolved by noting that the 4% refers to the highly variable portion of the genome inherited from a Neanderthal source, not 4% of the entire genome. If one were to look at the modern human genome as a whole, at least 98-99% is the same, inherited from our common ancestor with Neanderthals.
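To make the "percent Neanderthal" figure concrete, the sketch below computes the fraction of ancestry-informative sites assigned to a Neanderthal source for one hypothetical individual. The genotype calls and the `archaic_fraction` helper are invented for illustration, not the method of any particular study.

```python
# Sketch: the 1-4% figures refer to the fraction of the variable,
# ancestry-informative part of the genome attributed to a Neanderthal
# source, not 1-4% of all base pairs. Calls below are made up.

def archaic_fraction(genotype_calls):
    """Fraction of ancestry-informative sites called as Neanderthal-derived."""
    archaic = sum(1 for call in genotype_calls if call == "neanderthal_derived")
    return archaic / len(genotype_calls)

# Hypothetical calls at 20 ancestry-informative SNP sites for one person.
calls = ["modern_derived"] * 19 + ["neanderthal_derived"]
print(f"{archaic_fraction(calls):.0%} of informative sites")  # 5% here, by construction
```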
Neanderthal-Homo sapiens interbreeding
Neanderthals are known to contribute up to 1-4% of the genomes of non-African modern humans, depending on what region of the world your ancestors come from, and modern humans who lived about 40,000 years ago have been found to have up to 6-9% Neanderthal DNA (Fu et al., 2015). Because Neanderthals likely evolved outside of Africa (no Neanderthal fossils have been found in Africa to date), it was thought that there would be no trace of Neanderthal DNA in African modern humans. However, a study in 2020 demonstrated that there is Neanderthal DNA in all African Homo sapiens (Chen et al., 2020). This is a good indicator of how human migration out of Africa worked: Homo sapiens did not simply leave Africa in one or more one-way dispersals; there was gene flow back and forth over time, which brought Neanderthal DNA into Africa.
The evidence we have of Neanderthal-modern human interbreeding sheds light on the expansion of modern humans out of Africa. These new discoveries refute many previous hypotheses in which anatomically modern humans replaced archaic hominins, like Neanderthals, without any interbreeding. However, even with some interbreeding between modern humans and now-extinct hominins, most of our genome still derives from Africa.
For many years, the only evidence of human-Neanderthal hybridization existed within modern human genes. However, in 2016 researchers published a new set of Neanderthal DNA sequences from Altai Cave in Siberia, as well as from Spain and Croatia, that show evidence of human-Neanderthal interbreeding as far back as 100,000 years ago -- farther back than many previous estimates of humans' migration out of Africa (Kuhlwilm et al., 2016). Their findings are the first to show human gene flow into the Neanderthal genome, as opposed to Neanderthal DNA flowing into the human genome. These data tell us not only that human-Neanderthal interbreeding events were more frequent than previously thought, but also that an early migration of humans did in fact leave Africa before the population that survived and gave rise to all contemporary non-African modern humans.
We previously mentioned the lack of genetic contributions by Neanderthals into the modern human mtDNA gene pool. As we have shown that Neanderthal-human interbreeding did occur, why wouldn’t we find their DNA in our mtDNA as well as our nuclear DNA? There are several potential explanations for this. It is possible that there were at one point modern humans who possessed the Neanderthal mtDNA, but that their lineages died out. It is also highly possible that Neanderthals did not contribute to the mtDNA genome by virtue of the nature of human-Neanderthal admixture. While we know that humans and Neanderthals bred, we have no way of knowing what the possible social or cultural contexts for such breeding would have been.
Because mtDNA is passed down exclusively from mother to offspring, if Neanderthal males were the only ones contributing to the human genome, their contributions would not be present in the mtDNA line. It is also possible that while interbreeding between Neanderthal males and human females could have produced fertile offspring, interbreeding between Neanderthal females and modern human males might not have produced fertile offspring, which would mean that the Neanderthal mtDNA could not be passed down. Finally, it is possible that modern humans do carry at least one mtDNA lineage that Neanderthals contributed to our genome, but that we have not yet sequenced that lineage in either modern humans or in Neanderthals. Any of these explanations could underlie the lack of Neanderthal mtDNA in modern human populations.
Human-Denisovan and Neanderthal-Denisovan Interbreeding
Given that scientists have DNA evidence of another hominin species, the Denisovans, is there any evidence for interbreeding among all three species? Yes! Comparison of the Denisovan genome to various modern human populations shows up to a 4-6% contribution from Denisovans in non-African modern human populations. This concentration is highest in people from Papua New Guinea and Oceania. It makes sense that interbreeding would appear in these Southeast Asian and Pacific Island communities, as their ancestors migrated from mainland Asia, where Denisovan fossils have been found. There is also substantial evidence for Denisovan-Neanderthal interbreeding, including one juvenile female who appears to be a first-generation hybrid of a Neanderthal female parent and a Denisovan male parent (Slon et al., 2018). Finding more Denisovan fossils will hopefully mean developing a more complete picture of Denisovan genetics so that scientists can explore these interactions in more detail.
Fast Facts:
Neanderthals have contributed 1-4% of the DNA of humans of Eurasian descent
Neanderthals have also indirectly contributed to the genome of modern humans of African descent via ancient modern human migrations back into Africa
There are multiple possible explanations as to why there is not Neanderthal mtDNA in the modern human gene pool
Denisovans also contributed up to 4-6% of the genome of modern humans in certain regions and interbred with Neanderthals as well
DNA Genotypes and Phenotypes: What Do Neanderthal Genes Do?
Genes and Evolution
While much of the genetic diversity discussed above came from inactive, noncoding, or otherwise evolutionarily neutral segments of the genome, there are many sites that show clear evidence of selective pressure on the variations between modern humans and Neanderthals. Researchers found 78 loci at which Neanderthals had an ancestral state and modern humans had a newer, derived state (Green et al. 2010). Five of these genes had more than one sequence change that affected the protein structure. These proteins include SPAG17, which is involved in the movement of sperm, PCD16, which may be involved in wound healing, TTF1, which is involved in ribosomal gene transcription, and RPTN, which is found in the skin, hair and sweat glands. Other changes may not alter the sequence of the gene itself, but alter the factors that control that gene's replication in the cell, changing its expression secondarily.
This tells us that these traits were selected for in the evolution of modern humans and were possibly selected against in Neanderthals. Though some of the genomic areas that may have been positively selected for in modern humans may have coded for structural or regulatory regions, others may have been associated with energy metabolism, cognitive development, and the morphology of the head and upper body. These are just a few of the areas where we have non-genetic evidence of differentiation between modern humans and Neanderthals.
While the study of DNA reveals aspects of relatedness and lineage, its primary function is, of course, to control the production of proteins that regulate an organism's biology. Each gene may have a variety of genotypes, which are the variants that can occur at the site of a particular gene. Each genotype codes for a respective phenotype, which is the physical expression of that gene. When we study Neanderthal DNA, we can examine the genotypes at loci of known function and can infer what phenotype the Neanderthal's mutations may have expressed in life. Below, explore several examples of Neanderthal genes and the possible phenotypes that they would have displayed.
Red-Headed, Pale-Skinned Neanderthals?
Ancient DNA has been used to reconstruct aspects of Neanderthal appearance. A fragment of the gene for the melanocortin 1 receptor (MC1R) was sequenced using DNA from two Neanderthal specimens from Spain and Italy: El Sidrón 1252 and Monte Lessini (Lalueza-Fox et al. 2007). MC1R is a receptor gene that controls the production of melanin, the protein responsible for pigmentation of the hair and skin. Neanderthals had a mutation in this receptor gene which changed an amino acid, making the resulting protein less efficient and likely creating a phenotype of red hair and pale skin. (The reconstruction below of a male Neanderthal by John Gurche features pale skin, but not red hair.) How do we know what this phenotype would have looked like? Modern humans display similar mutations of MC1R, and people who have two copies of this mutation have red hair and pale skin. However, no modern human has the exact mutation that Neanderthals had, which means that both Neanderthals and humans evolved this phenotype independently of each other.
If modern humans and Neanderthals living in Europe at the same time period both evolved this reduction of pigmentation, it is likely that there was an advantage to this trait. One hypothesis to explain this adaptation’s advantage involves the production of vitamin D. Our bodies primarily synthesize our supply of vitamin D, rather than relying on vitamin D from food sources. Vitamin D is synthesized when the sun’s UV rays penetrate our skin. Darker skin makes it harder for sunlight to penetrate the outermost layers and stimulate the production of vitamin D, and while people living in areas of high sun exposure will still get plenty of vitamin D, people who live far from the equator are not exposed to as much sunlight and need to optimize their exposure to the sun. Therefore, it would be beneficial for populations in colder climates to have paler skin so that they can create enough vitamin D even with less sun exposure.
Neanderthals, Language and FOXP2
The FOXP2 gene is involved in speech and language (Lai et al. 2001). Mutations in the FOXP2 gene sequence in modern humans led to problems with speech, and oral and facial muscle control. The human FOXP2 gene is on a haplotype that was subject to a strong selective sweep. A haplotype is a set of alleles that are inherited together on the same chromosome, and a selective sweep is a reduction or elimination of variation among the nucleotides near a particular DNA mutation. Modern humans and Neanderthals share two changes in FOXP2 compared with the sequence in chimpanzees (Krause et al. 2007). How did this FOXP2 variant come to be found in both Neanderthals and modern humans? One scenario is that it could have been transferred between species via gene flow. Another possibility is that the derived FOXP2 was present in the ancestor of both modern humans and Neanderthals, and that the gene was so heavily favored that it proliferated in both populations. A third scenario, which the authors think is most likely, is that the changes and selective sweep occurred before the divergence between the populations. While it can be tempting to infer that the presence of the same haplotype in Neanderthals and humans means that Neanderthals had similar complex language capabilities, there is not yet enough evidence for such a conclusion. Neanderthals may also have their own unique derived characteristics in the FOXP2 gene that were not tested for in this study. Genes are just one factor of many in the development of language.
ABO Blood Types and Neanderthals
The gene that produces the ABO blood system is polymorphic in humans, meaning that there are more than two possible expressions of this gene. The genes for both A and B blood types are dominant, and O type is recessive, meaning that people who are type A or B can have genotypes of either AA or AO (or BB and BO) and still be A (or B) blood type, but to have type O blood one must have a genotype of OO. Various selection factors may favor different alleles, leading to the maintenance of distinct blood groups in modern human populations. Though chimpanzees also have different blood groups, they are not the same as human blood types. While the mutation that causes the human B blood group arose around 3.5 Ma, the O group mutation dates to around 1.15 Ma. When scientists tested whether Neanderthals had the O blood group they found that two Neanderthal specimens from Spain probably had the O blood type, though there is the possibility that they were OA or OB (Lalueza-Fox et al. 2008). Though the O allele was likely to have already appeared before the split between humans and Neanderthals, it could also have arisen in the Neanderthal genome via gene flow from modern humans.
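As a rough illustration of the dominance rules just described, the sketch below maps a two-allele ABO genotype to its blood type. It is only a toy lookup of the textbook relationships (the function name abo_phenotype and the genotype strings are invented for this example), not a way of determining blood type from real sequence data.

def abo_phenotype(genotype):
    # A and B are codominant with each other and dominant over O.
    alleles = set(genotype.upper())
    if alleles == {"A", "B"}:
        return "AB"
    if "A" in alleles:
        return "A"   # covers AA and AO
    if "B" in alleles:
        return "B"   # covers BB and BO
    return "O"       # only OO remains

for g in ["AA", "AO", "BB", "BO", "AB", "OO"]:
    print(g, "->", abo_phenotype(g))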
Bitter Taste Perception and Neanderthals
The ability to taste bitter substances is controlled by a gene, TAS2R38. Some individuals are able to taste bitter substances, while others have a different version of the gene that does not allow them to taste bitter foods. Possession of two copies of the positive tasting allele gives the individual greater perception of bitter tastes than the heterozygous state in which individuals have one tasting allele and one non-tasting allele. Two copies of a non-tasting allele leads to inability to taste bitter substances.
When scientists sequenced the DNA of a Neanderthal from El Sidrón, Spain for the TAS2R38 gene, they found that this individual was heterozygous and thus was able to perceive bitter taste, although not as strongly as a homozygous individual with two copies of the tasting allele (Lalueza-Fox et al. 2009). Both of these haplotypes are still present in modern people, and since the Neanderthal sequenced was heterozygous, the two alleles (tasting and non-tasting) were probably both present in the common ancestor of Neanderthals and modern humans. Though chimpanzees also vary in their ability to taste bitterness, their abilities are controlled by different alleles than those found in humans, indicating that non-tasting alleles evolved separately in the hominin lineage.
Microcephalin and Archaic Hominins
The microcephalin gene relates to brain size during development. A mutation in the microcephalin gene, MCPH1, is a common cause of microcephaly. Mutations in microcephalin cause the brain to be 3 to 4 times smaller in size. A variant of MCPH1, haplogroup D, may have been positively selected for in modern humans – and may also have come from an interbreeding event with an archaic population (Evans et al. 2006). All of the haplogroup D variants come from a single copy that appeared in modern humans around 37,000 years ago. However, haplogroup D itself came from a lineage that had diverged from the lineage that led to modern humans around 1.1 million years ago. Although there was speculation that the Neanderthals were the source of the microcephalin haplogroup D (Evans et al. 2006), Neanderthal DNA sequenced does not contain the microcephalin haplogroup D (Green et al. 2010).
Enamel Formation and Dental Morphology
While changes to the genome can directly affect the phenotypes displayed in an organism, altering the timing mechanism of protein production can cause very similar effects. MicroRNA (miRNA) is one such mechanism: a cell uses miRNA to suppress the expression of a gene until that gene becomes necessary. One miRNA can target multiple genes by binding its seed region to messenger RNA that would otherwise have carried that information to the ribosome to be translated into protein, preventing translation from taking place. In hominins, one particular miRNA called miR-1304 is exhibited in both an ancestral and a derived condition. The derived condition has a mutation at the seed region which allows it to target more mRNA segments, but less effectively. This means that in the derived state, some genes will be more strongly expressed due to a lack of suppression. One such trait is the production of the enamelin and amelotin proteins, both used in dental formation during development. The suppression of their production in Neanderthals, and the subsequent lack of suppression in modern humans, could be a contributing factor to some of the morphological differences between Neanderthal and modern human dentition.
Immune Response
Research shows that Neanderthal DNA has contributed to our immune systems today. A study of the human genome found a surprising incursion of Neanderthal DNA into the modern human genome, specifically within the region that codes for our immune response to pathogens (Dannemann et al. 2016). These particular Neanderthal genes would have been useful for the modern humans arriving in Europe, whose immune systems had never encountered the pathogens within Europe and would have been vulnerable to them, unlike the Neanderthals, who had built up generations of resistance against these diseases. When humans and Neanderthals interbred, they passed this genetic resistance to diseases on to their offspring, allowing them a better chance at survival than those without this additional resistance to disease. The evidence of this genetic resistance shows that there have been at least three incursions of nonhuman DNA into the genes for immune response, two coming from Neanderthals and one from our poorly understood evolutionary cousins, the Denisovans.
Clotting, Depression, and Allergies
While many of the genes that we retain for generations are either beneficial or neutral, there are some that have become deleterious in our modern lives. There are several genes that our Neanderthal relatives contributed to our genome that were once beneficial in the past but can now cause health-related problems (Simonti et al. 2016). One of these genes allows our blood to coagulate (or clot) quickly, a useful adaptation in creatures who were often injured while hunting. However, in modern people who live longer lives, this same trait of quick-clotting blood can cause harmful blood clots to form in the body later in life. Researchers found another gene that can cause depression and other neurological disorders and that is triggered by disturbances in circadian rhythms. Since it is unlikely that Neanderthals experienced such disturbances to their natural sleep cycles, they may never have expressed this gene, but in modern humans, who can control their climate and whose lifestyles often disrupt their circadian rhythms, this gene is expressed more frequently.
|
So, what does that mean, exactly? Modern humans and Neanderthals lived in separate regions evolving along separate evolutionary lineages for hundreds of thousands of years. Even so, Neanderthals are still our closest currently known relative. Because of that evolutionary proximity, despite being recognized as different species, it is still possible that members of our two species exchanged genetic information. This exchange of DNA is called introgression, or interbreeding.
When looking for evidence of interbreeding, scientists do not search billions and billions of base pairs. Instead, there are specific regions of the genome that are known to be highly variable in modern humans, along with several million single nucleotide polymorphisms (SNPs), sites where the base at a single location can vary among people. The difference between the total genome and these specific regions and sites can lead to some confusion. In terms of the total genome, humans and chimpanzees are 98-99% similar. Yet it is possible for individuals to have up to 4% Neanderthal DNA. The apparent contradiction is resolved by noting that up to 4% of the highly variable portion of the genome is inherited from a Neanderthal source, not 4% of the entire genome. If one were to look at the modern human genome as a whole, at least 98-99% is the same, inherited from our common ancestor with Neanderthals.
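A back-of-the-envelope calculation helps keep the two percentages apart. The numbers below are round, hypothetical figures chosen only to illustrate the distinction drawn in the paragraph above; they are not measured values from any study.

# Whole genome versus the variable sites that ancestry estimates are based on.
genome_length = 3_000_000_000           # roughly 3 billion base pairs
variable_sites_examined = 1_000_000     # hypothetical panel of variable sites (SNPs)
sites_with_neanderthal_allele = 40_000  # hypothetical sites carrying a Neanderthal-derived allele

ancestry_at_variable_sites = sites_with_neanderthal_allele / variable_sites_examined
share_of_whole_genome = sites_with_neanderthal_allele / genome_length

print(f"ancestry measured at the variable sites: {ancestry_at_variable_sites:.0%}")
print(f"those same sites as a share of the whole genome: {share_of_whole_genome:.4%}")

The first figure is the familiar "up to 4%" statement; the second shows how small a slice of the whole genome those informative sites represent, which is why the 4% and the 98-99% figures do not contradict each other.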
Neanderthal-Homo sapiens interbreeding
Neanderthals are known to contribute up to 1-4% of the genomes of non-African modern humans, depending on what region of the world your ancestors come from, and modern humans who lived about 40,000 years ago have been found to have up to 6-9% Neanderthal DNA (Fu et al., 2015). Because Neanderthals likely evolved outside of Africa (no Neanderthal fossils have been found in Africa to date), it was thought that there would be no trace of Neanderthal DNA in African modern humans. However, a study in 2020 demonstrated that there is Neanderthal DNA in all African Homo sapiens (Chen et al., 2020).
|
yes
|
Anthropology
|
Did Neanderthals interbreed with modern humans?
|
yes_statement
|
"neanderthals" interbred with "modern" "humans".. "modern" "humans" interbred with "neanderthals".
|
https://www.nationalgeographic.com/culture/article/100506-science-neanderthals-humans-mated-interbred-dna-gene
|
Neanderthals, Humans Interbred—First Solid DNA Evidence
|
In addition, all modern ethnic groups, other than Africans, carry traces of Neanderthal DNA in their genomes, the study says—which at first puzzled the scientists. Though no fossil evidence has been found for Neanderthals and modern humans coexisting in Africa, Neanderthals, like modern humans, are thought to have arisen on the continent.
"If you told an archaeologist that you'd found evidence of gene exchange between Neanderthals and modern humans and asked them to guess which [living] population it was found in, most would say Europeans, because there's well documented archaeological evidence that they lived side by side for several thousand years," said study team member David Reich.
So how did modern humans with Neanderthal DNA end up in Asia and Melanesia?
Neanderthals, the study team says, probably mixed with early Homo sapiens just after they'd left Africa but before Homo sapiens split into different ethnic groups and scattered around the globe.
The first opportunity for interbreeding probably occurred about 60,000 years ago in Middle Eastern regions adjacent to Africa, where archaeological evidence shows the two species overlapped for a time, the team says.
And it wouldn't have taken much mating to make an impact, according to study co-author Reich. The results could stem from a Neanderthal-modern human one-night stand or from thousands of interspecies assignations, he said.
Genetic anthropologist Jeffrey Long, who calls the Science study "very exciting," co-authored a new, not yet published study that found DNA evidence of interbreeding between early modern humans and an "archaic human" species, though it's not clear which. He presented his team's findings at a meeting of the American Association of Physical Anthropologists in Albuquerque, New Mexico, last month.
Long's team reached its conclusions after searching the genomes of hundreds of modern humans for "signatures of different evolutionary processes in DNA variation."
Like the new Science paper, Long's study speculates that interbreeding occurred just after our species had left Africa, but Long's study didn't include analysis of the Neanderthal genome.
"At the time we started the project, I never imagined I'd ever see an empirical confirmation of it," said Long, referring to the Science team's Neanderthal-DNA evidence, "so I'm pretty happy to see it."
|
In addition, all modern ethnic groups, other than Africans, carry traces of Neanderthal DNA in their genomes, the study says—which at first puzzled the scientists. Though no fossil evidence has been found for Neanderthals and modern humans coexisting in Africa, Neanderthals, like modern humans, are thought to have arisen on the continent.
"If you told an archaeologist that you'd found evidence of gene exchange between Neanderthals and modern humans and asked them to guess which [living] population it was found in, most would say Europeans, because there's well documented archaeological evidence that they lived side by side for several thousand years," said study team member David Reich.
So how did modern humans with Neanderthal DNA end up in Asia and Melanesia?
Neanderthals, the study team says, probably mixed with early Homo sapiens just after they'd left Africa but before Homo sapiens split into different ethnic groups and scattered around the globe.
The first opportunity for interbreeding probably occurred about 60,000 years ago in Middle Eastern regions adjacent to Africa, where archaeological evidence shows the two species overlapped for a time, the team says.
And it wouldn't have taken much mating to make an impact, according to study co-author Reich. The results could stem from a Neanderthal-modern human one-night stand or from thousands of interspecies assignations, he said.
Genetic anthropologist Jeffrey Long, who calls the Science study "very exciting," co-authored a new, not yet published study that found DNA evidence of interbreeding between early modern humans and an "archaic human" species, though it's not clear which. He presented his team's findings at a meeting of the American Association of Physical Anthropologists in Albuquerque, New Mexico, last month.
Long's team reached its conclusions after searching the genomes of hundreds of modern humans for "signatures of different evolutionary processes in DNA variation."
Like the new Science paper, Long's study speculates that interbreeding occurred just after our species had left Africa, but Long's study didn't include analysis of the Neanderthal genome.
|
yes
|
Anthropology
|
Did Neanderthals interbreed with modern humans?
|
no_statement
|
"neanderthals" did not "interbreed" with "modern" "humans".. "modern" "humans" did not "interbreed" with "neanderthals".
|
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC532389/
|
Modern Humans Did Not Admix with Neanderthals during Their ...
|
|
Associated Data
Figure S1: Proportion of Neanderthal Lineages in the European Population as a Function of the Average Number of Admixture Events per Deme between HN and HS. These values are given for the nine scenarios (A–I) listed in Table 1, and for a new scenario A+Neol. This latter scenario is similar to A, except that the carrying capacity of the modern humans is increased by a factor 250 at the time of the Neolithic transition (320 generations BP). The influence of this demographic increase on the simulated HN proportion is very weak, as shown on this figure.
Figure S2: Evolution of the densities of demes HN (in black) and HS (in gray) within a cell simulated under demographic scenario A for γij = 0.4. The cell is colonized by HS at time −1520 (0 = present). The thin black line with white circles represents the distribution of admixture events, whose numbers are reported on the right axis.
Abstract
The process by which the Neanderthals were replaced by modern humans between 42,000 and 30,000 years before present is still intriguing. Although no Neanderthal mitochondrial DNA (mtDNA) lineage is found to date among several thousands of Europeans and in seven early modern Europeans, interbreeding rates as high as 25% could not be excluded between the two subspecies. In this study, we introduce a realistic model of the range expansion of early modern humans into Europe, and of their competition and potential admixture with local Neanderthals. Under this scenario, which explicitly models the dynamics of Neanderthals' replacement, we estimate that maximum interbreeding rates between the two populations should have been smaller than 0.1%. We indeed show that the absence of Neanderthal mtDNA sequences in Europe is compatible with at most 120 admixture events between the two populations despite a likely cohabitation time of more than 12,000 y. This extremely low number strongly suggests an almost complete sterility between Neanderthal females and modern human males, implying that the two populations were probably distinct biological species.
A model of human expansion into Europe reveals almost complete sterility between Neanderthal females and modern human males, implying that the two populations were probably distinct biological species
Introduction
The “Neanderthals” or Homo sapiens neanderthalensis (HN) constitute a group of hominids, whose particular morphology developed in Europe during the last 350,000 y under the effect of selection and genetic drift, reaching its final form approximately 130,000 y ago (Klein 2003). This subgroup of hominids populated Europe and western Asia until the arrival of the first modern humans, Homo sapiens sapiens (HS), approximately 45,000 y ago (Mellars 1992). This arrival coincided with the beginning of Neanderthal decline, a process that occurred in less than 15,000 y and that is still not fully understood (Stringer and Davies 2001). An important question which remains to be assessed is whether Neanderthals could hybridize with modern humans and if they left some traces in the current modern human gene pool. While this hypothesis is excluded under the Recent African Origin Model (RAO), which postulates a complete replacement of former members of the genus by H. sapiens, it is central to the tenets of the multiregional hypothesis (Eckhardt et al. 1993; Wolpoff et al. 2000), which assumes a gradual transition from H. erectus to modern humans on different continents. From a paleontological and archaeological point of view the debate is still open, even if the supporters of the RAO (Stringer and Davies 2001; Rak et al. 2002; Schmitz et al. 2002) are gaining momentum over those supporting European regional continuity (Duarte et al. 1999; but see also Tattersall and Schwartz 1999). Recent morphological studies support a clear distinction between Neanderthals and modern humans (Harvati 2003; Ramirez Rozzi and Bermudez De Castro 2004), and genetic evidence, such as the clear divergence and monophyly of the HN mitochondrial DNA (mtDNA) control region (Krings et al. 1997, 1999; Ovchinnikov et al. 2000), suggested a long separation of the HN and HS female lineages (Krings et al. 2000; Scholz et al. 2000; Schmitz et al. 2002; Caramelli et al. 2003), with a divergence time estimated to lie between 300,000 and 750,000 y ago (Krings et al. 1997, 1999). The complete absence of Neanderthal mtDNA sequences in the current European gene pool, attested from the study of more than 4,000 recorded sequences (Richards et al. 1996; Handt et al. 1998) supported the absence of Neanderthal mtDNA leakage in the modern gene pool, but it was argued that even if some HN genes could have passed in the ancient Cro-Magnon gene pool, they could have been lost through genetic drift (Relethford 2001; Hagelberg 2003). Recently, several attempts were made at circumventing the drift problem by the direct sequencing of modern human fossils contemporary with the last Neanderthals. Cro-Magnon sequences were found very similar to those of current Europeans (Caramelli et al. 2003), even though contamination from modern DNA could not be completely excluded (Serre et al. 2004). All studies nevertheless agreed in showing the absence of Neanderthal sequence motifs among early modern human fossil DNA (Caramelli et al. 2003; Serre et al. 2004), but only Neanderthal contributions larger than 25% to the modern gene pool could be statistically excluded under a simple model (Figure 1A and 1B) of instantaneous mixing of Neanderthals and modern humans (Nordborg 1998; Serre et al. 2004). Thus, the problem of the genetic relationships between Neanderthals and modern humans remains fully open.
Different Models of the Interactions between Neanderthals and Modern Humans
(A) Model of instantaneous mixing of unsubdivided Neanderthal and modern human populations.
(B) Same as (A), but with an exponential growth of the modern human population having started before the admixture with Neanderthals.
(C) Model of a progressive range expansion of modern humans into Europe. This model is spatially explicit, and the modern human population occupies a different range than the Neanderthal population before the admixture. Under this model, admixture is progressive and occurs because modern humans move into the territory of Neanderthals, a territory that shrinks with the advance of modern humans.
In order to further investigate this issue, we have developed a more realistic modeling of the admixture process between Neanderthals and early modern humans. In brief, the differences with previous approaches are the following (see Figure 1 and the Materials and Methods section for further details): (1) Europe is assumed to be subdivided into small territories potentially harboring two subpopulations (demes): an HN and an HS deme; (2) Europe is settled progressively by modern humans, resulting in a range expansion from the Near East. This range expansion implies also a demographic expansion of early modern Europeans, which stops when Europe is fully settled; (3) local population size is logistically regulated for both Neanderthals and modern humans; (4) we assume there is competition between modern humans and Neanderthals, resulting in the progressive replacement of Neanderthals by modern humans due to their higher carrying capacity caused by a better exploitation of local resources (Klein 2003); (5) Consequently, admixture between the two populations is also progressive and occurs in subdivisions occupied by both populations, in a narrow strip at the front of the spatially expanding modern human population (Figure 2); (6) coalescent simulations are used to estimate the likelihood of different rates of local admixture between modern humans and Neanderthals, given that Neanderthal mtDNA sequences are not observed in current Europeans.
Simulations begin 1,600 generations ago, with the area of Europe already colonized by Neanderthals shown in light gray, and an origin of modern human expansion indicated by a black arrow (lane A). Lanes (B–F) show the progression of the wave of advance of modern humans (dark gray) into Europe at different times before present. The black band at the front of the expansion wave represents the restricted zone of cohabitation between modern humans and Neanderthals.
The additional realism of this model makes it also more complex, and the range expansion and admixture processes will depend on several parameters, like the carrying capacities of the local populations, their intrinsic growth rate, the amount of gene flow between adjacent demes, the local rate of admixture between populations, or the geographical origin of the range expansion. Since it is difficult to explore this complex parameter space, we used archeological and paleodemographic information to calibrate the values of these parameters. For instance, the estimated duration of the replacement process (about 12,500 y, Bocquet-Appel and Demars 2000a) was used to adjust the speed of the expansion of modern humans and, thus, provided strong constraints on local growth and emigration rates. Based on available information, we thus defined a set of plausible parameter values considered as a basic scenario (scenario A). Local admixture rate, which is the parameter of interest here, was then varied, and its effect on the estimated contribution of Neanderthals to the current modern human gene pool was recorded. The sensitivity of admixture estimates to alternative parameterization of our model was studied in eight alternative scenarios (scenarios B to I), by varying each time the values of a few parameters.
Results
Expected Neanderthal Contribution to the Current European Gene Pool as a Function of Admixture Rates
The description of the nine envisioned scenarios for the colonization of Europe by modern humans is reported in Table 1. For each of these scenarios, the admixture rate, which is the parameter of interest in this study, was allowed to vary and only marginally influenced the cohabitation period and the replacement time of HN by HS (Table 1). Note that the cohabitation period at any given place (shown as a narrow black band on Figure 2) is limited to 6–37 generations, depending on the scenario.
Table 1
Expected Proportion of Neanderthal Lineages in the Present Modern Human Gene Pool under Different Demographic Scenarios
The expected contribution of Neanderthal lineages in the current gene pool of modern humans (over all the simulated demes) was obtained from 10,000 simulations. Standard deviations are shown in italic. Demographic scenarios: (A) The basic scenario with realistic parameters; (B) identical to (A), with an origin in Iran at the extreme east of the simulated area; (C) identical to (A), but with a diffused source area consisting of 25 demes at KHS = 40, instead of only one deme; (D) identical to (A), but HS occupied all the south of the Neanderthal range (North Africa and North of the Arabian Peninsula) before the onset of the expansion, which corresponds to an HS initial population size of 14,000 breeding females; (E) identical to (A), with rHS = 0.8, and KHN = 25; (F) identical to (A), with mHS = 0.5 and KHN = 25 as in (D); (G) identical to (A) with a faster colonization time, due to larger growth and migration rates (mHS = 0.35 and rHS = 0.6); (H) identical to (A), with interbreeding resulting in symmetrical transfer of genes between modern humans and Neanderthals; (I) identical to (A), but with carrying capacity KHS being reached instantaneously and a local recruitment of γKHS Neanderthal lineages. In this latter scenario, there is thus a single event of admixture at demographic equilibrium and no logistic growth
(c) The different rates of admixture are given in number of admixture events per deme. For instance, a value of 1/10 implies an average of one admixture event for ten demes for the whole period of cohabitation between Neanderthals and modern humans.
The expected proportion of Neanderthal genes in the gene pool of modern humans was estimated by coalescent simulations and is reported in Table 1 for different rates of admixture between Neanderthals and modern humans. At odds with previous estimates (Nordborg 1998; Gutierrez et al. 2002; Serre et al. 2004), our simulations show that even for very few admixture events, the contribution of the Neanderthal lineages in the current gene pool should be very large (see also Figure S1). For instance, in scenario A, with a 4-fold advantage in exploitation of local resources by modern humans, a single fertile admixture event in one deme out of ten over the whole period of coexistence between HN and HS should lead to the observation of 38% of HN genes in the present mtDNA HS gene pool (scenario A in Table 1). This proportion would be lower but still amount to 15% if the advantage of modern humans was reduced to 1.6 times over Neanderthals with the same admixture rate (scenario F in Table 1). With higher but still relatively low levels of admixture, a majority of Neanderthal genes should be expected in the current European gene pool (Table 1). For instance, with as much as two admixture events per cell over the total coexistence period of Neanderthals and modern humans, more than 95% of the current HS gene pool should be tracing back to Neanderthals, for all scenarios with logistic demographic regulation described in Table 1 (scenarios A to H). As shown on Figure 3, the proportion of current lineages that can be traced to Neanderthals is, however, not uniformly distributed over Europe in scenarios of moderate or low interbreeding. A gradient should be visible from the source of the range expansion (which shows the largest proportion of modern human genes) toward the margins of the expansion (the British Isles and the Iberian Peninsula), which should then be expected to harbor a larger proportion of Neanderthal genes than the rest of Europe (Figure 3). However, this gradient would be relatively weak, and the expected proportion of HN lineages at any position is primarily affected by the degree of admixture between the two populations.
Expected Proportion of Neanderthal Lineages (in Black) among European Samples under Demographic Scenario A (Table 1) at Different Geographic Locations, for Different Interbreeding Rates
(A) One admixture event on average per 50 demes over the whole period of cohabitation between Neanderthals and modern humans; (B) one admixture event per five demes; (C) one admixture event per two demes; (D) one admixture event per deme.
The finding that even minute amounts of interbreeding between Neanderthals and modern humans should lead to a massive introgression of Neanderthals' mtDNAs into the Cro-Magnon gene pool is somehow counterintuitive and deserves further explanations. The successful introgression of Neanderthal mtDNAs is due to a massive dilution of the modern human mtDNA gene pool into that of the pre-existing population (Chikhi et al. 2002) and to a low probability of being lost by drift at the time of introgression (see below). The dilution process can be seen as follows: An HN gene entering the HS gene pool at an early stage of the colonization process will lower the frequency of HS genes in the HS deme; the migrants sent from this deme to colonize an adjacent new territory can themselves harbor HN genes, so that a further HS deme can be founded by a mixture of HS and HN genes; additional admixture events will further lower the proportion of HS genes in HS demes. The repetition of these admixture and migration steps will thus rapidly dilute HS genes. Under this process, the European HS population can be fully introgressed by HN genes under scenarios A to H, if two or more admixture events occurred in each deme (see Table 1, last two columns). For such large rates of admixture, the fraction of HS genes in demes adjacent to the source of HS expansion is already diluted by more than 28% with HN genes (results not shown). Therefore, in the absence of counteracting selective forces, the dilution process repeated over several demes would hinder the spread of HS genes away from the source of the colonization. The range expansion would thus be mainly carried out by individuals having HN genes, explaining why the HS European population would appear fully introgressed by HN genes. The success of introgressing HN genes is also due to their integration into the HS deme while it is in a period of demographic (logistic) growth (see Figure S2), so that these introgressing genes are unlikely to be lost by genetic drift, and will, rather, be amplified by the logistic growth process occurring in the HS deme. In order to assess the importance of the period of logistic growth relative to the dilution process, we have modeled a range expansion process where a newly founded deme reaches instantaneously its carrying capacity, and where a given proportion of genes is recruited from the local Neanderthal gene pool. The results of those simulations (reported in Table 1 as scenario I) show that without logistic growth much larger interbreeding rates would be necessary to have the same impact on current human diversity. Indeed, the occurrence of two admixture events per deme over the whole cohabitation period would only lead to 5% of the current gene pool being of Neanderthal ancestry, instead of 100% when logistic growth is implemented.
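The dilution argument can be caricatured in a few lines of Python. This is emphatically not the authors' spatially explicit simulator: it is a one-dimensional chain of demes with an arbitrary, constant Neanderthal input per newly founded deme (the function name and the 2% figure are invented for illustration), meant only to show how small local contributions compound along the expansion front.

def hs_fraction_along_front(n_demes=40, hn_input_per_deme=0.02):
    # Each new deme is founded from the previous one and then receives a
    # small fraction of local Neanderthal (HN) genes before founding the next.
    hs_fraction = 1.0
    trajectory = []
    for _ in range(n_demes):
        hs_fraction *= (1.0 - hn_input_per_deme)
        trajectory.append(hs_fraction)
    return trajectory

traj = hs_fraction_along_front()
print(f"modern-human-origin fraction after 10 demes: {traj[9]:.2f}")
print(f"modern-human-origin fraction after 40 demes: {traj[-1]:.2f}")

Even a 2% local input per founding step leaves barely half of the gene pool at the front tracing back to the source population after forty steps; the full model adds local logistic growth, which further protects the introgressed genes from being lost by drift.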
Estimation of Admixture Rates between Neanderthals and Modern Humans
The present results show that if Neanderthals could freely breed with the modern humans who progressively invaded their territory, their contribution to our gene pool would be immense. Since no Neanderthal mtDNA sequence has been observed so far among present Europeans, it is of interest to estimate the maximum admixture rate between Neanderthals and modern humans that would be compatible with an absence of Neanderthal genes, accounting for the current sampling effort and genetic drift over the last 30,000 y. This assessment was done by coalescent simulations. The likelihoods of different admixture rates are reported in Figure 4 for each scenario. Maximum-likelihood estimates are obviously obtained for a total absence of interbreeding between HS and HN, but here the interest lies in the upper limit of a 95% confidence interval. We see that the scenarios A to H can be divided into three groups. Scenarios A, C, G, and H lead to very similar upper bounds for the estimation of the maximum admixture rate (approximately 0.015 admixture events per deme; see Table 2). Similarity of results obtained for scenarios A and C shows that the fact that the origin of the spread of modern humans was diffused over a large area or concentrated at a single point does not substantially influence our results. A shorter duration of the colonization of Europe by HS (approximately 8,000 y; scenario G) leads to an estimation very similar to that obtained under scenario A. Also the implementation of fully symmetric interbreeding between HN and HS (scenario H) leads to results almost identical to those obtained when we only allow breeding between HN females and HS males (scenario A). The place of origin for modern humans seems more important, as a putative origin in Iran (scenario B) or in North Africa (scenario D) leads to even lower maximum interbreeding rates (approximately 0.01 admixture events per deme) than if the source is located closer to Europe as in scenario A. Moreover, scenario D also shows that a much larger initial size of the HS population (14,000 breeding females instead of 40 in scenario A) does not reduce the final Neanderthal contribution to the HS gene pool. This is because we model local (at the deme level) and not global contacts between the two populations. Finally, scenarios E and F, corresponding to larger carrying capacities of Neanderthals, would be compatible with a larger amount of admixture between the two species (approximately 0.03 admixture events per deme), which is understandable given the longer cohabitation times under these scenarios (21–37 generations) than under scenarios A–D and G–H (6–12 generations). The estimates of the average number of admixture events per deme can be translated into a maximum number of interbreeding events having occurred over all Europe during the whole replacement process of Neanderthals by modern humans, as reported in Table 2. We find that, depending on the scenario, these maximum estimates range between 34 (scenario B) and 120 (scenario E) admixture events over the whole of Europe, which are extremely low values given the fact that the two populations have certainly coexisted for more than 12,000 y in that region.
Table 2
(b) This figure is computed from the previous column by assuming that there were a total of 140,000 reproducing females in the total modern human population in Europe (see Materials and Methods).
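The arithmetic behind this footnote can be reproduced directly from figures quoted in the text: the Results give maxima of 34 (scenario B) and 120 (scenario E) admixture events over all of Europe, and dividing by the assumed 140,000 reproducing females recovers the 0.02%-0.09% initial Neanderthal input cited in the Discussion.

total_breeding_females = 140_000   # assumed reproducing HS females in Europe (footnote b)
max_events = {"scenario B": 34, "scenario E": 120}  # maxima reported in the Results

for scenario, events in max_events.items():
    initial_input = events / total_breeding_females
    print(f"{scenario}: {events} admixture events -> ~{initial_input:.2%} initial Neanderthal input")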
Discussion
Our simulations show that the mitochondrial evidence in favor of no, or very little, interbreeding between Neanderthals and modern humans is much stronger than previously realized (Wall 2000; Nordborg 2001). We indeed find that the current absence of Neanderthal mtDNA genes is compatible with a maximum admixture rate about 400 times smaller than that previously estimated (Nordborg 1998; Serre et al. 2004). This initial estimate (25%) was, however, based on a simple but unrealistic model of evolution, assuming no population subdivision, constant population size, and a single and instantaneous admixture event between Neanderthals and modern humans. Taking into account the progressive nature of the range expansion of modern humans into Europe, the maximum initial input of Neanderthal genes into the Paleolithic European population can thus be estimated to lie between only 0.02% (scenario B) and 0.09% (scenario E) (Table 2). Our simulations of alternative scenarios of HS range expansion into Europe suggest that our results are not very sensitive to local HS growth rates, level of gene flow between neighboring HS demes, or the geographical origin of HS range expansion. It is also worth emphasizing that the final HN contribution to the European gene pool does not really depend on the size and spread of the population at the source of the range expansion (compare scenario A to C and D in Table 1). This is logical since the colonization process starts from a restricted number of demes at the edge of the pre-existing range in our model of subdivided population (see Figure 1C). If this model is correct, it implies that the current European genes should have coalesced in a small number of individuals present in the demes at the source of the colonization of Europe, or, in other words, that there was a bottleneck having preceded the range expansion into Europe. Available data on European mtDNA diversity indeed support this view, since most European populations do present a signal of Paleolithic demographic expansion from a small population, which could be dated to about 40,000 y ago (Excoffier and Schneider 1999).
Additional complexities of the simulation model could have been envisioned, like the possibility for long-range dispersal, some heterogeneity of the environment leading to different carrying capacities and preferential colonization routes, or uneven migration rates. However, these extra parameters would have been very difficult to calibrate due to the scarcity of paleodemographic data. Moreover, it is likely that they would not have led to qualitatively different results. For instance, since long-range dispersal speeds up the colonization process (Nichols and Hewitt 1994), short range migration rates would need to be reduced, in order to preserve a realistic colonization time. But this reduction would have no effect on local cohabitation time, which is the important factor affecting admixture rates (Table 2). Another source of realism could be the implementation of a recent Neolithic expansion wave on top of a Paleolithic substrate. This additional expansion wave has not been implemented here, as it is clearly beyond the scope of the present study. However, our present results suggest that small amounts of admixture between the Paleolithic and the Neolithic populations would lead to a massive contribution of Paleolithic lineages among the current Europeans. This point is important as it implies that if Neanderthal lineages had been present among the Paleolithic populations, they would not have been erased by the spread of the Neolithic in Europe. If we were using previous estimations of the Neolithic contribution to the current European genetic pool of about 50% (Barbujani and Dupanloup 2002; Chikhi 2002), the effect of a Neolithic expansion would require our estimates of the initial input of HN into the modern pool to be roughly multiplied by two, but still be very small (0.07% for scenario A). Note also that the simulation of a pure acculturation process, which amounts to increasing the carrying capacity of populations after the Neolithic by a factor 250, has virtually no effect on the expected proportion of Neanderthal genes in current Europeans (see Figure S1). Another argument against a major influence of the Neolithic expansion stems from mtDNA studies, since the demographic expansion inferred from mtDNA diversity and dated to about 40,000 y ago (Excoffier and Schneider 1999) implies that most of the mtDNA lineages of current Europeans result from a Paleolithic range expansion (Ray et al. 2003). If the expansion of Neolithic settlers had fully erased Paleolithic mtDNA diversity, one would indeed not expect to see this Paleolithic expansion signal. It thus argues in favor of a minor contribution of Neolithic genes to the current European gene pool, as expected under our model of progressive range expansion with continuous mixing.
Compared to previous models assuming an instantaneous mixing of HN and HS populations (Nordborg 1998; Serre et al. 2004) (see Figure 1A), we find that extremely small Neanderthal contributions should still be visible in the European gene pool. It implies that HN genes have a much larger probability of persisting when entering a progressively invading HS population than when entering a stationary population. This is because HN genes enter the HS population in demes that are still growing in size (see Figure S2), which prevents them from being lost by genetic drift and which amplifies their absolute number in the deme, making it likely they will persist and reach observable frequencies in the global population. This process is actually similar to that occurring in an unsubdivided growing population (e.g., Otto and Whitlock 1997). Actually, if HN genes were to directly enter an unsubdivided HS population that grew exponentially until today (see Figure 1B), the current absence of HN genes would also imply a very small amount of Neanderthal introgression into our gene pool (Nordborg 1998; Serre et al. 2004). However, this continuous and global exponential growth process appears difficult to justify (Serre et al. 2004) and does not really apply to the late Pleistocene human population (Weiss 1984; Biraben 2003).
Under our model, the progressive range expansion (Figure 1C) and the local logistic growth contribute to reduce the probability of losing introgressed HN genes. Without logistic growth, much larger interbreeding rates would be necessary to have the same impact on current human diversity (see scenario I in Table 1 and in Figure 4). Under this scenario, the absence of Neanderthal mtDNA sequences in present Europeans is still compatible with a maximum of about 1,850 fertile breedings between Neanderthal females and Cro-Magnon males, corresponding to a maximum initial input of 1.2% Neanderthal genes into the European Cro-Magnon population (Table 2). This figure being 20 times larger than when assuming an initial logistic growth of newly founded populations, it shows that the local logistic growth and the progressive range expansion contribute equally to reducing the inferred admixture rate compared to the simple model assuming a single admixture event and an instantaneous settlement of Europe by modern humans (see Figure 1A) (Serre et al. 2004). However, because new territories are often colonized by a few migrants and not by whole populations, local logistic growth has been incorporated into most models of range expansion (e.g., Fisher 1937; Skellam 1951; Shigesada and Kawasaki 1997). It should thus be considered as a normal feature of range expansions.
Another important result of this study is to show that an expanding population or species is likely to have its own genome invaded by that of the invaded population if interbreeding is possible and gradual, which could explain some documented cases of mtDNA introgression (e.g., Bernatchez et al. 1995; Shaw 2002). Our results indeed suggest that introgression should occur preferentially in species having gone through a range expansion, and that the introgressing genome would be that of the invaded population and not that of the invasive species. Of course this result should only apply to the part of the genome that is not under selection or that is not linked to the selective advantage of the invaders. If the mitochondrial genome of modern humans was involved in their higher fitness, the absence of observed mtDNA introgression would not necessarily be due to an absence of interbreeding, but would rather result from an active selection process against crosses between Neanderthal females and modern human males, and one would therefore expect to see potential leakage of Neanderthal genes in our nuclear genome. While some evidence for the differential fitness of some mtDNA human genomes in distinct climates has been recently found (Mishmar et al. 2003; Ruiz-Pesini et al. 2004), it is unlikely that such differences were involved in the selective advantage of modern humans over Neanderthals. It is indeed doubtful that modern humans coming from the Middle East would have had mitochondria better adapted to the colder environment of Europe than Neanderthals, who had spent tens of thousands of years in such a climate (Tattersall and Schwartz 1999; Klein 2003). It is therefore more likely that modern humans' higher technology and higher cognitive abilities (Klein 2003), resulting in better resource processing and environmental exploitation, have allowed them to out-compete Neanderthals, and that mtDNA was selectively neutral in that respect. It should however be kept in mind that our conclusions assume no sex bias in interbreeding rates. Studies of fossil Y chromosome or nuclear DNA would be needed to examine the basis of this assumption, but it seems difficult to imagine why interbreeding between Neanderthal men and modern human females resulting in the incorporation of Neanderthal genes would have been more frequent than the reverse situation.
Even though our model of interaction and competition between Neanderthals and modern humans may not entirely correspond to the reality, it captures two important historical aspects that were neglected in previous studies. The first one is the documented progressive spread of modern humans in Europe (see Figures 1 and 2), and the second is the local and progressive demographic growth of Paleolithic populations, with density-dependent interactions with Neanderthals. The incorporation of these additional sources of realism cannot be handled by current analytical models, but it can be readily integrated into a coalescent simulation framework, showing that it will be possible in the future to predict patterns of molecular diversity among populations or species belonging to a particular ecological network. Given the long period of cohabitation of the two populations in Europe and ample opportunities to interbreed, the absence or extremely low number of admixture events between Neanderthals and modern humans is best explained by intersterility or reduced fitness of hybrid individuals, promoting these populations to the status of different biological species. No interbreeding between the two populations also strongly argues in favor of a complete replacement of previous members of the genus Homo by modern humans and against a multiregional evolution of H. sapiens (Eckhardt et al. 1993; Wolpoff et al. 2000). It thus gives more credit to the RAO hypothesis (Excoffier 2002; Stringer 2002), since some very divergent H. erectus mitochondrial sequences should have also been observed if interbreeding had occurred during the colonization of Eurasia by modern humans from Africa.
Our conclusions about the genetic incompatibility between modern humans and Neanderthals would however be wrong if the absence of Neanderthal mtDNA genes in the current gene pool of modern Europeans were due to some processes that were not incorporated into our model. For instance, a range expansion of Neolithic populations without genetic contacts with Paleolithic populations could have erased both Paleolithic and remaining Neanderthal genes, but as discussed above, there is evidence for a substantial contribution of Paleolithic populations to the current gene pool (Barbujani and Dupanloup 2002; Chikhi et al. 2002; Dupanloup et al. 2004), invalidating this theory. An extremely rapid range expansion of a very large and unsubdivided modern population would also be compatible with an absence of Neanderthal genes despite considerable admixture, like in the scenario shown in Figure 1A (Nordborg 1998; Serre et al. 2004), but the long duration of the replacement process would be difficult to justify in that case. Finally, the occurrence of a cultural or ecological barrier, and not necessarily of a genetic barrier, could have prevented the realization of biologically possible hybridizations. Under this scenario, Neanderthals and early modern humans would have just avoided each other, which is contradicted by the observation of technological exchanges between Neanderthals and Cro-Magnons (e.g., Hublin et al. 1996). Moreover, the fact that the two populations had a very similar economy (Klein 1999, p. 530) indicates they had occupied an overlapping ecological niche and had thus ample opportunities to meet. It therefore seems that our model of subdivided population and progressive range expansion, implying local contacts, competition, and potential hybridization, is quite plausible. One of its merits is also to explain both the replacement of Neanderthals by modern humans through a better exploitation of local resources and the late colonization of Europe by modern humans, which would have been possible only after the emergence of refined Upper Paleolithic technologies giving a competitive edge over Neanderthal industries (Klein 1999, pp. 511–524).
Materials and Methods
Digital map of Europe
The simulated region corresponds to the geographical region encompassing Europe, the Near East and North Africa. It has been modeled as a collection of 7,500 square cells of 2,500 km2 each, arranged on a two-dimensional grid, with contours delimited by seas and oceans. Each cell harbors two demes, one potentially occupied by modern humans (HS) and one potentially occupied by Neanderthals (HN). Given the estimated range distribution of Neanderthals (Klein 2003), HN demes were allowed in only 3,500 cells, mainly located in the lower part of Europe and in the Near East (see Figure 2A). Three land bridges have been artificially added to allow the settlement of Great Britain and Sicily.
Simulation of the colonization of Europe by modern humans
The simulation of the colonization process in Europe is an extension of that described in absence of competition in a homogeneous square world (Ray et al. 2003). At the beginning of the simulation, 1,600 generations ago (corresponding to 40,000 y ago when assuming a generation time of 25 y), the HN demes are all filled at their carrying capacity, KHN, and, in the basic scenario, the population HS is assumed to be restricted to a single deme in the Near East at a position corresponding approximately to the present border between Saudi Arabia and Jordan. Note that alternative locations and a more widespread distribution are also envisioned in other scenarios (see Table 1). This source for the spatial and demographic expansion of modern humans into Europe has been chosen arbitrarily, as its exact origin is still debated (Bocquet-Appel and Demars 2000a; Kozlowski and Otte 2000). Since we model the evolution of mtDNA, we only simulate the spread of females, but we implicitly assume that there are equal numbers of males and females in each deme. The source deme for HS is assumed to be at its carrying capacity KHS of 40 females, corresponding to a density of about 0.06–0.1 individuals per km2 (including males and juveniles), in agreement with density estimates for Pleistocene hunter-gatherers (Steele et al. 1998; Bocquet-Appel and Demars 2000b). HS individuals can then migrate freely to each of the four neighboring HS demes at rate m/4. When one or more HS individuals enter an empty deme, it results in a colonization event, which initiates a local logistic growth process, with intrinsic rate of growth rHS per generation, and with limiting carrying capacity KHS. Interactions between the HS and the HN demes of the same cell are described below in more detail, and their combination with migrations between HS demes results in a wave of advance progressing from the Near East toward Europe and North Africa.
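To make this colonization step concrete, the following minimal sketch implements one generation of logistic growth followed by emigration at rate m split among the four neighboring demes, with empty land demes becoming colonized when they receive migrants. It is not the authors' code: the grid layout, boundary handling, and parameter names are illustrative assumptions.

import numpy as np

K_HS, r_HS, m = 40.0, 0.4, 0.25   # carrying capacity (females), growth rate, emigration rate

def expansion_step(density, land):
    # One generation of the HS range expansion on a 2-D deme grid:
    # logistic growth in occupied land demes, then m*N emigrants split equally
    # among the four neighbors; a land deme receiving migrants is colonized.
    grown = np.where(land, density + r_HS * density * (1.0 - density / K_HS), 0.0)
    emigrants = m * grown
    stay = grown - emigrants
    share = emigrants / 4.0
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        stay += np.roll(share, shift, axis=axis)   # immigrants from the four neighbors
    # Migrants sent into sea cells are simply lost here; np.roll also wraps around
    # the grid edges, which a real map of Europe would mask explicitly.
    return np.where(land, stay, 0.0)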
Demographic model incorporating competition and admixture
We describe here a demographic model of interaction between populations, incorporating competition and interbreeding between individuals of the HN and HS populations, as well as migration between neighboring demes from the same subdivided population. We distinguish here migrations events between HN and HS populations from migrations between neighboring HN or HS populations. We model the former ones as admixture events, whereas the latter ones correspond to true dispersal events. The life cycle of a population at a given generation is as follows: admixture, logistic regulation incorporating competition, followed by migration. This life cycle thus assumes that migration is at the adult stage. In line with previous work (Barbujani et al. 1995), the frequency of admixture events is assumed to be density-dependent. Within a given deme, each of the Ni individuals from the i-th population has a probability
to reproduce successfully with one of the Nj members of the j-th population, and γij represents the probability that such a mating results in a fertile offspring. Alternatively, γij could represent the relative fitness of hybrid individuals or an index of disassortative mating. Following admixture, population densities are then first updated as
Our model of density regulation incorporating competition is based on the Lotka–Volterra interspecific competition model, which is an extension of the logistic growth model (Volterra 1926; Lotka 1932). For each population, a new density N″i is calculated from the former density as
where ri is the intrinsic growth rate of the i-th population, Ki is its carrying capacity, and αij is an asymmetric competition coefficient (Begon et al. 1996, pp. 274–278). An αij value of 1 implies that individuals of the j-th population have as much influence on those of population i as on their own conspecific, or that competition between populations is as strong as competition within a population. Lower values of αij indicate lower levels of competition between populations than within populations; a value of zero implies no competition between individuals from different populations. We have decided here not to fix αij values, but to make them density-dependent as
reflecting the fact that the influence of the members of a population on the other grows with its density. An example of the demographic transition between HN and HS is shown in Figure S2, together with the amount of admixture between the two populations. In the migration phase, each population of each deme can send emigrants to the same population in neighboring demes at rate m. N″i·m emigrants are thus sent outward each generation and distributed equally among the four neighboring demes, as described previously (Ray et al. 2003). If a gene is sent to an occupied deme, the migration event results in gene flow; otherwise, it results in the colonization of a new deme. This latter possibility only exists for the population of modern humans, since we assume that Europe was already fully colonized by Neanderthals. Finally, the densities of the two populations are updated as a balance between logistic growth, migration, and admixture as
where Ii is the number of immigrants received from neighboring demes.
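The life cycle described above can be sketched as follows for a single cell. Because the admixture probability and the density-dependent form of αij appear as equations in the original article and are not reproduced in this copy, the expressions used for them below are placeholder assumptions rather than the authors' formulas; the density regulation follows the standard discrete-time Lotka–Volterra competition update.

import random

def life_cycle(N_hn, N_hs, K_hn=10.0, K_hs=40.0, r_hn=0.4, r_hs=0.4, gamma=0.01):
    # 1. Admixture: each HN individual may reproduce with an HS individual with a
    #    density-dependent probability (placeholder form), succeeding with
    #    probability gamma; scenario A only allows HN female x HS male matings.
    total = N_hn + N_hs
    admixture_events = 0
    if total > 0:
        p = gamma * N_hs / total            # assumed density-dependent admixture probability
        admixture_events = sum(random.random() < p for _ in range(int(N_hn)))

    # 2. Density regulation: Lotka-Volterra competition with density-dependent
    #    competition coefficients (placeholder: alpha_ij = N_j / (N_i + N_j),
    #    so the influence of a population grows with its density).
    def alpha(N_i, N_j):
        return N_j / (N_i + N_j) if (N_i + N_j) > 0 else 0.0

    N_hn_next = N_hn + r_hn * N_hn * (K_hn - N_hn - alpha(N_hn, N_hs) * N_hs) / K_hn
    N_hs_next = N_hs + r_hs * N_hs * (K_hs - N_hs - alpha(N_hs, N_hn) * N_hn) / K_hs

    # 3. Migration between neighboring demes of the same population is applied
    #    afterwards (see the range-expansion sketch above).
    return max(N_hn_next, 0.0), max(N_hs_next, 0.0), admixture_events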
Parameter calibration
We have calibrated the parameters of our simulation model from available paleodemographic information and from the estimated colonization time of Europe by modern humans. Estimates of the total number of hunter-gatherers living before Neolithic times range between 5 and 10 million (Coale 1974; Hassan 1981; Weiss 1984; Landers 1992; Chikhi et al. 2002), of whom about 1 million individuals were living in Europe. Taking a carrying capacity KHS of 40 females would imply the presence of 220,000 effective mtDNA genes in the 5,500 demes occupied by modern humans in Europe and the Middle East. Since this number represents only females, it was multiplied by four to include men and juveniles, leading to a total of about 880,000 HS individuals living over Europe. This value of KHS corresponds to a density of 0.064 individuals per square kilometer, which is close to the value (0.04) used in some previous simulations of modern humans (Rendine et al. 1986; Barbujani et al. 1995) and well within the range obtained from actual hunter-gatherer groups (0.01–0.35; Binford 2001) or that estimated for ancient hunter-gatherers (0.015–0.2; Steele et al. 1998; Bocquet-Appel and Demars 2000b). The time required for the colonization of Europe by modern humans is the other piece of information that was used to calibrate the growth rate, rHS, the rate of migration, mHS, and the Neanderthal carrying capacity (KHN), as these three parameters have an influence on the speed of the migration wave (Fisher 1937; Skellam 1951). Since modern humans arrived in Europe approximately 40,000 y ago and occupied the whole continent by 27,500 y before present (BP) (Bocquet-Appel and Demars 2000b), the colonization process lasted approximately 500 generations, assuming an average generation time of 25 to 30 y (Tremblay and Vezina 2000; Helgason et al. 2003).
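The figures quoted above can be checked with a few lines of arithmetic (variable names are ours, used only for illustration):

K_HS = 40                     # breeding females per deme
demes = 5_500                 # demes occupied by modern humans in Europe and the Middle East
area_per_deme = 2_500         # km2 per deme

females = K_HS * demes                          # 220,000 effective mtDNA genes
individuals = 4 * females                       # 880,000 including males and juveniles
density = individuals / (demes * area_per_deme)
print(females, individuals, round(density, 3))  # 220000 880000 0.064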
Scenarios of modern human range expansion in Europe
Among the many sets of parameter values leading to the appropriate colonization time and the complete disappearance of Neanderthals, we have retained the following scenarios. Scenario A: Origin of HS in a single deme of the Near East at the border between Saudi Arabia and Jordan, mHS = mHN = 0.25, rHN = 0.4, KHN = 10, rHS = 0.4, and KHS = 40. Note that a value of KHN of ten corresponds to a total of about 140,000 Neanderthals over Europe (0.016 individuals per km2), which is of the same order of magnitude as the rare available estimates (250,000 Neanderthals, Biraben 2003). Under this scenario, we have only considered admixture events between HN females and HS males, such that γHS,HN = 0. Eight alternative scenarios have been considered by using extreme values of the parameters of the model (m, r, K, European colonization time, place, and size of initial HS population). Scenario B is identical to scenario A, except that the HS origin is located in Iran. Scenario C uses the same parameters as scenario A, but the HS source is more diffuse and corresponds to a subdivided population of 25 demes (1,000 breeding females) surrounding the source deme defined in scenario A. Scenario D is identical to A, except that the initial HS population is much more numerous (14,000 breeding females located in 1,400 demes) and occupies the entire southern part of the HN occupation zone. Scenario E is identical to A, but rHS is here equal to 0.8, which is the maximum growth rate estimated for the Paleolithic human population (Ammerman and Cavalli-Sforza 1984; Young and Bettinger 1995). Scenario F is identical to A, except that mHS is here much higher and equal to 0.5, implying that 50% of the women are recruited from adjacent demes. The carrying capacity of Neanderthals KHN had to be readjusted for scenarios E and F, which may appear extreme, in order to maintain a colonization time of about 500 generations. It was indeed set to 25, giving a total of 350,000 HN individuals over Europe. Scenario G is identical to A, except that rHS is here equal to 0.6 and mHS is equal to 0.35, leading to a shorter colonization time of the European continent by HS. Under scenario G, the colonization time of Europe is approximately 8,000 y, which would correspond to the minimum colonization time estimated from direct fossil evidence, since the first European HS fossil is dated to about 36,000 y BP (Trinkaus et al. 2003), and the latest HN is dated around 28,000 y BP (Smith et al. 1999). Scenario H is identical to A, but admixture can occur between HN males and HS females as well, such that γHS,HN = γHN,HS.
Finally, scenario I uses the same parameters as A, but a different demographic model. When a cell is colonized by HS, it is directly filled at KHS with an initial proportion γ of Neanderthals. Admixture thus occurs when demographic equilibrium is already reached, and not during the demographic growth as in the other models.
While the γ values are the true parameters of our model, they may not be very telling per se, and we have therefore chosen to quantify levels of interbreeding between populations using another parameterization, which is the average number of admixture events per deme between modern humans and Neanderthals. By performing a large series of simulations, we could find the values of γ leading to a given average number of admixture events per deme (e.g., 1/500, 1/100, 1/10, 1, 2, etc.). For instance, a value of 1/10 means that one admixture event occurred on average in one deme out of ten during the whole cohabitation period between HN and HS.
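In practice, this mapping can be obtained with a simple calibration sweep; the sketch below is hypothetical (simulate is a stand-in for the full demographic simulator, assumed to return the realized mean number of admixture events per deme for a given γ):

def calibrate_gamma(simulate, target, candidate_gammas, n_reps=10):
    # For each candidate gamma, average the realized number of admixture events
    # per deme over n_reps runs, then return the gamma whose realized rate is
    # closest to the target (e.g. 1/10 for one event per ten demes on average).
    realized = {g: sum(simulate(g) for _ in range(n_reps)) / n_reps
                for g in candidate_gammas}
    return min(realized, key=lambda g: abs(realized[g] - target))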
Coalescent simulations
For each scenario and for different interbreeding values, γij, the demography of the more than 14,000 demes is thus simulated for 1,600 generations. The density of all demes, the number of migrants exchanged between demes from the same population, and the number of admixture events resulting in gene movements between Neanderthals and modern humans are recorded in a database. This demographic database is then used to simulate the genealogy of samples of 40 genes drawn from 100 demes, representing a total of 4,000 modern human genes distributed over all Europe and corresponding approximately to the current sampling effort of European mtDNA sequences (Richards et al. 1996; Handt et al. 1998). The coalescent simulations proceed as described previously (Ray et al. 2003; Currat et al. 2004). The average proportion of sampled genes whose ancestors can be traced to some Neanderthal lineages was then computed over 10,000 simulations. The likelihood of each interbreeding coefficient, γij, is estimated for the different scenarios by the proportion of 10,000 simulations that lead to a Most Recent Common Ancestor of all 4,000 sampled mtDNA sequences being of modern human origin.
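The two summary statistics described here can be computed as follows; simulate_genealogy is a stand-in for the coalescent simulator and is assumed to return, for one simulated genealogy, the number of the 4,000 sampled genes that trace back to a Neanderthal lineage:

def summarize(simulate_genealogy, gamma, n_sims=10_000, sample_size=4_000):
    neanderthal_counts = [simulate_genealogy(gamma) for _ in range(n_sims)]
    # Average proportion of sampled genes with Neanderthal ancestry.
    mean_proportion = sum(neanderthal_counts) / (n_sims * sample_size)
    # Likelihood of gamma: fraction of simulations in which the MRCA of all
    # sampled mtDNA sequences is of modern human origin (no Neanderthal lineage).
    likelihood = sum(c == 0 for c in neanderthal_counts) / n_sims
    return mean_proportion, likelihood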
Supporting Information
Figure S1
Proportion of Neanderthal Lineages in the European Population as a Function of the Average Number of Admixture Events per Deme between HN and HS:
These values are given for the nine scenarios (A–I) listed in Table 1, and for a new scenario A+Neol. This latter scenario is similar to A, except that the carrying capacity of the modern humans is increased by a factor 250 at the time of the Neolithic transition (320 generations BP). The influence of this demographic increase on the simulated HN proportion is very weak, as shown on this figure.
Figure S2
Evolution of the densities of demes HN (in black) and HS (in gray) within a cell simulated under demographic scenario A for γij = 0.4. The cell is colonized by HS at time −1520 (0 = present). The thin black line with white circles represents the distribution of admixture events, whose numbers are reported on the right axis.
Acknowledgments
Thanks to Nicolas Ray and Pierre Berthier for computing assistance. We are grateful to Monty Slatkin, Arnaud Estoup, and Grant Hamilton for their critical reading of the manuscript, and to four anonymous reviewers for their helpful comments. This work was supported by a Swiss NSF grant No 3100A0–100800 to LE.
Abbreviations
BP
before present
HN
Homo sapiens neanderthalensis
HS
Homo sapiens sapiens
mtDNA
mitochondrial DNA
RAO
Recent African Origin Model
Conflicts of interest. The authors have declared that no conflicts of interest exist.
Author contributions. MC and LE conceived and designed the experiments. MC performed the experiments. MC and LE analyzed the data. MC and LE wrote the paper.
Academic Editor: David Penny, Massey University
Citation: Currat M, Excoffier L (2004) Modern humans did not admix with Neanderthals during their range expansion into Europe. PLoS Biol 2(12): e421.
|
The first is the documented progressive spread of modern humans in Europe (see Figures 1 and 2), and the second is the local and progressive demographic growth of Paleolithic populations, with density-dependent interactions with Neanderthals. The incorporation of these additional sources of realism cannot be handled by current analytical models, but it can be readily integrated into a coalescent simulation framework, showing that it will be possible in the future to predict patterns of molecular diversity among populations or species belonging to a particular ecological network. Given the long period of cohabitation of the two populations in Europe and ample opportunities to interbreed, the absence or extremely low number of admixture events between Neanderthals and modern humans is best explained by intersterility or reduced fitness of hybrid individuals, promoting these populations to the status of different biological species. The absence of interbreeding between the two populations also strongly argues in favor of a complete replacement of previous members of the genus Homo by modern humans and against a multiregional evolution of H. sapiens (Eckhardt et al. 1993; Wolpoff et al. 2000). It thus gives more credit to the RAO hypothesis (Excoffier 2002; Stringer 2002), since some very divergent H. erectus mitochondrial sequences should also have been observed if interbreeding had occurred during the colonization of Eurasia by modern humans from Africa.
Our conclusions about the genetic incompatibility between modern humans and Neanderthals would however be wrong if the absence of Neanderthal mtDNA genes in the current gene pool of modern Europeans was due to some processes that were not incorporated into our model.
|
no
|
Anthropology
|
Did Neanderthals interbreed with modern humans?
|
no_statement
|
"neanderthals" did not "interbreed" with "modern" "humans".. "modern" "humans" did not "interbreed" with "neanderthals".
|
https://www.sci.news/othersciences/anthropology/article00526.html
|
Scientists Say Humans Did not Interbreed with Neanderthals ...
|
Scientists Say Humans Did not Interbreed with Neanderthals
Cambridge researchers have raised questions about the theory that Neanderthals and modern humans at some point interbred. Their findings show that common ancestry, not hybridization, better explains the average 1-4 per cent DNA that those of European and Asian descent share with Neanderthals.
Early humans in a cave, left, and Neanderthal family, right (Charles Knight / AMNH / The Natural History Museum)
In the last two years, a number of studies have suggested that modern humans and Neanderthals had at some point interbred. Genetic evidence shows that on average Eurasians and Neanderthals share between 1-4 per cent of their DNA. In contrast, Africans have almost none of the Neanderthal genome. The previous studies concluded that these differences could be explained by hybridization, which occurred as modern humans exited Africa and bred with the Neanderthals who already inhabited Europe.
The scientists found that common ancestry, without any hybridization, explains the genetic similarities between Neanderthals and modern humans. In other words, the DNA that Neanderthal and modern humans share can all be attributed to their common origin, without any recent influx of Neanderthal DNA into modern humans.
“Our work shows clearly that the patterns currently seen in the Neanderthal genome are not exceptional, and are in line with our expectations of what we would see without hybridization,” said lead author Dr Andrea Manica of the University of Cambridge. “So, if any hybridization happened – it’s difficult to conclusively prove it never happened – then it would have been minimal and much less than what people are claiming now.”
Neanderthals and modern humans once shared a common ancestor who is thought to have spanned Africa and Europe about half a million years ago. Just as there are very different populations across Europe today, populations of that common ancestor would not have been completely mixed across continents, but rather closer populations would have been more genetically similar to each other than populations further apart.
Then, about 350-300 thousand years ago, the European range and the African range became separated. The European range evolved into Neanderthal, the African range eventually turned into modern humans. However, because the populations within each continent were not freely mixing, the DNA of the modern human population in Africa that were ancestrally closer to Europe would have retained more of the ancestral DNA that is also shared with Neanderthals.
On this basis, the scientists created a model to determine whether the differences in genetic similarities with Neanderthal among modern human populations, which had been attributed to hybridization, could be down to the proximity of modern humans in northern Africa to Neanderthals.
By examining the different genetic makeup among modern human populations, the scientists’ model was able to infer how much genetic similarity there would have been between distinct populations within a continent. The researchers then simulated a large number of populations representing Africa and Eurasia over the last half a million years, and estimated how much similarity would be expected between a random Neanderthal individual and modern humans in Africa and Eurasia.
The scientists concluded that when modern humans expanded out of Africa 60-70K years ago, they would have brought out that additional genetic similarity with them, making Europeans and Asians more similar to Neanderthals than Africans are on average – undermining the theory that hybridization, and not common ancestry, explained these differences.
_______
Bibliographic information: Manica et al. 2012. Effect of ancient population structure on the degree of polymorphism shared between modern human populations and ancient hominins. PNAS, accepted for publication.
|
Scientists Say Humans Did not Interbreed with Neanderthals
Cambridge researchers have raised questions about the theory that Neanderthals and modern humans at some point interbred. Their findings show that common ancestry, not hybridization, better explains the average 1-4 per cent DNA that those of European and Asian descent share with Neanderthals.
Early humans in a cave, left, and Neanderthal family, right (Charles Knight / AMNH / The Natural History Museum)
In the last two years, a number of studies have suggested that modern humans and Neanderthals had at some point interbred. Genetic evidence shows that on average Eurasians and Neanderthals share between 1-4 per cent of their DNA. In contrast, Africans have almost none of the Neanderthal genome. The previous studies concluded that these differences could be explained by hybridization, which occurred as modern humans exited Africa and bred with the Neanderthals who already inhabited Europe.
The scientists found that common ancestry, without any hybridization, explains the genetic similarities between Neanderthals and modern humans. In other words, the DNA that Neanderthal and modern humans share can all be attributed to their common origin, without any recent influx of Neanderthal DNA into modern humans.
“Our work shows clearly that the patterns currently seen in the Neanderthal genome are not exceptional, and are in line with our expectations of what we would see without hybridization,” said lead author Dr Andrea Manica of the University of Cambridge. “So, if any hybridization happened – it’s difficult to conclusively prove it never happened – then it would have been minimal and much less than what people are claiming now.”
Neanderthals and modern humans once shared a common ancestor who is thought to have spanned Africa and Europe about half a million years ago. Just as there are very different populations across Europe today, populations of that common ancestor would not have been completely mixed across continents, but rather closer populations would have been more genetically similar to each other than populations further apart.
Then, about 350-300 thousand years ago, the European range and the African range became separated.
|
no
|
Anthropology
|
Did Neanderthals interbreed with modern humans?
|
no_statement
|
"neanderthals" did not "interbreed" with "modern" "humans".. "modern" "humans" did not "interbreed" with "neanderthals".
|
https://www.nytimes.com/1997/07/11/us/neanderthal-dna-sheds-new-light-on-human-origins.html
|
NEANDERTHAL DNA SHEDS NEW LIGHT ON HUMAN ORIGINS ...
|
NEANDERTHAL DNA SHEDS NEW LIGHT ON HUMAN ORIGINS
TimesMachine is an exclusive benefit for home delivery and digital subscribers.
A hauntingly brief but significant message extracted from the bones of a Neanderthal who lived at least 30,000 years ago has cast new light both on the origin of humans and Neanderthals and on the long disputed relationship between the two.
The message consists of a short strip of the genetic material DNA that has been retrieved and deciphered despite the age of the specimen. It indicates that Neanderthals did not interbreed with the modern humans who started to supplant them from their ancient homes about 50,000 years ago.
The message also suggests, said the biologists who analyzed it, that the Neanderthal lineage is four times older than the human lineage, meaning that Neanderthals split off much earlier from the hominid line than did humans.
The finding, made by a team of scientists led by Dr. Svante Paabo of the University of Munich in Germany, marks the first time that decodable DNA has been extracted from Neanderthal remains and is the oldest hominid DNA so far retrieved. The DNA was extracted from the original specimen of Neanderthals, found in the Neander valley near Dusseldorf, Germany, in 1856 and is now in the Rheinisches Landesmuseum in Bonn.
''This is obviously a fantastic achievement,'' said Dr. Chris Stringer, an expert on Neanderthals at the Museum of Natural History in London.
Many anthropologists had tried to extract DNA from Neanderthal bones without success. ''Clearly, it's a coup,'' Dr. Maryellen Ruvolo, an anthropologist at Harvard University, said of the Munich team.
The Neanderthals were large, thick-boned individuals with heavy brows and a brain case as large as that of modern humans but stacked behind the face instead of on top of it. They lived in Europe and western Asia from 300,000 years ago, dying out about 270,000 years later.
For the latter part of that period they clearly coexisted with modern humans but the relationship between the two groups, whether fraternal or genocidal, has been debated ever since the first Neanderthal was discovered. Early humans and Neanderthals may have interbred, as some scientists contend, with modern Europeans being descended from both; or the two hominid lines may have remained distinct, with humans displacing and probably slaughtering their rivals.
The new finding, reported in today's issue of the journal Cell, comes down firmly on the side of Neanderthals having been a distinct species that contributed nothing to the modern human gene pool. Calling it ''an incredible breakthrough in studies of human evolution,'' Dr. Stringer said the results showed Neanderthals ''diverged away from our line quite early on, and this reinforces the idea that they are a separate species from modern humans.''
Dr. Ian Tattersall, a paleontologist at the American Museum of Natural History in Manhattan, said the finding ''fits well into my view of the fossil record,'' although there was still a ''very tenacious notion of Neanderthals having given rise to Homo sapiens or interbred with them.''
The new work was also praised by scientists who study ancient DNA, a lively new field, which has included reports of DNA millions of years old being retrieved from dinosaur bones, fossil magnolia leaves and insects entombed in amber. Although these reports have appeared in leading scientific journals like Science and Nature, other scientists have been unable to reproduce them. In at least one case the supposed fossil DNA was contaminated by contemporary human DNA.
But a leading critic of these claims of ancient DNA extraction, Tomas Lindahl of the Imperial Cancer Research Fund in England, has given the new work his seal of approval, calling it ''arguably the greatest achievement so far in the field of ancient DNA research.''
The Munich team took great pains to verify that it had a genuine sample of Neanderthal DNA. Working in sterile conditions, team members isolated it in two different laboratories and distinguished it from the human DNA, which contaminated the bones. Their work is ''compelling and convincing,'' Dr. Lindahl wrote in a commentary in today's Cell.
The DNA recovered from the Neanderthal is known as mitochondrial DNA, a type especially useful for monitoring human evolution. Mitochondria are tiny, bacteria-like organelles within the cell and possess their own DNA. They exist in eggs, but not in sperm, and so are passed down through the female line. Unlike the main human genes on the chromosomes, which get shuffled each generation, the only change to mitochondrial DNA is the accidental change caused by copying errors, radiation or other mishaps.
Once a change, or mutation, becomes established in mitochondrial DNA, it gets passed on to all of that woman's descendants. Tracking mutations is a powerful way of constructing family trees. The branch points on such a family tree can also be dated with some plausibility if at least one of them can be matched to known event in the fossil record, like the parting of the human and chimpanzee lines.
The Munich team focused on a particularly variable region of mitochondrial DNA and reconstructed the Neanderthal version of it, 378 units in length. Comparing it with modern human DNA from five continents, they found it differed almost equally from all of them, signaling no special relationship with contemporary Europeans, as would have been expected if Neanderthals and modern humans interbred.
In addition, the family tree of Neanderthal mutations, when compared with those of the chimpanzee and human, yielded a distinctive pattern of variations. In the authors' interpretation, Neanderthals branched off the hominid line first, followed by humans much later.
According to the fossil and archeological record, humans and Neanderthals diverged at least 300,000 years ago. The mitochondrial DNA evidence agrees well with this date, the authors say, since individual genes would be expected to diverge before the divergence of populations.
From fossil evidence the human and chimpanzee lines are thought to have diverged some four million to five million years ago, a date that helps anchor the tree drawn from the new genetic data. The split between Neanderthal and human mitochondrial DNA, which marks the start of the split between the human and Neanderthal lineages, would have occurred between 550,000 and 690,000 years ago, the authors say, while the individual from whom all modern human mitochondrial DNA is descended, would have lived 120,000 to 150,000 years ago.
Acknowledging the uncertainty in these dates, the authors say they show at least that Neanderthal lineage is four times as old as the human lineage, as measured by mitochondrial DNA.
The Munich team's report ranges over the three treacherous fields of paleoanthropology, ancient DNA and the genetics of human evolution. Dr. Svante Paabo has criticized many claims of ancient DNA and sought to lay out his methods with care. That was part of the reason for choosing to publish his work in Cell, which specializes in rigorous molecular biology, rather than more widely read journals like Science and Nature.
Dr. Mark Stoneking of Pennsylvania State University, an expert on human evolution and a member of the Munich team, said Cell offered more space to describe the team's methods. Also, Dr. Stoneking said, ''this was in the background of Svante despairing over some of the claims of ancient DNA published in Science and Nature.''
The interpretation of the Neanderthal mitochondrial data may be open to debate on the dating.
''Deriving these dates involves making a lot of supposition about the neutrality of the mitochondrial genome and the speed of accretion of new changes,'' Dr. Tattersall said.
But Dr. Ruvolo said the Munich team's methods seemed sound and its interpretation of the data was likely to be accepted. ''There could be minor quibbles over the dates but the overall properties of the tree won't change,'' she said.
A version of this article appears in print on , Section A, Page 1 of the National edition with the headline: NEANDERTHAL DNA SHEDS NEW LIGHT ON HUMAN ORIGINS. Order Reprints | Today’s Paper | Subscribe
|
Early humans and Neanderthals may have interbred, as some scientists contend, with modern Europeans being descended from both; or the two hominid lines may have remained distinct, with humans displacing and probably slaughtering their rivals.
The new finding, reported in today's issue of the journal Cell, comes down firmly on the side of Neanderthals having been a distinct species that contributed nothing to the modern human gene pool. Calling it ''an incredible breakthrough in studies of human evolution,'' Dr. Stringer said the results showed Neanderthals ''diverged away from our line quite early on, and this reinforces the idea that they are a separate species from modern humans.''
Dr. Ian Tattersall, a paleontologist at the American Museum of Natural History in Manhattan, said the finding ''fits well into my view of the fossil record,'' although there was still a ''very tenacious notion of Neanderthals having given rise to Homo sapiens or interbred with them.''
The new work was also praised by scientists who study ancient DNA, a lively new field, which has included reports of DNA millions of years old being retrieved from dinosaur bones, fossil magnolia leaves and insects entombed in amber. Although these reports have appeared in leading scientific journals like Science and Nature, other scientists have been unable to reproduce them. In at least one case the supposed fossil DNA was contaminated by contemporary human DNA.
But a leading critic of these claims of ancient DNA extraction, Tomas Lindahl of the Imperial Cancer Research Fund in England, has given the new work his seal of approval, calling it ''arguably the greatest achievement so far in the field of ancient DNA research.''
The Munich team took great pains to verify that it had a genuine sample of Neanderthal DNA. Working in sterile conditions, team members isolated it in two different laboratories and distinguished it from the human DNA, which contaminated the bones. Their work is ''compelling and convincing,'' Dr. Lindahl wrote in a commentary in today's Cell.
|
no
|
Anthropology
|
Did Neanderthals interbreed with modern humans?
|
no_statement
|
"neanderthals" did not "interbreed" with "modern" "humans".. "modern" "humans" did not "interbreed" with "neanderthals".
|
https://www.telegraph.co.uk/news/science/science-news/9474109/Neanderthals-did-not-interbreed-with-humans-scientists-find.html
|
Neanderthals did not interbreed with humans, scientists find
|
Neanderthals did not interbreed with humans, scientists find
The genetic traits between humans and Neanderthals are more likely from a shared ancestry rather than interbreeding, a British study has suggested.
ByTelegraph Reporters 14 August 2012 • 8:45am
Their analysis contradicts recent studies that found inter-species mating, known as hybridisation, probably occurred. Credit: Photo: ALAMY
Cambridge University researchers concluded that the DNA similarities were unlikely to be the result of human-Neanderthal sex during their 15,000-year coexistence in Europe.
People living outside Africa share as much as four per cent of their DNA with Neanderthals, a cave-dwelling species with muscular short arms and legs and a brain slightly larger than ours.
The Cambridge researchers examined demographic patterns suggesting that humans were far from intimate with the species they displaced in Europe almost 40,000 years ago.
The study into the genomes of the two species, found a common ancestor 500,000 years ago would be enough to account for the shared DNA.
Their analysis, published in the journal Proceedings of the National Academy of Sciences (PNAS), contradicts recent studies that found inter-species mating, known as hybridisation, probably occurred.
Dr Andrea Manica, who led the study, said: "To me the interbreeding question is not whether there was hybridisation but whether there was any hybridisation that affected the subsequent evolution of humans. I think this is very, very unlikely.
"Our work shows clearly the patterns currently seen in the Neanderthal genome are not exceptional, and are in line with our expectations of what we would see without hybridisation.
"So, if any hybridisation happened then it would have been minimal and much less than what people are claiming now."
Evidence has shown that Neanderthals were driven into extinction by humans who were more efficient at finding food and multiplied at a faster rate.
A previous study in 2010 suggested that interspecies liaisons near the Middle East resulted in Neanderthal genes first entering humans 70,000 years ago.
Modern non-Africans share more with Neanderthals than Africans, supporting the claim that the mixing occurred when the first early humans left Africa to populate Europe and Asia.
The existence of a 500,000-year-old shared ancestor that predates the origin of Neanderthals provides a better explanation for the genetic mix.
Diversity within this ancestral species meant that northern Africans were more genetically similar to their European counterparts than southern Africans through geographic proximity.
This likeness persisted over time to account for the overlap with the Neanderthal genome we see in modern people today.
Differences between populations can be explained by common ancestry, Dr Manica said.
"The idea is that our African ancestors would not have been a homogeneous, well-mixed population but made of several populations in Africa with some level of differentiation, in the way right now you can tell a northern and southern European from their looks," she said.
“Based on common ancestry and geographic differences among populations within each continent, we would predict out of Africa populations to be more similar to Neanderthals than their African counterparts – exactly the patterns that were observed when the Neanderthal genome was sequenced, but this pattern was attributed to hybridisation.
"Hopefully, everyone will become more cautious before invoking hybridisation, and start taking into account that ancient populations differed from each other probably as much as modern populations do.”
Northern Africans would be more similar to Europeans and ancient similarity stayed because there wasn't enough mixing between northern and southern Africans.
"Population diversity, known as substructure, can't explain data on the shared genes," said David Reich, a professor of genetics at Harvard Medical School in Boston, who authored the 2010 study.
"We have ruled out the possibility that ancient substructure can explain all the evidence of greater relatedness of Neanderthals to non-Africans than to Africans," he added.
Dr Manica said hybridisation between Neanderthals and humans can never be disproved entirely.
|
Neanderthals did not interbreed with humans, scientists find
The genetic traits between humans and Neanderthals are more likely from a shared ancestry rather than interbreeding, a British study has suggested.
ByTelegraph Reporters 14 August 2012 • 8:45am
Their analysis contradicts recent studies that found inter-species mating, known as hybridisation, probably occurred. Credit: Photo: ALAMY
Cambridge University researchers concluded that the DNA similarities were unlikely to be the result of human-Neanderthal sex during their 15,000-year coexistence in Europe.
People living outside Africa share as much as four per cent of their DNA with Neanderthals, a cave-dwelling species with muscular short arms and legs and a brain slightly larger than ours.
The Cambridge researchers examined demographic patterns suggesting that humans were far from intimate with the species they displaced in Europe almost 40,000 years ago.
The study into the genomes of the two species, found a common ancestor 500,000 years ago would be enough to account for the shared DNA.
Their analysis, published in the journal Proceedings of the National Academy of Sciences (PNAS), contradicts recent studies that found inter-species mating, known as hybridisation, probably occurred.
Dr Andrea Manica, who led the study, said: "To me the interbreeding question is not whether there was hybridisation but whether there was any hybridisation that affected the subsequent evolution of humans. I think this is very, very unlikely.
"Our work shows clearly the patterns currently seen in the Neanderthal genome are not exceptional, and are in line with our expectations of what we would see without hybridisation.
"So, if any hybridisation happened then it would have been minimal and much less than what people are claiming now. "
Evidence has shown that Neanderthals were driven into extinction by humans who were more efficient at finding food and multiplied at a faster rate.
|
no
|
Anthropology
|
Did Neanderthals interbreed with modern humans?
|
no_statement
|
"neanderthals" did not "interbreed" with "modern" "humans".. "modern" "humans" did not "interbreed" with "neanderthals".
|
https://www.pbs.org/wgbh/evolution/educators/course/session5/explain_a.html
|
Evolution: Online Course for Teachers: Session 5- Explain Part A
|
Human evolution is believed to have occurred over the past six million years. As you explore 14 hominid species in the Origins of Humankind Web activity, jot down the answers to these questions:
When did bipedalism evolve in time (mya)? Which species was the first early bipedal hominid? What specific evidence supports bipedalism for early hominids? Which early hominid fossils provide the strongest evidence of bipedalism?
When did the first evidence of tool use appear? In what species?
Which was the first hominid to leave evidence of culture? What clues did hominids leave behind that reflected their cognitive abilities?
What cultural adaptations allowed Homo erectus to expand beyond tropical and subtropical environments into the cooler climate of the temperate zone?
Which hominids coexisted in time? Why and how do you think this was possible?
New technologies have allowed paleontologists to reexamine earlier fossil finds. Recently scientists were able to recover mitochondrial DNA from Neanderthal skeletons. That molecular evidence, based on a very small sample, differed from modern human DNA and suggests that Neanderthals and modern humans probably did not interbreed. There are still many questions about the details of the human phylogeny, especially with ongoing announcements of new species, some of which are believed to occupy new branches on an already bushy evolutionary tree. In paleontology, as in any scientific field, we need to reexamine and revise old hypotheses as new evidence emerges.
|
Human evolution is believed to have occurred over the past six million years. As you explore 14 hominid species in the Origins of Humankind Web activity, jot down the answers to these questions:
When did bipedalism evolve in time (mya)? Which species was the first early bipedal hominid? What specific evidence supports bipedalism for early hominids? Which early hominid fossils provide the strongest evidence of bipedalism?
When did the first evidence of tool use appear? In what species?
Which was the first hominid to leave evidence of culture? What clues did hominids leave behind that reflected their cognitive abilities?
What cultural adaptations allowed Homo erectus to expand beyond tropical and subtropical environments into the cooler climate of the temperate zone?
Which hominids coexisted in time? Why and how do you think this was possible?
New technologies have allowed paleontologists to reexamine earlier fossil finds. Recently scientists were able to recover mitochondrial DNA from Neanderthal skeletons. That molecular evidence, based on a very small sample, differed from modern human DNA and suggests that Neanderthals and modern humans probably did not interbreed. There are still many questions about the details of the human phylogeny, especially with ongoing announcements of new species, some of which are believed to occupy new branches on an already bushy evolutionary tree. In paleontology, as in any scientific field, we need to reexamine and revise old hypotheses as new evidence emerges.
|
no
|
Entertainment
|
Did Orson Welles' 'War of the Worlds' broadcast cause a real-life panic?
|
yes_statement
|
"orson" welles' 'war of the worlds' "broadcast" "caused" a "real"-"life" "panic".. the 'war of the worlds' "broadcast" by "orson" welles resulted in a genuine "panic" among listeners.
|
https://www.myjournalcourier.com/news/article/The-day-the-world-8216-ended-8217-Radio-show-13347272.php
|
The day the world 'ended': Radio show 80 years ago pushed panic ...
|
Years before the term “fake news” was coined, a seemingly innocent radio broadcast demonstrated the real definition — and caused panic in the streets.
Today marks the 80th anniversary of the national radio broadcast of “The War of the Worlds,” which was misunderstood by many listeners as an actual attack by Martians. An estimated 9 million to 12 million Americans were frightened by the broadcast, including some in Jacksonville.
The broadcast on CBS Radio was a dramatic interpretation of the 1898 novel of the same name by British author H.G. Wells. In an era before television, radio was a prime form of entertainment and thousands of Americans tuned in to hear news bulletins, live music, political addresses and dramatic readings.
“The War of the Worlds” was the 17th broadcast of the CBS series “The Mercury Theatre on the Air.” Its ratings, though, sorely lagged behind the popular NBC program “The Chase and Sanborn Hour,” which featured ventriloquist Edgar Bergen, in its 8 p.m. Sunday time slot.
The Mercury Theatre was a non-sponsored show, meaning there were no commercials in the hour-long slot. Producers at CBS, as well as actor Orson Welles, who is best known for his 1941 classic “Citizen Kane,” created an adaptation of Wells’ work with a setting in New Jersey against a backdrop of simulated live news bulletins.
As listeners tuned in that evening, they heard Welles introduce the play and read a weather report before switching to orchestra music, said to be live from New York. The fictional nature of the broadcast was announced before, and as many as four times during the program, but many people apparently never heard it.
As a result, some listeners were surprised when the music shifted to a breaking news bulletin that “at 20 minutes before 8, Central Time, Professor Farrell of the Mount Jennings Observatory” in Chicago reported “several explosions of incandescent gas, occurring at regular intervals on the planet Mars.”
The report claimed “the spectroscope indicates the gas to be hydrogen and moving toward the Earth with enormous velocity.”
•••
The music resumed before more news bulletins and interviews, along with a “live” news report from Grover’s Mill, New Jersey, where a meteorite reportedly had crashed. A shocked reporter said, “I hardly know where to begin. … I guess that’s the thing buried in front of me, half-buried in its vast pit.”
Minutes later, the same announcer breathlessly reported the emergence of a Martian and told the audience, “Ladies and gentlemen, this is the most terrifying thing I have ever witnessed. … Someone’s crawling out of the hollow top.” The creature was said to have a “V-shaped” mouth with “saliva dripping from its rimless lips that seemed to quiver and pulsate.”
Sounds of screams and explosions followed, with the announcer screaming “the whole field’s caught fire … it’s spreading everywhere.” Since there were no breaks in the show for nearly a half-hour, the suspense only built as more news bulletins told the story of a spreading Martian attack.
In succeeding reports, Martians were said to have destroyed New Jersey state militia, power stations, infrastructure and communications before spreading into New York City, where more destruction and havoc followed.
Despite the repeated announcements of the true nature of the broadcast, “The War of the Worlds” threw many into hysteria. Some raced into the streets while others gathered around family or headed to church for prayer in their “final hours.”
Law enforcement officials were up in arms as switchboards lit up nationwide, particularly in the East. Scattered suicide attempts were reported, while one young woman in New York reportedly broke her arm as she fell as she tried to flee.
•••
Even in Jacksonville, there was alarm.
In a 2013 interview, Lou Ellen Lemmons, a former Jacksonville resident who now is 100 years old, remembered hearing the broadcast during her job at Winston’s Cafe, a local diner.
The manager quietly told the staff, who eagerly awaited updates as the broadcast told of the terror. Lemmons said she called an aunt to warn her of the invasion and recalled the relief in the room when they learned the true story, saying “we were royally duped.”
Lemmons noted that some co-workers covered their embarrassment by claiming they “never really believed” the story.
•••
Though the hysteria was largely in the East, there was plenty of alarm elsewhere in Illinois. Southeast of Peoria in Mackinaw, a minister ended his Sunday evening services early, telling his congregation that “an alarming news broadcast has just come over the radio, and I suggest that you all go to your homes, where you can better keep in touch with events.”
In Decatur, the switchboard of the daily Herald & Review was “kept busy for nearly 45 minutes” from people fearing an invasion from Mars. One girl was reported to be “hysterical and under the care of a physician.” Switchboard operators in small towns also were overwhelmed, including in the Montgomery County seat of Hillsboro, where the operator “said she could not handle all the calls for a time.”
In Bloomington, the Pantagraph reported that “for several minutes (its) switchboard was swamped with calls,” though apparently the callers kept their wits about them. The paper added that “there was no known panicky feeling in Bloomington” though “one woman, who later heard the explanation of the skit, called back to apologize for having called in the first place.”
To the south in Alton, a railroad turntable operator named John Maxey was reported to have been “uneasy by the realism” of the broadcast and “even went out of doors to peer at the eastern sky for signs of the conflagration (or) catastrophe.”
Though the episode remains legendary in American culture, recent researchers argue that reaction to “The War of the Worlds” has been embellished, based on the low ratings of The Mercury Theatre as well as letters received by CBS, The Mercury Theatre staff and the Federal Communications Commission. Of the 1,770 letters documented by CBS, some 1,086 were positive, as were 91 percent of mailings to The Mercury Theatre production staff.
Among the compliments was a note to the FCC from a 12-year-old Rockford boy, who wrote, “I enjoyed the broadcast of Mr. Welles. … I heard about half of it but my mother and sister got frightened and I had to turn it off.”
|
Years before the term “fake news” was coined, a seemingly innocent radio broadcast demonstrated the real definition — and caused panic in the streets.
Today marks the 80th anniversary of the national radio broadcast of “The War of the Worlds,” which was misunderstood by many listeners as an actual attack by Martians. An estimated 9 million to 12 million Americans were frightened by the broadcast, including some in Jacksonville.
The broadcast on CBS Radio was a dramatic interpretation of the 1898 novel of the same name by British author H.G. Wells. In an era before television, radio was a prime form of entertainment and thousands of Americans tuned in to hear news bulletins, live music, political addresses and dramatic readings.
“The War of the Worlds” was the 17th broadcast of the CBS series “The Mercury Theatre on the Air.” Its ratings, though, sorely lagged behind the popular NBC program “The Chase and Sanborn Hour,” which featured ventriloquist Edgar Bergen, in its 8 p.m. Sunday time slot.
The Mercury Theatre was a non-sponsored show, meaning there were no commercials in the hour-long slot. Producers at CBS, as well as actor Orson Welles, who is best known for his 1941 classic “Citizen Kane,” created an adaptation of Wells’ work with a setting in New Jersey against a backdrop of simulated live news bulletins.
As listeners tuned in that evening, they heard Welles introduce the play and read a weather report before switching to orchestra music, said to be live from New York. The fictional nature of the broadcast was announced before, and as many as four times during the program, but many people apparently never heard it.
As a result, some listeners were surprised when the music shifted to a breaking news bulletin that “at 20 minutes before 8, Central Time, Professor Farrell of the Mount Jennings Observatory” in Chicago reported “several explosions of incandescent gas, occurring at regular intervals on the planet Mars.”
|
yes
|
Entertainment
|
Did Orson Welles' 'War of the Worlds' broadcast cause a real-life panic?
|
yes_statement
|
"orson" welles' 'war of the worlds' "broadcast" "caused" a "real"-"life" "panic".. the 'war of the worlds' "broadcast" by "orson" welles resulted in a genuine "panic" among listeners.
|
https://www.wstam.com/news/other-insights/war-worlds-enemy-us/
|
War of the Worlds - The Enemy is Us? - Wilbanks Smith & Thomas
|
War of the Worlds - The Enemy is Us?
On the evening of October 30th, 1938, 23-year-old Orson Welles achieved infamy with his broadcast of H.G. Wells’ science fiction novel War of the Worlds. The story – which was serialized in the late 1890s before publication as a novel - told of a brutal Martian invasion and occupation of Earth and was styled as a first-person factual account.
The novel lent itself to a sense of realism and a reporting-like format that Welles complemented with “breaking news”-style alerts on the devastation of advancing Martian armies with their heat-ray guns. Actors gave tortured voice to the characters in the story, while descriptions of the catastrophe were underscored by explosions and sound effects that were convincing to an audience for whom radio broadcast was less than two decades old. In short, it was radio drama styled as a news broadcast.
The performance reportedly caused mass hysteria among radio listeners unable to distinguish performance from reality - the next day, horrific newspaper headlines told of suicides and the emptying of cities. Days later, with the dust settling and the alleged hysteria calmed, still more headlines proliferated - these ones focused on assigning blame to Columbia Broadcasting and Welles, excoriating the company and performers as having deliberately misled and terrorized audiences through unchecked use of a dangerous medium.
More to the Story: The Incident vs. the Myths of the Incident
With the 75th anniversary of the incident, 2013 saw renewed interest in the War of the Worlds broadcast. As part of its American Experience series, PBS released a documentary recalling that “upwards of a million people, [were] convinced, if only briefly, that the United States was being laid waste by alien invaders.” NPR unrolled a “War of the Worlds” Radiolab episode.
The surge of interest also prompted a more critical examination of the actual magnitude of the panic Welles’ experiment allegedly caused. “There’s only one problem,” read a Slate article entitled “The Myth of the War of the Worlds Panic.” “The supposed panic was so tiny as to be practically immeasurable on the night of the broadcast. Despite repeated assertions to the contrary in the PBS and NPR programs, almost nobody was fooled by Welles’ broadcast.”
Often-forgotten facts of the story are that Orson Welles opened it with a clear introduction referencing H.G. Wells’ story, and the performance occurred on a program called The Mercury Theatre on the Air. It was never intended as a fake broadcast or offered as a real broadcast. Confronted with those points, however, newspapers speculated that many listeners had dial-surfed during the musical interlude of a more popular program, landing on Welles’ broadcast after the introduction. There was little evidence to support any of the newspapers’ claims – even less about the news of suicides and pandemonium that they reported on in the aftermath of the alleged panic.
In hindsight, the origination of the War of the Worlds myth is easy enough to explain – it boils down to competition, and the utility of fake news to discredit competition.
In 1938, the animosity between radio and print media was palpable and at its height. Back in 1920, the first commercial radio broadcast had kicked off the radio craze, which had grown into a Golden Age at the expense of print media. The 1930s saw furious competition emerge between the incumbent newspaper empires and the upstart radio industry, as publishers made desperate and futile attempts block the transmission of news by radio. The depleted print industry was motivated to gamble on anything that could damage radio, and War of the Worlds provided a golden opportunity to hit back.
Naturally the accusations against Welles and his affiliates thrived and spread as newspapers hammered them, but eventually that campaign ceased and the facts were available for inspection. Somehow though - even eighty years later - there has been little balance achieved in the perspective on the incident. Why was this myth allowed to persist and grow despite ample and straightforward evidence against it? One argument suggests the myth gained authority simply due to its own popularity.
The Enemy is Us?
The omnipresence of cell phones and the instantaneousness of communication today means that a real-time mass-panic hoax would be difficult to get off the ground. But technology – however powerful – is limited in its capacity to save our grip on reality, especially when the mistruths are more subtle than an alien invasion. And most are.
We’ve noted before that today, 2/3 of US adults surveyed report getting their news from social media websites, which are increasingly both relied on and questioned as reliable and trustworthy distribution platforms. The quality of the reporting that underlies information offered as news is generally unknowable to the consumer, and we are consuming that information in ever-greater quantities.
A group of data scientists at MIT recently published in Science magazine the findings of the largest-ever study of news circulated via social media. The study examined every major English-language news story in the history of Twitter. The data set comprised 126,000 verified true or false stories, tweeted around 4.5 million times by ~3 million users over the course of a decade. According to the abstract, the study “classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications.” The complexity and rigor of the study is summarized well in “The Grim Conclusions of the Largest-Ever Study of Fake News,” an article from the Atlantic’s Robinson Meyer.
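To make that classification step concrete, here is a minimal, hypothetical sketch in Python (written for this note, not taken from the study) of how a story might be labeled true or false by majority vote across several independent fact-checkers, and how pairwise agreement between checkers could be measured; the story names and verdicts below are invented for illustration.

from itertools import combinations

# Hypothetical verdicts from six independent fact-checkers for three stories.
# "T" = judged true, "F" = judged false; all values are invented examples.
verdicts = {
    "story_a": ["T", "T", "T", "T", "T", "T"],
    "story_b": ["F", "F", "F", "T", "F", "F"],
    "story_c": ["F", "F", "F", "F", "F", "F"],
}

def majority_label(votes):
    # Label the story by simple majority vote across the fact-checkers.
    return "T" if votes.count("T") > votes.count("F") else "F"

def pairwise_agreement(votes):
    # Fraction of fact-checker pairs that gave the same verdict.
    pairs = list(combinations(votes, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

for name, votes in verdicts.items():
    print(name, majority_label(votes), round(pairwise_agreement(votes), 2))

Run as-is, the sketch prints a majority label and an agreement score for each invented story; aggregating such agreement scores across a corpus is one simple way to arrive at a figure comparable in spirit to the 95 to 98% agreement the study cites.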
Ultimately, the study found that truth simply cannot compete with hoax and rumor. Fake news and false rumors travel faster, reach more people and penetrate deeper into the social network than verifiable stories – “in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information” (again quoting the abstract).
Finally, one focus of the study was the role of technology – “robots” – in the proliferation of fake news, and it concluded that bots accelerate the dissemination of real news and fake news at essentially the same pace. Technology is, after all, wielded by humans who have shown tendencies to commit even more stubbornly to facts and interpretations when they are challenged, and sometimes simply because they are challenged. It’s easy to think we are at the mercy of the technologies we rely on, but this study suggests otherwise - thus we are not off the hook.
The Fight Continues
In 1938 Editor & Publisher, the leading trade publication for the print media industry, offered an ominous opinion in response to the War of the Worlds broadcast incident, lending its voice (for obvious reasons) to the anti-radio campaign.
We do note the irony in quoting a warning from an infamous fake news campaign, but nonetheless it frames very well the issues and concerns that arise from today's rate of information flow in the absence of accountability.
The war between truth and falsehood is timeless and unceasing, although its brutality seems to wax and wane through different periods. In today’s environment that war is raging: it is ugly, and we are constantly confronted with it. Extraordinary times call for circumspection when it comes to our sources, and we must learn to hold our opinions lightly. Our “better nature” may go against our own instincts, but it’s part of our responsibility to the truth.
|
“The supposed panic was so tiny as to be practically immeasurable on the night of the broadcast. Despite repeated assertions to the contrary in the PBS and NPR programs, almost nobody was fooled by Welles’ broadcast.”
Often-forgotten facts of the story are that Orson Welles opened it with a clear introduction referencing H.G. Wells’ story, and the performance occurred on a program called The Mercury Theatre on the Air. It was never intended as a fake broadcast or offered as a real broadcast. Confronted with those points, however, newspapers speculated that many listeners had dial-surfed during the musical interlude of a more popular program, landing on Welles’ broadcast after the introduction. There was little evidence to support any of the newspapers’ claims – even less about the news of suicides and pandemonium that they reported on in the aftermath of the alleged panic.
In hindsight, the origination of the War of the Worlds myth is easy enough to explain – it boils down to competition, and the utility of fake news to discredit competition.
In 1938, the animosity between radio and print media was palpable and at its height. Back in 1920, the first commercial radio broadcast had kicked off the radio craze, which had grown into a Golden Age at the expense of print media. The 1930s saw furious competition emerge between the incumbent newspaper empires and the upstart radio industry, as publishers made desperate and futile attempts to block the transmission of news by radio. The depleted print industry was motivated to gamble on anything that could damage radio, and War of the Worlds provided a golden opportunity to hit back.
Naturally the accusations against Welles and his affiliates thrived and spread as newspapers hammered them, but eventually that campaign ceased and the facts were available for inspection. Somehow though - even eighty years later - there has been little balance achieved in the perspective on the incident. Why was this myth allowed to persist and grow despite ample and straightforward evidence against it? One argument suggests the myth gained authority simply due to its own popularity.
The Enemy is Us?
|
no
|
Entertainment
|
Did Orson Welles' 'War of the Worlds' broadcast cause a real-life panic?
|
yes_statement
|
"orson" welles' 'war of the worlds' "broadcast" "caused" a "real"-"life" "panic".. the 'war of the worlds' "broadcast" by "orson" welles resulted in a genuine "panic" among listeners.
|
https://www.sfchronicle.com/chronicle_vault/article/Chronicle-Covers-War-of-the-Worlds-a-10076528.php
|
Chronicle Covers: 'War of the Worlds,' a Halloween scare for the ages
|
The Chronicle’s front page from Oct. 31, 1938, covers the radio broadcast of “The War of the Worlds” that caused a panic because some people were convinced the alien invasion was real.
“Hysteria among radio listeners throughout the nation and actual panicky evacuations from sections of the (New York) metropolitan area resulted from a too-realistic radio broadcast tonight describing a fictitious and devastating visitation of strange men from Mars,” the story read.
“Excited and weeping persons all over the country swamped newspaper and police switchboards with the question: ‘Is it true?’”
Actor and filmmaker Orson Welles was the director and narrator of the program, which was adapted from H.G. Wells’ novel. The show featured fake news segments about the mass-murdering martians and realistic-sounding sound effects.
Not everyone was amused.
“Many New Yorkers seized personal effects and raced out of their apartments, some jumping into their automobiles and heading for the wide-open spaces,” the story read.
Another anecdote: “A woman ran into a church in Indianapolis screaming: ‘New York destroyed, it’s the end of the world! You might as well go home to die! I just heard it on the radio!’”
Happy Halloween!
See more front pages: Go to SFChronicle.com/covers to search a database of hundreds of Chronicle Covers articles that showcase the newspaper’s history.
|
The Chronicle’s front page from Oct. 31, 1938, covers the radio broadcast of “The War of the Worlds” that caused a panic because some people were convinced the alien invasion was real.
“Hysteria among radio listeners throughout the nation and actual panicky evacuations from sections of the (New York) metropolitan area resulted from a too-realistic radio broadcast tonight describing a fictitious and devastating visitation of strange men from Mars,” the story read.
“Excited and weeping persons all over the country swamped newspaper and police switchboards with the question: ‘Is it true?’”
Actor and filmmaker Orson Welles was the director and narrator of the program, which was adapted from H.G. Wells’ novel. The show featured fake news segments about the mass-murdering martians and realistic-sounding sound effects.
Not everyone was amused.
“Many New Yorkers seized personal effects and raced out of their apartments, some jumping into their automobiles and heading for the wide-open spaces,” the story read.
Another anecdote: “A woman ran into a church in Indianapolis screaming: ‘New York destroyed, it’s the end of the world! You might as well go home to die! I just heard it on the radio!’”
Happy Halloween!
See more front pages: Go to SFChronicle.com/covers to search a database of hundreds of Chronicle Covers articles that showcase the newspaper’s history.
|
yes
|
Entertainment
|
Did Orson Welles' 'War of the Worlds' broadcast cause a real-life panic?
|
yes_statement
|
"orson" welles' 'war of the worlds' "broadcast" "caused" a "real"-"life" "panic".. the 'war of the worlds' "broadcast" by "orson" welles resulted in a genuine "panic" among listeners.
|
https://www.wcax.com/2022/10/20/plattsburgh-radio-station-broadcast-localized-war-worlds/
|
Plattsburgh radio station to broadcast localized 'War of the Worlds'
|
Plattsburgh radio station to broadcast localized ‘War of the Worlds’
PLATTSBURGH, N.Y. (WCAX) - Broadcasters in Plattsburgh will be bringing a Halloween radio tradition to the airwaves on October 31st. Radio station Z106.3 FM will broadcast “War of the Worlds,” the science fiction novel made famous by the all too real panic caused by a 1938 radio play by Orson Welles. But the new broadcast comes with a twist.
When “War of the Worlds” hit the radio waves back in 1938, declaring a martian invasion in New Jersey, it didn’t go so well. But after movie adaptations, musicals, and other spinoffs, it’s now become more of a radio tradition.
“We’ve tried to localize this to include landmarks that people that are local would know and to basically show it so that it can be relatable,” said Amanda Dagley, the general manager and co-owner of Plattsburgh classic hits station Z106.3. And on Halloween, it’ll provide the platform for a new adaptation of the story thanks to Tom Lavin with the Adirondack Regional Theatre.
“He asked me if I maybe wanted to bring “War of the Worlds” to the radio and I didn’t hesitate because I think it’s cool,” Dagley said.
Lavin says this was his first time doing the play, though his father heard the original broadcast. With a combination of local actors and radio staff, the groups have recorded a 1.5-hour, hyper-local version of the classic.
“In our adaptation, not only do the Martians come back, they come back to the North Country,” Lavin said. He says that includes nods to Keeseville, West Chazy, and Ausable Chasm, to name a few. They’re hoping listeners will find the localized version fun and not scary.
Lavin says doing theater on the radio proved to be a bit of a challenge. “Taking all the people that we have together, recording them, then going into the process of making them sound like they’re on a cell phone or a two-way radio or driving down the road,” Lavin explained.
While it took a lot of work, Lavin and Dagley say this is theater of the mind that allows listeners to get into the spooky spirit while painting a picture of an alien-invaded North Country in their heads. “I hope it brings something a little different, because not everybody wants to go out on Halloween but they still want to participate in Halloween. I kind of want it to be a destination. I think a lot of good radio is destination radio. ‘Let’s sit around the glowing eye of the radio and listen to ‘War of the Worlds,’’” Dagley said.
She says the station has notified local emergency services and agencies just in case they get panicked phone calls following the broadcast. They’re also running promos stating it’s just a story and will have a prologue before the reading as well.
|
Plattsburgh radio station to broadcast localized ‘War of the Worlds’
PLATTSBURGH, N.Y. (WCAX) - Broadcasters in Plattsburgh will be bringing a Halloween radio tradition to the airwaves on October 31st. Radio station Z106.3 FM will broadcast “War of the Worlds,” the science fiction novel made famous by the all too real panic caused by a 1938 radio play by Orson Welles. But the new broadcast comes with a twist.
When “War of the Worlds” hit the radio waves back in 1938, declaring a martian invasion in New Jersey, it didn’t go so well. But after movie adaptations, musicals, and other spinoffs, it’s now become more of a radio tradition.
“We’ve tried to localize this to include landmarks that people that are local would know and to basically show it so that it can be relatable,” said Amanda Dagley, the general manager and co-owner of Plattsburgh classic hits station Z106.3. And on Halloween, it’ll provide the platform for a new adaptation of the story thanks to Tom Lavin with the Adirondack Regional Theatre.
“He asked me if I maybe wanted to bring “War of the Worlds” to the radio and I didn’t hesitate because I think it’s cool,” Dagley said.
Lavin says this was his first time doing the play, though his father heard the original broadcast. With a combination of local actors and radio staff, the groups have recorded a 1.5-hour, hyper-local version of the classic.
“In our adaptation, not only do the Martians come back, they come back to the North Country,” Lavin said. He says that includes nods to Keeseville, West Chazy, and Ausable Chasm, to name a few. They’re hoping listeners will find the localized version fun and not scary.
Lavin says doing theater on the radio proved to be a bit of a challenge.
|
yes
|
Entertainment
|
Did Orson Welles' 'War of the Worlds' broadcast cause a real-life panic?
|
yes_statement
|
"orson" welles' 'war of the worlds' "broadcast" "caused" a "real"-"life" "panic".. the 'war of the worlds' "broadcast" by "orson" welles resulted in a genuine "panic" among listeners.
|
https://sites.smith.edu/fys169-f19/2019/11/22/how-fake-news-made-a-fake-news-story-famous/
|
How Fake News Made a Fake News Story Famous – Urban Fictionary
|
How Fake News Made a Fake News Story Famous
On Halloween in 1938, the CBS radio network aired a play adaptation of the classic War of the Worlds novel by H.G. Wells. The play was formatted like a real radio news broadcast. It started with the weather and a performance by an orchestra. Reports of explosions on Mars occasionally interrupted the otherwise realistic orchestra performance. Half an hour in, there was an “emergency bulletin” about a UFO that landed in New Jersey. Reporters on site described “a humped shape” with incineration lasers slaughtering soldiers and police. The newscaster told his audience that aliens had invaded Earth and there was no doubt who would be the victor in this war. According to urban legend, thousands of Americans were panicked by the hyper-realistic play. People to this day believe that the broadcast caused car accidents, heart attacks, threats of suicide, and dozens of cases of shock.
The truth is much duller. Very few people tuned into the broadcast and even fewer were frightened, let alone to the extent depicted by urban legend. A survey completed by C.E. Hooper ratings service the night of the broadcast reported that only 2% of radio listeners were even tuned into CBS. Most listeners were instead listening to a performance by a family favorite: ventriloquist Edgar Bergen. Of the few that were listening to the radio show, even fewer missed the introduction (which explicitly stated that all events to be described were entirely fictional) and the multiple intermission breaks with the same disclaimer. Of those who missed every sign, most “looked at it as a prank,” according to CBS Broadcasting Executive Frank Stanton[1].
Of course, it is true that a few people panicked at the fake news reports. A good portion of those who fell for the broadcast believed that the invaders were not aliens, but instead Germans, as 1938 was a time of high anxiety over the possibility of war. However, there is no evidence of a panic of the size that has cemented itself in American myth and legend.
The common perception of that night was the same in 1938. Most people believed, even then, that mass hysteria swept the country as a result of the conniving Orson Welles. For example, in 1938, the New York Times published an article describing the effects of the broadcast on an impossible scale: thousands of calls, hundreds of people evacuating their cities and homes, several cases of shock, a few attempted suicides. They even wrote that “in New Orleans a general impression prevailed that New Jersey had been devastated by the ‘invaders.’” Investigators attempted to find evidence of any attempted or successful suicides and found none. Additionally, no people were admitted to hospitals with shock that night. The New York Times is a newspaper that forged a reputation through vigorous fact-checking and solid reporting. They have always tried to distinguish themselves from the sensationalist papers with flashy headlines that have often sold better. It is surprising that a newspaper like the New York Times would rely so heavily on rumor and hearsay. While it remains unclear why this article was published with so much easily discredited information, this suggests that even the most reputable sources are not immune from sensationalism.
Another cause of this pervasive myth is a book called The Invasion from Mars written in 1940 by Princeton psychology professor Hadley Cantril. This book attempted to explain the psychology behind the supposed “mass hysteria” in 1938. His psychological analysis might have been accurate and insightful, but his investigative skills were lacking. Cantril wrote that “millions” of listeners were panicked by the broadcast, although he later admitted that a number so large was improbable. He went on to describe nonexistent disasters that resulted from this panic. Sensationalist newspaper accounts and an effort to make the book succeed commercially probably influenced his conclusions.
Orson Welles himself did very little to dispel this myth. He played along, later admitting that he “was hiding his delight that Halloween morning.” He knew it was in his favor for the newspaper to cause a ruckus. Indeed, the publicity he received for his radio play boosted his profile and opened doors for him in Hollywood. He was hired to direct Citizen Kane in 1941, three years after the radio play made him a household name.
Whether as a result of news sensationalism or pseudo-academic research or a writer’s delight at finally receiving some publicity, the story of “mass hysteria” spread and passed from generation to generation, becoming part of American myth and legend.
The New York Times article published the day after the play aired reads like a description of a Twilight Zone episode. It’s exciting, vivid, and most importantly, reveals something dark in human nature. It appeals to the same instinct that makes horror movies so entertaining. It’s true that people are attracted to stories with these themes and so long as these stories are recognized as fiction, there is no problem with that. However, the willingness of reputable newspapers to publish these stories claiming they are real, and the willingness of the populace to accept them as truth, does reveal a problem. In the era of the internet, where news stories are held to an even lower standard of accuracy, a gravitation towards these types of stories could have devastating effects on democracy.
|
How Fake News Made a Fake News Story Famous
On Halloween in 1938, the CBS radio network aired a play adaptation of the classic War of the Worlds novel by H.G. Wells. The play was formatted like a real radio news broadcast. It started with the weather and a performance by an orchestra. Reports of explosions on Mars occasionally interrupted the otherwise realistic orchestra performance. Half an hour in, there was an “emergency bulletin” about a UFO that landed in New Jersey. Reporters on site described “a humped shape” with incineration lasers slaughtering soldiers and police. The newscaster told his audience that aliens had invaded Earth and there was no doubt who would be the victor in this war. According to urban legend, thousands of Americans were panicked by the hyper-realistic play. People to this day believe that the broadcast caused car accidents, heart attacks, threats of suicide, and dozens of cases of shock.
The truth is much duller. Very few people tuned into the broadcast and even fewer were frightened, let alone to the extent depicted by urban legend. A survey completed by C.E. Hooper ratings service the night of the broadcast reported that only 2% of radio listeners were even tuned into CBS. Most listeners were instead listening to a performance by a family favorite: ventriloquist Edgar Bergen. Of the few that were listening to the radio show, even fewer missed the introduction (which explicitly stated that all events to be described were entirely fictional) and the multiple intermission breaks with the same disclaimer. Of those who missed every sign, most “looked at it as a prank,” according to CBS Broadcasting Executive Frank Stanton[1].
Of course, it is true that a few people panicked at the fake news reports. A good portion of those who fell for the broadcast believed that the invaders were not aliens, but instead Germans, as 1938 was a time of high anxiety over the possibility of war. However, there is no evidence of a panic of the size that has cemented itself in American myth and legend.
The common perception of that night was the same in 1938.
|
no
|
Entertainment
|
Did Orson Welles' 'War of the Worlds' broadcast cause a real-life panic?
|
yes_statement
|
"orson" welles' 'war of the worlds' "broadcast" "caused" a "real"-"life" "panic".. the 'war of the worlds' "broadcast" by "orson" welles resulted in a genuine "panic" among listeners.
|
https://www.otrcat.com/p/war-of-the-worlds
|
War of the Worlds Radio Broadcasts | Old Time Radio
|
As the nip of autumn air takes hold and the bright orange pumpkins at the farmer's markets are rimmed with morning frost, many of us begin to anticipate hanging fake cobwebs in the front yard, carving a jack-o-lantern or two, and making sure that there is a big bowl of treats near the front door. Even though cold winter snows are just a few short weeks away, this is a time for fun and enjoying as many Tricks as we do Treats.
Tricks and treats may have been a consideration for young Orson Welles eight decades ago when he came up with the idea of presenting H.G. Wells' science fiction masterpiece The War of the Worlds in an updated format on the network-sustained Mercury Theatre. Even though Welles had yet to reach his mid-Twenties, by the fall of 1938 he already had a reputation for "tinkering" with literary classics to forward his own concepts and agenda. He produced a version of Julius Caesar with modern costumes as a commentary on Italian fascism and staged Voodoo Macbeth with an all-black cast. Welles had discussed the power of radio, especially the reaction to Herbert Morrison's reports of the Hindenburg airship disaster. The notion of presenting a story over the radio in the form of a news report was intriguing; just a few weeks before the infamous War of the Worlds broadcast, Welles had H.V. Kaltenborn provide commentary in a broadcast of Julius Caesar to make the play resemble a March of Time documentary.
An urban legend has built up over the decades since the historic broadcast which claims that Welles' "irresponsible" broadcast caused panic in the streets because listeners believed that Martians had actually landed and were running amok in New Jersey. The supposed panic was reported by the newspaper wire services, but when modern historians investigated they found no hospital emergency room reporting any cases resulting from the panic. There were phone calls to local stations and the network, some complaining about the network airing such a realistic show while others asked where people could donate blood to help with the "emergency". Several of the calls were to congratulate Mercury Theatre for such an exciting Halloween program.
The supposed panic was created by the newspapers, and certainly did not hurt their circulation, but the most important outcome was a chance for the print media to attack the upstart radio industry. Radio was siphoning advertising income away from the newspaper industry (much like TV would do to network radio a decade and a half later), a serious blow during the Depression. Of course, judging by the newspaper reports immediately following his broadcast, Orson Welles could not be sure if he would ever be allowed to appear on the radio again, or even if he could avoid being arrested.
There were two important, actual results from the broadcast. One was to help the Science Fiction genre to gain greater acceptance in mainstream entertainment. Stories of space travel and little green men remained the purview of pulp magazines, but The War of the Worlds allowed more readers to take a closer look and discover the fun of imagination. The more immediate result was to bring greater attention to 23-year-old upstart director Orson Welles. RKO offered Welles an unprecedented contract for an untried movie director, which resulted in Citizen Kane (1941), a scandalous box office tragedy when it was released but now considered one of the greatest motion pictures of all time.
As mentioned earlier, Welles' War of the Worlds helped to fuel the acceptance and popularity of Science Fiction, and "Invaders from Mars" is a popular topic. Lux Radio Theatre presents an adaptation of the film version of The War of the Worlds (1953), like the 1938 broadcast it is a modern telling of H.G. Wells story, this time set in California and filled with Cold War references (Spoiler: the Martians have little to fear from atomic weaponry). Also included are The Mysterious Traveler and Dimension X episodes which have some entertaining Martian Invasion tales.
COMMENTS
I was in high school during the 90’s in a radio/tc communication class when I first heard this program.
I have listened to this show, yearly, around Halloween time - going on 30 years this year for that tradition.
Love this program and the part it plays in the history of radio. A moment that will forever be branded into history, and never experienced again.
Welles' Mercury Theater was costing CBS a fortune. This was a huge scam to get a sponsor for the show. And in a month, they got the sponsor with Campbell's Soup. The show was renamed the Campbell Playhouse in just a few weeks.
The idea that people across the US completely freaked out was made up and still gets perpetually told 80 years after the fact. The show stated it wasn't real a few times. There was actually less of a "panic" than widespread anger when people realized they'd been hoodwinked.
I played it for my school children one year. Surprisingly, they enjoyed it, yet could not believe people actually believed it was real. It led to a great discussion of cultural differences and changes in the last 100 years.
Orson Welles I think was a genius in what he did for radio and movies; he was ahead of his time. I love listening to this show. When I was a kid our teacher would play the lp for us. She also played the Hindenburg crash and other radio news lps. Maybe she was the one who turned me into otr? Anyway, a great show that doesn't go out of style.
The myth of a mass panic grew over the years but the reaction was significant enough at the time that it was front page and banner headline news in newspapers across the country the next day. If you are imagining clogged roads and mass evacuations, that didn't happen. If you were out and about you probably wouldn't have noticed anything. That is why there aren't any photos or film of the "panic." But if you were at a police or fire station, a radio or newspaper office, or a telephone switchboard, you knew something was up. Reports in local papers on local reaction the next morning are all very consistent. The first newspaper reports said "thousands" believed it. A public opinion survey done shortly after showed that a very small percentage of the population reported hearing it and believing it. Howard Koch, who wrote the script, wrote a book about it around 1970 called "The Panic Broadcast." Probably out of print but get your hands on a copy if you can.
I should add that listening to War of the Worlds began for me a lifetime interest in all things Orson Welles—including attending a conference on him in Woodstock, Illinois for his 100th birthday. I met many Welles scholars and I learned additional levels of appreciation and understanding in his work.
Met Oja Kodar—Welles’ partner at the time of his death. I was able to shake the hands of people who shook the hand of Orson Welles! Holy Cow!
And please notice I purchased your latest compilation of Orson Welles radio work—now at 11 cd’s rather than 7. Holy Cow Again!
Such a great show! One of my personal favorites. However, the modern notion that the show sparked mass panic is not supported by the historical record. There were certainly isolated incidents in some communities across the country, and of course some individuals in otherwise 'dormant' cities got caught up in the story and panicked. But the idea of mass hysteria is apocryphal. Doesn't keep War of the Worlds from being a fantastic show from a master storyteller.
Well, it's whatever one chooses to believe.
Because I read all the stuff on it not being mass hysteria.
My Mom said they listened to that, and for some reason everyone in her family/household knew it was a show.
But she said their friends and neighbors were truly freaking out, didn't know what to do, and were trying to find shelter.
I've met quite a few people of that generation, in the Midwest and the East Coast, who said many people were panicking.
Not all, so I guess the definition of mass hysteria is in question, but there were many who did believe it, and panicked.
“The gas is spreading over the city now. Hard to breathe.....” October 30, 1938, Orson Welles and the Mercury Theatre of the Air broadcast “The War of the Worlds”. The realistic portrayal of an alien invasion created a panic throughout the USA, especially on the Eastern seaboard. Welles would go on the air later and apologize for the panic, but you could tell in his voice he knew that his celebrity stature had just grown by leaps and bounds.
That's because the actual myth is that it was exaggerated.
Whether it was massive or not, there was a nationwide panic.
My Mom told me about it for years. It was real.
Don't know who these people are nowadays making up their own myth. It happened
I was in my late teens the first time I heard "WotW". I had heard about it, of course, but had never actually heard it. One night, I was channel-surfing on my transistor radio and I came across it. An aside... my brother and sister-in-law had just moved not far from that area so a lot of the place names mentioned in the broadcast were familiar to me. I got sucked in for about 10 seconds until I put 2 + 2 together and realized what it was. So, I maybe can believe that people thought it was real. One has to remember there was no CNN or Google back then but I suppose people could have just changed the station to see what other radio stations were reporting.
The most recent studies say that the stories of widespread panic are way overblown. Welles spread bigger and bigger tall tales about it, as the genuine facts began to fade in public memory.
And Welles later admitted that he wasn't sorry at all.
CBS had been carrying live news from Europe for much of that year, using actual journalists like Ed Murrow and William L. Shirer. Listeners were getting used to the possibility of bad news as it was happening.
|
Welles had discussed the power of radio, especially the reaction to Herbert Morrison's reports of the Hindenburg airship disaster. The notion of presenting a story over the radio in the form of a news report was intriguing; just a few weeks before the infamous War of the Worlds broadcast, Welles had H.V. Kaltenborn provide commentary in a broadcast of Julius Caesar to make the play resemble a March of Time documentary.
An urban legend has built up over the decades since the historic broadcast which claims that Welles' "irresponsible" broadcast caused panic in the streets because listeners believed that Martians had actually landed and were running amok in New Jersey. The supposed panic was reported by the newspaper wire services, but when modern historians investigated they found no hospital emergency room reporting any cases resulting from the panic. There were phone calls to local stations and the network, some complaining about the network airing such a realistic show while others asked where people could donate blood to help with the "emergency". Several of the calls were to congratulate Mercury Theatre for such an exciting Halloween program.
The supposed panic was created by the newspapers, and certainly did not hurt their circulation, but the most important outcome was a chance for the print media to attack the upstart radio industry. Radio was siphoning advertising income away from the newspaper industry (much like TV would do to network radio a decade and a half later), a serious blow during the Depression. Of course, judging by the newspaper reports immediately following his broadcast, Orson Welles could not be sure if he would ever be allowed to appear on the radio again, or even if he could avoid being arrested.
There were two important, actual results from the broadcast. One was to help the Science Fiction genre to gain greater acceptance in mainstream entertainment. Stories of space travel and little green men remained the purview of pulp magazines, but The War of the Worlds allowed more readers to take a closer look and discover the fun of imagination. The more immediate result was to bring greater attention to 23-year-old upstart director Orson Welles.
|
no
|
Entertainment
|
Did Orson Welles' 'War of the Worlds' broadcast cause a real-life panic?
|
yes_statement
|
"orson" welles' 'war of the worlds' "broadcast" "caused" a "real"-"life" "panic".. the 'war of the worlds' "broadcast" by "orson" welles resulted in a genuine "panic" among listeners.
|
https://www.wellesnet.com/the-aftermath-orson-welles-the-war-of-the-worlds-halloween-press-conference-1938/
|
The aftermath: Orson Welles "The War of the Worlds" Halloween ...
|
The aftermath: Orson Welles “The War of the Worlds” Halloween press conference, 1938
There are pictures of me made about three hours after the broadcast looking as much as I could like an early Christian saint. As if I didn’t know what I was doing… but I’m afraid it was about as hypocritical as anyone could possibly get!
—Orson Welles (to Tom Snyder – 1975)
*************************************************
Press conference transcript from RADIO GUIDE Magazine, 1938
No more interesting interview was ever given than that granted to the press on Monday Oct. 31, 1938 – the day after The War of the Worlds hoax broadcast by Orson Welles, who played Professor Pierson, adapted the novel to radio, and who directs the Mercury Theater. He entered the interview room unshaven since Saturday, eyes red from lack of sleep. Welles read this prepared statement:
MR. WELLES: Despite my deep regret over any misapprehension that our broadcast might have created among some listeners, I am even more bewildered over this misunderstanding in the light of an analysis of the broadcast itself.
It seems to me that there are four factors, which should have in any event maintained the illusion of fiction in the broadcast. The first was that the broadcast was performed as if occurring in the future, and as if it were then related by a survivor of a past occurrence. The date of this fanciful invasion of this planet by Martians was clearly given as 1939 and was so announced at the outset of the broadcast.
The second element was the fact that the broadcast took place at our weekly Mercury Theatre period and had been so announced in all the papers. For seventeen consecutive weeks we have been broadcasting radio; sixteen of these seventeen broadcasts have been fiction and have been presented as such. Only one in the series was a true story, the broadcast of Hell on Ice by Commander Ellsberg, and was identified as a true story in the framework of radio drama.
The third element was the fact that at the very outset of the broadcast, and twice during its enactment, listeners were told that this was a play, that it was an adaptation of an old novel by H. G. Wells. Furthermore, at the conclusion, a detailed statement to this effect was made.
The fourth factor seems to me to have been the most pertinent of all. That is the familiarity of the fable, within the American idiom, of Mars and the Martians.
For many decades “The Man From Mars” has been almost a synonym for fantasy. In very old morgues of many newspapers there will be found a series of grotesque cartoons that ran daily, which gave this fantasy imaginary form. As a matter of fact, the fantasy as such has been used in radio programs many times. In these broadcasts, conflict between citizens of Mars and other planets has been a familiarly accepted fairy-tale. The same make-believe is familiar to newspaper readers through a comic strip that uses the same device.
Mr. Welles then answered questions from reporters.
Q: Where you aware of the terror going on throughout the nation while you were giving the broadcast?
MR. WELLES: Oh no, of course not. I was frankly terribly shocked to learn it did. You must realize that when I left the broadcast last night I went into a dress rehearsal for a play that’s opening in two days (Danton’s Death) and I’ve had almost no sleep. So I know less about this than you do. I haven’t read the papers. I’m terribly shocked by the effect it’s had. The technique I used was not original with me, or peculiar to the Mercury Theater’s presentation. It was not even new. I anticipated nothing unusual.
Q: What was your reaction after you learned the extent of the panic the broadcast had caused?
MR. WELLES: Of course we are deeply shocked and deeply regretful about the results of last night’s broadcast. It came as rather a great surprise to us that the H. G. Wells classic—which is the original for many fantasies about invasions by mythical monsters from the planet Mars—I was extremely surprised to learn that a story which has become familiar to children through the medium of comic strips and many succeeding novels and adventure stories, should have had such an immediate and profound effect on radio listeners.
Q: Knowing what happened, would you do the show over again?
MR. WELLES: I won’t say I won’t follow this technique again, as it is a legitimate dramatic form.
Q: Do you think there ought to be a law against such enactments as we had last night?
MR. WELLES: I don’t know what the legislation would be. I know that almost everyone in radio would do almost everything to avert the kind of thing that has happened, myself included. Radio is new and we are still learning about the effect it has on people.
Q: When were you first aware of the trouble caused?
MR. WELLES: Immediately after the broadcast was finished, when people told me of the large number of phone calls received.
Q: Should you have toned down the language of the drama?
MR. WELLES: No. You don’t play murder in soft words.
Q: Why was the story changed to put in the names of American cities and government officers?
MR. WELLES: H. G. Wells used real cities in Europe, and to make the play more acceptable to American listeners, we used real cities in America. Of course, I’m terribly sorry now.
|
The aftermath: Orson Welles “The War of the Worlds” Halloween press conference, 1938
There are pictures of me made about three hours after the broadcast looking as much as I could like an early Christian saint. As if I didn’t know what I was doing… but I’m afraid it was about as hypocritical as anyone could possibly get!
—Orson Welles (to Tom Snyder – 1975)
*************************************************
Press conference transcript from RADIO GUIDE Magazine, 1938
No more interesting interview was ever given than that granted to the press on Monday Oct. 31, 1938 – the day after The War of the Worlds hoax broadcast by Orson Welles, who played Professor Pierson, adapted the novel to radio, and who directs the Mercury Theater. He entered the interview room unshaven since Saturday, eyes red from lack of sleep. Welles read this prepared statement:
MR. WELLES: Despite my deep regret over any misapprehension that our broadcast might have created among some listeners, I am even more bewildered over this misunderstanding in the light of an analysis of the broadcast itself.
It seems to me that there are four factors, which should have in any event maintained the illusion of fiction in the broadcast. The first was that the broadcast was performed as if occurring in the future, and as if it were then related by a survivor of a past occurrence. The date of this fanciful invasion of this planet by Martians was clearly given as 1939 and was so announced at the outset of the broadcast.
The second element was the fact that the broadcast took place at our weekly Mercury Theatre period and had been so announced in all the papers. For seventeen consecutive weeks we have been broadcasting radio; sixteen of these seventeen broadcasts have been fiction and have been presented as such. Only one in the series was a true story, the broadcast of Hell on Ice by Commander Ellsberg, and was identified as a true story in the framework of radio drama.
The third element was the fact that at the very outset of the broadcast, and twice during its enactment, listeners were told that this was a play, that it was an adaptation of an old novel by H. G.
|
yes
|
Entertainment
|
Did Orson Welles' 'War of the Worlds' broadcast cause a real-life panic?
|
yes_statement
|
"orson" welles' 'war of the worlds' "broadcast" "caused" a "real"-"life" "panic".. the 'war of the worlds' "broadcast" by "orson" welles resulted in a genuine "panic" among listeners.
|
https://www.goodreads.com/book/show/21368056-the-martians-are-coming
|
The Martians are Coming!: The True Story of Orson Welles' 1938 ...
|
The Martians are Coming!: The True Story of Orson Welles' 1938 Panic Broadcast
It was Halloween 1938 when twenty-three year-old Orson Welles fooled America into thinking it had been invaded by aliens. The Mercury Theatre on the Air production of H. G. Wells' The War of the Worlds is one of the most talked-about radio broadcasts in history. The realistic retelling of Wells' original story resulted in mass panic across America. People ran into the streets screaming that the world was ending. Churches were emptied of their congregations, cinemas of their audiences, restaurants of their patrons. Panic-stricken families rushed to their cars and drove like lunatics in a bid to escape Martian annihilation. It seemed as if everyone in America knew about the 'invasion' that night, except of course the actors taking part in the live drama - they had no idea what was happening until the NYPD raided the recording studio. The Martians Are Coming! is the story of how a play that few people believed in came to be written, cast, rehearsed and finally broadcast to a credulous nation. Set against the background of America's economic depression and rumbles of another world war coming from across the Atlantic, the book sets The War of the Worlds in the context of what ordinary Americans were thinking and feeling on the night their world almost - but not quite - came to an end.
Community Reviews
The true story of the Orson Welles, Mercury Theatre production of HG Wells' The War of the Worlds on CBS radio in 1938 that led to panic in the New York and New Jersey areas, and resulted in real injuries, chaos and significant disturbances on the night which reverberated across the United States, Hitler's Germany, Mussolini's Italy and around the entire world. A wonderful true story that demonstrated the reach and power of radio in the 1930s... as well as the genius of the man Orson Welles. 7 out of 12
I first heard about the 1938 radio broadcast of The War of The Worlds through a 1975 (I think) TV movie called The Night That Panicked America. I first heard the actual broadcast many years later when it was given away as a gift by a magazine. It is hard to believe now that the show could possibly have caused such a furore, but this slim tome neatly lays out the perfect storm that caused it to happen.
There isn't too much history about the people who made it happen, partly because they were all in their early careers. There is just enough to introduce the characters. The background and the event itself are portrayed in a highly-readable and entertaining fashion, with lots of researched quotes and excerpts from contemporary reviews, newspaper articles and interviews, both archived and actual, with the people who made it happen.
And for anyone who thinks that it could never happen again, think about the furore about the BBC TV show Ghostwatch in 1992 or the rumours around the movie The Blair Witch Project in 1999, not to mention just about everything being said in politics around the world at the moment.
In a way, I'd have liked the book to be longer and to delve into the meaning of what happened and how it relates to today as much as it covers what happened in 1938, but what there is of it is well worth a read.
This is a fascinating history of the epic War of the Worlds radio broadcast. There were lots of facts I had never heard before. The author tells us what happened to each of the people involved with producing this broadcast. It also goes into detail about the effects of the broadcast around the country.
|
The Martians are Coming!: The True Story of Orson Welles' 1938 Panic Broadcast
It was Halloween 1938 when twenty-three year-old Orson Welles fooled America into thinking it had been invaded by aliens. The Mercury Theatre on the Air production of H. G. Wells' The War of the Worlds is one of the most talked-about radio broadcasts in history. The realistic retelling of Wells' original story resulted in mass panic across America. People ran into the streets screaming that the world was ending. Churches were emptied of their congregations, cinemas of their audiences, restaurants of their patrons. Panic-stricken families rushed to their cars and drove like lunatics in a bid to escape Martian annihilation. It seemed as if everyone in America knew about the 'invasion' that night, except of course the actors taking part in the live drama - they had no idea what was happening until the NYPD raided the recording studio. The Martians Are Coming! is the story of how a play that few people believed in came to be written, cast, rehearsed and finally broadcast to a credulous nation. Set against the background of America's economic depression and rumbles of another world war coming from across the Atlantic, the book sets The War of the Worlds in the context of what ordinary Americans were thinking and feeling on the night their world almost - but not quite - came to an end.
Community Reviews
The true story of the Orson Welles, Mercury Theatre production of HG Wells' The War of the Worlds on CBS radio in 1938 that led to panic in the New York and New Jersey areas, and resulted in real injuries, chaos and significant disturbances on the night which reverberated across the United States, Hitler's Germany, Mussolini's Italy and around the entire world. A wonderful true story that demonstrated the reach and power of radio in the 1930s... as well as the genius of the man Orson Welles. 7 out of 12
I first heard about the 1938 radio broadcast of The War of The Worlds through a 1975 (I think)
|
yes
|
Entertainment
|
Did Orson Welles' 'War of the Worlds' broadcast cause a real-life panic?
|
no_statement
|
"orson" welles' 'war of the worlds' "broadcast" did not "cause" a "real"-"life" "panic".. the 'war of the worlds' "broadcast" by "orson" welles did not result in a genuine "panic" among listeners.
|
https://www.heraldstandard.com/gcm/news/local_news/war-of-the-worlds-75-years-later-people-still-remember-famous-broadcast/article_78a1ba4f-d79e-5db8-b9ba-6882431335ca.html
|
'War of the Worlds:' 75 years later, people still remember famous ...
|
Orson Welles broadcasts his radio show of H.G. Wells’ science fiction novel “The War of the Worlds” in a New York studio at 8 p.m. Sunday, Oct. 30, 1938. The realistic account of an invasion from Mars caused thousands of listeners to panic.
Herb Springer, 80, of Perryopolis, remembers listening as a child to the frightening “War of the Worlds” broadcast on the family’s Atwater Kent radio on Oct. 30, 1938. He is holding a copy of the broadcast on a record album while being interviewed at the Perryopolis Senior Center.
'War of the Worlds:' 75 years later, people still remember famous broadcast
Norma Allison of Perryopolis talks about the “War of the Worlds” radio broadcast in 1938 she remembers from her childhood.
Seventy-five years ago tonight, a national panic ensued as radio listeners came to believe that Martians had invaded the United States.
They were listening to Orson Welles’ 1938 adaptation of “The War of the Worlds,’’ which became one of the most famous broadcasts in radio history.
“I remember a neighbor lady came running over to my mother and dad because she was scared,’’ said Norma Allison, 86, of Perryopolis, who was 11 years old at the time. She said her parents calmed the woman.
“I thought it was real to begin with because whatever came out of the radio was gospel. But my father straightened me out,’’ said Herb Springer, 80, also of Perryopolis, who was then a 5-year-old child living in Leetsdale, Beaver County. “He explained to me those things could not have happened even before the program ended.’’
While Allison and Springer didn’t join in the panic, Uniontown historian Victoria Dutko Leonelli said many people believed a Martian invasion was actually taking place. She included this story in her 2009 book, “The Unexplained: Stories, Folklore and Legends of Fayette County, Pennsylvania.’’
“People were frightened half to death. It was a scary thing at the time,’’ said Leonelli.
The broadcast was performed as a Halloween episode of the Columbia Broadcasting System’s “The Mercury Theatre on the Air.’’ Welles directed and narrated the show, which was an adaptation of H.G. Wells’ 1898 science fiction novel. The CBS production took the story out of England and changed the setting to Grover’s Mill, N.J.
The drama simulated a live newscast of a Martian invasion, coming as a series of fictionalized breaking news bulletins and firsthand reports from the scene. While the show incorporated notices that the broadcast was a drama, many listeners missed them.
“It was the first one, to the best of my knowledge, to sound like it was taking place in real time,’’ said Doug Wilson, operations manager and morning show host for WANB radio in Waynesburg and an instructor at Waynesburg University who has held a lifelong interest in classic radio broadcasts.
“Orson Welles and his team took a unique approach to making it sound like a contemporary broadcast,’’ Wilson said. “He set it in 1939 — a year later — but if you came in 10 to 15 minutes late into the program, you might think you were listening to an actual broadcast.’’
Not only was there fear in New Jersey, where the Martians supposedly landed, but it spread across the country. Local news accounts from the Morning Herald and Daily News Standard, forerunners to the Herald-Standard, reveal Fayette County was not immune.
“The city and county last night were as excited as the rest of the nation over the CBS broadcast, which, today, had precipitated a wave of bitter public reaction. The News Standard office was swamped with telephone calls by radio listeners, many of whom were hard to convince that the radio broadcast was mere fiction,’’ was a report from the Oct. 31, 1938, Daily News Standard that carried the headlines “Broadcast gives nation jitters.’’
The Oct. 31 edition of the Morning Herald noted it received many reports including, “A man at Uniontown called the Herald to report that a group playing cards at his home ‘fell down on their knees and prayed,’ then hurried home.’’
Uniontown’s annual Halloween parade went on as planned on Oct. 31, but a story block on the Standard’s front page suggested “Bar Martians from the parade’’ and appeared to try to comfort its readers: “The ghosts and witches and goblins that’ll be hovering over the town tonight will be just make believe, honestly, absolutely and no kidding. And they’ll be no Martians and black smoke breezing around — you can take that as complete truth.’’
Both papers also printed wire service stories that reported episodes of panic across the country, including people running into churches, leaving restaurants and fleeing their homes.
“But in the East, in the country being subjected to the ‘invasion,’ hysteria ran riot. Several persons came forward to swear they saw the rocket land and ‘strange creatures’ climb out of it,’’ according to a United Press International story.
Allison and Springer were children listening to the broadcast with their families.
“Nobody had television,’’ said Springer, a retired machinist die maker and veteran. “My mother and father listened to the radio and me, as the oldest sibling, I’d listen to it while lying on the floor. We had a large console radio, and I was allowed to turn the dial.’’
Allison, a retired nurse, said, “I had parents that if something was going on, we listened to it on the radio.’’
The two, who were interviewed at the Perryopolis Senior Center, don’t remember details of the broadcast but remembered they didn’t join in the panic. Both credit their parents.
“We weren’t raised to be afraid of things,’’ said Allison.
So why did so many panic?
Wilson looks at the times when World War I was just a couple of decades removed and there was tension from the Nazis in Europe, where World War II would begin the next year. Immediate mass communication was by radio. Television and computers for home use were still in the future. Even most telephone use was limited as many people had a party line, having to wait for the line to be free and then calling an operator to put a call through.
Wilson said, “It was, in some respects, to borrow a phrase, a perfect storm.’’
But were Welles and his team innocent?
“I think too many things had to be very good coincidences for this to have worked out so well,’’ said Brandon Szuminsky, an instructor of communications at Waynesburg University, who has performed research on media hoaxes, including “War of the Worlds’’, as a doctoral candidate at Indiana University of Pennsylvania. He also co-authored a chapter in the 2013 book, “War of the Worlds to Social Media.’’
“Did Welles mean to cause a panic?’’ asked Szuminsky. “Probably not, but he did mean to fool people. ‘The War of the Worlds’ was designed to sound like radio bulletins and no one had done that before. It was groundbreaking style, and I think he had a lot of people falling for it.’’
Szuminsky added there also is a belief that newspapers may have exaggerated the panic in an effort to make radio look bad.
Still, the broadcast made everyone take a look at the influence of radio and what eventually would be called the “media effect,’’ noted Szuminsky and Wilson.
“We take for granted that media impacts us,’’ said Szuminsky, “but it was a turning point in the way we looked at media.’’
“The public, I think, learned you’ve got to take what you hear and compare it against other sources,’’ said Wilson. “Radio learned a lesson — we have to be morally aware. And that later applied to television as well. Even the government realized that by 1938, radio was starting to be a household item and that the government would have to pay attention.’’
Today, Springer looks back fondly on the broadcast and even has a record album of the original show.
He said, “I feel that is something that will never happen again.’’
Is that true?
Wilson believes that hoaxes today are more likely to come through social media, such as Facebook or Twitter, where someone might tweet “I saw an alien spacecraft’’ and 100 people will re-tweet it and so on.
Szuminsky also believes that today’s hoaxes are more likely to occur on social media, such as the death of a celebrity or someone saying he has a winning lottery ticket and if you post a photo, you might be picked to share in the winnings.
“You don’t just take everything at face value,’’ Wilson said. “Do some research. Don’t jump to conclusions. There’s so many ways to double-check information today.’’
|
Orson Welles broadcasts his radio show of H.G. Wells’ science fiction novel “The War of the Worlds” in a New York studio at 8 p.m. Sunday, Oct. 30, 1938. The realistic account of an invasion from Mars caused thousands of listeners to panic.
Herb Springer, 80, of Perryopolis, remembers listening as a child to the frightening “War of the Worlds” broadcast on the family’s Atwater Kent radio on Oct. 30, 1938. He is holding a copy of the broadcast on a record album while being interviewed at the Perryopolis Senior Center.
'War of the Worlds:' 75 years later, people still remember famous broadcast
Norma Allison of Perryopolis talks about the “War of the Worlds” radio broadcast in 1938 she remembers from her childhood.
Seventy-five years ago tonight, a national panic ensued as radio listeners came to believe that Martians had invaded the United States.
They were listening to Orson Welles’ 1938 adaptation of “The War of the Worlds,’’ which became one of the most famous broadcasts in radio history.
“I remember a neighbor lady came running over to my mother and dad because she was scared,’’ said Norma Allison, 86, of Perryopolis, who was 11 years old at the time. She said her parents calmed the woman.
|
yes
|
Entertainment
|
Did Orson Welles' 'War of the Worlds' broadcast cause a real-life panic?
|
no_statement
|
"orson" welles' 'war of the worlds' "broadcast" did not "cause" a "real"-"life" "panic".. the 'war of the worlds' "broadcast" by "orson" welles did not result in a genuine "panic" among listeners.
|
https://allthatsinteresting.com/famous-hoaxes-fake-images
|
7 Famous Hoaxes That Fooled The World
|
7 Famous Hoaxes That Fooled The World
From the "BBC Spaghetti Tree" to the "Cardiff Giant" to the "Taco Liberty Bell," these famous hoaxes are the ones that truly bamboozled the world.
It’s not every day, or even every April Fool’s Day, that a hoax makes the history books. Yet there have been a few famous hoaxes that have fooled the world just enough to make them eternally hilarious (at least in retrospect) and/or endlessly fascinating.
From the “BBC Spaghetti Tree” to the “Taco Liberty Bell,” these especially famous hoaxes are the ones that truly bamboozled and confounded the world:
Famous Hoaxes: “The War of the Worlds”
Wikimedia Commons
The day after the broadcast, Orson Welles meets with reporters to explain that no one connected with “The War of the Worlds” had any idea that the show would cause a panic.
On October 30, 1938, Orson Welles’ all too realistic radio adaptation of H.G. Wells’ novel The War of the Worlds — staged as if it was an actual radio report of an alien invasion in progress — created a nationwide panic.
In between bouts of music, various “breaking news” announcements reported visible explosions on Mars, then a spaceship landing in Grover’s Mill, New Jersey, and finally Martians terrorizing New Jersey and New York City.
Stricken with fear, New Jersey locals went into a panic, with some even packing the highways to make an escape.
Two-thirds of the way through the broadcast, the intermission announcement reminded listeners that the broadcast was fictional, but the damage was done.
Although it was never intended to be a hoax at all, the resulting panic led “The War of the Worlds” broadcast to become one of history’s most famous hoaxes.
Edward Mordake
Twitter
The photo — actually of a wax construction of what Edward Mordake might have looked like — that set the Internet abuzz.
An old hoax that is only growing more famous thanks to the Internet and American Horror Story: Freak Show is the curious case of Edward Mordake.
The story goes that Mordake was born into a noble bloodline but suffered from a horrid congenital deformity: a second face on the back of his head.
He claimed it would whisper hateful, evil things to him while he slept. Mordake begged doctors to have it removed, but that never happened and, unable to live with the relentless sneering from his parasitic twin, he committed suicide at the age of 23.
Different versions of Mordake’s story have since been featured in plays, television, and music (Tom Waits’ song “Poor Edward”) — and many of these retellings present Mordake as a real man. His case even appeared in an 1896 medical journal.
But, although there have been real cases of this rare deformity, Mordake’s story was instead a hoax created by science fiction writer Charles Hildreth in 1895. And as for the picture of the “real” Edward Mordake that has recently been floating around the Internet, it’s actually just a wax replica portraying what Mordake might have looked like.
|
7 Famous Hoaxes That Fooled The World
From the "BBC Spaghetti Tree" to the "Cardiff Giant" to the "Taco Liberty Bell," these famous hoaxes are the ones that truly bamboozled the world.
It’s not every day, or even every April Fool’s Day, that a hoax makes the history books. Yet there have been a few famous hoaxes that have fooled the world just enough to make them eternally hilarious (at least in retrospect) and/or endlessly fascinating.
From the “BBC Spaghetti Tree” to the “Taco Liberty Bell,” these especially famous hoaxes are the ones that truly bamboozled and confounded the world:
Famous Hoaxes: “The War of the Worlds”
Wikimedia Commons
The day after the broadcast, Orson Welles meets with reporters to explain that no one connected with “The War of the Worlds” had any idea that the show would cause a panic.
On October 30, 1938, Orson Welles’ all too realistic radio adaptation of H.G. Wells’ novel The War of the Worlds — staged as if it was an actual radio report of an alien invasion in progress — created a nationwide panic.
In between bouts of music, various “breaking news” announcements reported visible explosions on Mars, then a spaceship landing in Grover’s Mill, New Jersey, and finally Martians terrorizing New Jersey and New York City.
Stricken with fear, New Jersey locals went into a panic, with some even packing the highways to make an escape.
Two-thirds of the way through the broadcast, the intermission announcement reminded listeners that the broadcast was fictional, but the damage was done.
Although it was never intended to be a hoax at all, the resulting panic led “The War of the Worlds” broadcast to become one of history’s most famous hoaxes.
Edward Mordake
Twitter
The photo — actually of a wax construction of what Edward Mordake might have looked like — that set the Internet abuzz.
|
yes
|
Entertainment
|
Did Orson Welles' 'War of the Worlds' broadcast cause a real-life panic?
|
no_statement
|
"orson" welles' 'war of the worlds' "broadcast" did not "cause" a "real"-"life" "panic".. the 'war of the worlds' "broadcast" by "orson" welles did not result in a genuine "panic" among listeners.
|
https://memoryln.net/places/united-states/new-york/watertown/ufo/northern-new-york-hysteria-october-30-1938-war-of-the-worlds/
|
War of the Worlds - October 30, 1938 - Northern New York Hysteria ...
|
War of the Worlds - October 30, 1938 - Northern New York Hysteria
October 30, 1938 War of the Worlds Old Time Radio Broadcast Causes Mass Hysteria
Before the golden age of television became the popular mode of home entertainment, H. G. Wells’s 1897 serial The War of the Worlds, first published in Pearson’s Magazine, made its dramatic debut on October 30, 1938, on the era’s more popular medium: the theater of the mind, Old Time Radio. In the 1930s, radio was still a relatively new but trusted source of news and communication, with the likes of President Franklin D. Roosevelt holding “fireside chats” to comfort people and keep them informed of the economy and war efforts abroad.
October 31, 1938 Front page of the Watertown Daily Times with coverage of the radio broadcast War of the Worlds causing mass hysteria. Photo: Watertown Daily Times.
Then there was Orson Welles, of no relation to H. G. Wells, who would use the medium in a cleverly manipulative way to expose how fragile the human psyche can be during times of great societal stress and uncertainty. The broadcast of War of the Worlds, stated from the outset as a fictional play, would unnerve audiences who either missed the opening prologue or weren’t paying attention. The result would be mass hysteria and subsequently the realization that perhaps radio could not be trusted.
In Northern New York, the Watertown Daily Times and local police would be flooded with phone calls. The Times would devote an extraordinary amount of coverage to the story in its Monday afternoon, October 31st edition, with headlines such as “North Aroused By ‘War’ Drama.”
Original art from the serialization of H. G. Wells War of the Worlds. Photo: Cosmopolitan Magazine, 1898.
HUNDREDS CALL THE TIMES AND THE POLICE DEPARTMENT
Many Who Heard Only Part Of Program Believed It Was True
NEAR PANIC IN SOME HOMES
Hundreds of persons throughout the city and northern New York were suddenly aroused from a quiet Sunday evening last night when excited voices poured throughout radios proclaiming that strange men from Mars had landed in New Jersey in meteor-like airplanes and were killing and plundering in a manner never before seen on the earth. It was not until after hundreds of phone calls swamped the Times offices and the police station that it was generally learned the broadcast was merely a fictional play, “The War of the Worlds,” by H. G. Wells.
The broadcast originated in radio station WABC between 8 and 9 p.m. and was sent out over the country by the Columbia Broadcasting system. The program was the Mercury Theatre of the Air and the play was made realistic by the dramatization by Orson Welles.
Those persons who had not heard the beginning of the program but turned in after it started declared today that the broadcast of the “tragic affair” was so realistic that they believed it was an actual fact that Martians had suddenly swept down from Mars to invade earth. The announcers, speaking in present tense, said the Mars fighters were invincible and that thousands were dead and buildings were destroyed.
At The Times office between 8:30 and 9 more than 50 calls were received from persons in the city seeking to learn if the “disaster in New Jersey” was true. The voices were excited and some had reached an almost hysterical pitch.
Another piece of haunting original artwork by Henrique Alvim Corrêa inspired by the War of the Worlds. Photo: 1906 Henrique Alvim Corrêa drawing, source unknown.
A telephone operator in Evans Mills would call The Times office, where there was only one person working, requesting any news or information on the “New Jersey tragedy” and stating she was swamped with telephone calls on the subject. The local police in Watertown fared no better. Sergeant John L. Couchette reported answering dozens of calls asking the same question; according to the New York Telephone company the following day, there were about 300 more phone calls than usual that Sunday evening.
Headline from the Watertown Daily Times regarding a 9-year-old Gouverneur girl fainting during the broadcast of War of the Worlds. Photo, caption: Watertown Daily Times.
The Times article would continue–
Near panic swept over some families in this city, as it did throughout other sections of the nation. In one home here it was almost decided to get out the family car and head north for Canada. Others declared they were ready to flee the city. At one theatre here calls were received asking that certain persons be paged and told to come home at once.
After so much excitement and confusion had occurred throughout the state and nation, the New York City police department sent out a teletype message which was received by local police headquarters. It read:
“WABC informs us that the broadcast just completed over their station was only a dramatization of a play. No cause for alarm.”
Another piece of haunting original artwork by Henrique Alvim Corrêa inspired by the War of the Worlds. Photo: 1906 Henrique Alvim Corrêa drawing via Wikimedia.
In Gouverneur, 9-year-old Mary McAdam fainted from fright while listening to the War of the Worlds broadcast. According to The Times–
Mary was listening at the radio with her mother, Mrs. Charles McAdam, and other members of the family. When an announcer said that the militia had taken over the networks, Mrs. McAdam and the children were alarmed. As the realistic reports of attacks by Martians in New Jersey and other sections proceeded, Mrs. McAdam and the children became really frightened.
Suddenly, Mary said, “Mother, I feel—” and then toppled over on the floor in a faint before she could complete the sentence. Her mother revived her with aromatic spirits of Ammonia.
The family had missed the opening, but it was noted that the announcement of it being a work of fiction was made four times throughout its hourlong broadcast.
A then 23-year-old Orson Welles would field questions from the press after the War of the Worlds broadcast panicked audiences the night before Hallowe’en. Photo: Wikimedia.
Elsewhere, in Concrete, Washington, the coincidental timing of a power failure that turned off the street lights added to the surrealistic events unfolding on the radio and provided further proof, at least to the mind’s untethered fears, that the end was near–
Just as an announcer was “choked off” by “poisonous gas” in what he had just said might be “the last broadcast ever made” the lights in Concrete failed.
They called friends on the telephone until all lines were clogged. They shouted from house to house. The more they talked the more excited they became. Others who had not listened to the program became alarmed. Hysteria swept over the more excitable of the thousand or more residents.
One man bolted from his home, grabbed a small child by the arm and headed into the adjacent pine forests. Others prepared hastily to “flee to the hills” in the belief the invasion already had reached across the continent from New York.
It was only about the time the news that the events were a dramatic play was making its rounds that the problem at a nearby power substation was resolved.
In Salt Lake City, Radio Station KSL said many persons reported packing their belongings to evacuate their homes, and that “children became hysterical and grownups fainted at the ‘War of the Worlds’ program.”
More original artwork from the War of the Worlds, by H. G. Wells. Photo: Interior illustration to H. G. Wells‘ novel The War of the Worlds from reprinting in Amazing Stories, August 1927, from Wikimedia.
In San Francisco, telephone operators reported they were virtually swamped with requests for cross-continent telephone connections with New York and New Jersey. One puzzled elderly lady was informed that the report was only a drama and cheerfully remarked, “Well, if it doesn’t do anything else, it made a lot of people pray tonight.”
The Watertown Daily Times would also publish an article in the editorial section the following day pointing out the responsibilities of media—
The Federal Communications Commission proposes to make a sweeping investigation. That is right and proper but probably the investigation will disclose nothing but what is already known. The radio company intended it to be simply a realistic dramatization of a well known book. Announcement was made to that effect four times during the program.
It was all a little too real, however. A program of music was interrupted for what seemed to be a news report of a tremendous gas explosion in New Jersey. From then on these “news reports” came thick and fast. The little men from Mars were swarming all over New Jersey. The United States was actually being invaded by a host from another planet.
If the radio people intended to make the broadcast realistic they certainly succeeded, even the announcement which seemed to come from the secretary of the interior that a state of war existed.
The broadcasting companies must learn what the newspapers learned long ago. A hoax cannot be perpetuated in the guise of news without serious consequences. The radio people have a grave responsibility in this matter.
Photo, caption: Watertown Daily Times.
One looking back upon these events 84 years ago may scoff at how gullible people seemed, yet, at the same time, modern audiences have been besieged by far more complex forms of media, delivered with far more immediacy, in an era of “fake news,” manipulation and propaganda. It’s almost as if, somewhere between then and now, somebody took the warning and its social implications and saw not a cautionary tale about integrity, but the basis for an opportunity.
The short analysis by a psychologist of the event back in 1938 blaming the hysteria on “Mass Jitters” —
Terre Haute, Ind. (AP) — Dr. Rudolph A. Archer, Indiana State Teachers College psychologist, traced Sunday night’s widespread hysteria over a radio play to the same thing he says causes “jitter-bugs.” He said the alarm spread by a dramatization of H. G. Wells’ “War of the Worlds” and the antics of “shag” dancers both grew out of a “mass case of jitters” started by the World War and aggravated by “persistent economic chaos.”
With all that being said, there are theories out there that the “panic” was actually created by the newspapers which saw the medium of radio as a potential threat and thereby perpetuated the myth of a panic as a means to censor the burgeoning medium. The problem with this is it would be one heck of a coordinated effort on behalf of the press across the country to get articles written for publication the very next morning. It should be noted the names in the Times articles were indeed real people in their noted locations and descriptions.
Then there are other theories, explored in depth elsewhere, that postulate the War of the Worlds radio broadcast was a psychological experiment, financed by the Rockefeller Foundation, to study public reaction to the output of the mainstream media (and the potential for psychological manipulation and social engineering). The theory points to Orson Welles’ later being given carte blanche while making Citizen Kane for RKO Pictures, on whose board of directors Nelson Rockefeller sat.
And now, straight from the theater of the mind and Old Time Radio is Orson Welles’ 1938 presentation of H. G. Wells’ “War of the Worlds,” remastered and posted on YouTube.
|
War of the Worlds - October 30, 1938 - Northern New York Hysteria
October 30, 1938 War of the Worlds Old Time Radio Broadcast Causes Mass Hysteria
Before the golden age of television became the popular mode of home entertainment, H. G. Wells’s 1897 serial The War of the Worlds, first published in Pearson’s Magazine, made its dramatic debut on October 30, 1938, on the era’s more popular medium: the theater of the mind, Old Time Radio. In the 1930s, radio was still a relatively new but trusted source of news and communication, with the likes of President Franklin D. Roosevelt holding “fireside chats” to comfort people and keep them informed of the economy and war efforts abroad.
October 31, 1938 Front page of the Watertown Daily Times with coverage of the radio broadcast War of the Worlds causing mass hysteria. Photo: Watertown Daily Times.
Then there was Orson Welles, of no relation to H. G. Wells, who would use the medium in a cleverly manipulative way to expose how fragile the human psyche can be during times of great societal stress and uncertainty. The broadcast of War of the Worlds, stated from the outset as a fictional play, would unnerve audiences who either missed the opening prologue or weren’t paying attention. The result would be mass hysteria and subsequently the realization that perhaps radio could not be trusted.
In Northern New York, the Watertown Daily Times and local police would be flooded with phone calls. The Times would devote an extraordinary amount of coverage to the story in its Monday afternoon, October 31st edition, with headlines such as “North Aroused By ‘War’ Drama.”
Original art from the serialization of H. G. Wells War of the Worlds. Photo: Cosmopolitan Magazine, 1898.
|
yes
|
Entertainment
|
Did Orson Welles' 'War of the Worlds' broadcast cause a real-life panic?
|
no_statement
|
"orson" welles' 'war of the worlds' "broadcast" did not "cause" a "real"-"life" "panic".. the 'war of the worlds' "broadcast" by "orson" welles did not result in a genuine "panic" among listeners.
|
https://studymoose.com/war-of-the-worlds-4-essay
|
War of the Worlds Free Essay Example
|
War of the Worlds
When listening to radio, one must develop a different set of skills than are used in watching a television program. Those skills begin to develop at a young age and involve listening to vocal cues like intonation, rhythm, pace, and context. In television, film or even live theatre, there is body language as well to watch and observe, but that can also serve as a distraction. The ears can often distinguish lies from truth, for example, simply by listening to careful cues in the speech of those talking.
One must learn to detect sarcasm and humor versus the more serious, and must be adept at detecting emotion by hearing the subtleties of the spoken word in each person’s voice.
I have a unique perspective about the 1938 radio broadcast of “War of the Worlds”, and one that many people may not agree with. Just like one needs to “fine tune” their listening skills when they only have voice to hear, I believe people must also learn to fine tune their skills in interpreting events, especially any event that impacts millions of people in a way the War of the Worlds broadcast did in 1938.
I immediately felt, when I heard about this broadcast years ago, that there was more to it than the “official” story that it was a dramatization or piece of entertainment gone awry.
So, I did some digging. No matter what level of listening skills the audience had on that evening in 1938 – and most were quite adept at interpreting what they heard, radio being their prime source of entertainment and information – all of that went out the window for many.
It was a Rockefeller Foundation Psyop that was broadcast into the living rooms of Americans that evening in 1938. It was a classic early example of a Psyop, or “psychological operation” which created “accidental” and “unfortunate” panic and hysteria throughout the United States that evening and beyond. Listeners tuned in to what they thought was a real invasion by Martians.
It was funded indirectly by the Rockefeller Foundation through the Princeton Radio Project, and was controlled and guided by members of the Council on Foreign Relations. It is generally considered that the mainstream media “psyop” (which is designed to steer and manage the perceptions of the masses) is typically perpetrated through news and other current affairs programs. However, after researching this, my opinion is that this was one of the earliest examples from the mainstream media, and it did not involve an earthly tale of foreign powers or political intrigue, but a story about a Martian invasion of Earth. This was the very first mass media Psyop, and out of the 6 million people who tuned in, 1.7 million believed the broadcast was legitimate. 1.2 million of those were scared enough to fight the incoming invaders. Why did they fall for it? Firstly, they were taken off-guard by the themes of the program: aliens and invasion. Secondly, Orson Welles and his team (whose identities we now know) added the authentic element of interrupting the broadcast periodically with “news.”
These listeners were not “media literate” as we are today, but they listened to the radio every day, often for hours, so they had great listening and interpreting skills. However, for many that night, their normal senses and skills of interpretation were inhibited by the fear being created and perpetuated upon them, and their ability to listen carefully to the many cues and subtleties went out the door. It was really a terribly cruel thing to perpetrate on the people, but psychological warfare is never very friendly, as the intentions are nefarious. This is a newspaper article that was printed in the New York Times the day after the program aired:
“A wave of mass hysteria seized thousands of radio listeners between 8:15 and 9:30 o’clock last night when a broadcast of a dramatization of H. G. Wells’ fantasy, “The War of the Worlds,” led thousands to believe that an interplanetary conflict had started with invading Martians spreading wide death and destruction in New Jersey and New York. The broadcast, which disrupted households, interrupted religious services, created traffic jams and clogged communications systems, was made by Orson Welles, who as the radio character, “The Shadow,” used to give “the creeps” to countless child listeners. This time, at least, a score of adults required medical treatment for shock and hysteria.” – “Radio Listeners in Panic, Taking War Drama as Fact” – New York Times, October 31st, 1938.
The “disclaimer” they mention at the beginning of the broadcast alerting people was simply to absolve them of any liability for what might happen if they became fearful and concerned. If they told people at the start it wasn’t “real,” then the blame rested on the listeners who just missed that disclaimer and let their imaginations get the best of them.
I have been fooled and I believe everyone has been fooled at some point by “fake news”, much of which we are not even aware of. In recent years with so many people becoming media savvy and with alternative forms of news and information sources, it is much easier to research and uncover fake news. In recent weeks, for instance, several MSM news outlets were caught showing fake footage of crowded hospital rooms full of Covid-19 patients and long lines of cars waiting to be tested for the virus. It was soon revealed and admitted they used footage from an Italian Hospital and completely staged the cars in line using medical center employees. These are just a few examples. Unfortunately, the current climate for “fake news” is abundant and one must always question and research things on their own, balancing and viewing multiple sources on our own to confirm the truth. In a way, we must become “citizen journalists.”
Roderick Mason
|
However, for many that night, their normal senses and skills of interpretation were inhibited by the fear being created and perpetuated upon them, and their ability to listen carefully to the many cues and subtleties went out the door. It was really a terribly cruel thing to perpetrate on the people, but psychological warfare is never very friendly, as the intentions are nefarious. This is a newspaper article that was printed in the New York Times the day after the program aired:
“A wave of mass hysteria seized thousands of radio listeners between 8:15 and 9:30 o’clock last night when a broadcast of a dramatization of H. G. Wells’ fantasy, “The War of the Worlds,” led thousands to believe that an interplanetary conflict had started with invading Martians spreading wide death and destruction in New Jersey and New York. The broadcast, which disrupted households, interrupted religious services, created traffic jams and clogged communications systems, was made by Orson Welles, who as the radio character, “The Shadow,” used to give “the creeps” to countless child listeners. This time, at least, a score of adults required medical treatment for shock and hysteria.” – “Radio Listeners in Panic, Taking War Drama as Fact” – New York Times, October 31st, 1938.
The “disclaimer” they mention at the beginning of the broadcast alerting people was simply to absolve them of any liability for what might happen if they became fearful and concerned. If they told people at the start it wasn’t “real,” then the blame rested on the listeners who just missed that disclaimer and let their imaginations get the best of them.
I have been fooled and I believe everyone has been fooled at some point by “fake news”, much of which we are not even aware of.
|
yes
|
Entertainment
|
Did Orson Welles' 'War of the Worlds' broadcast cause a real-life panic?
|
no_statement
|
"orson" welles' 'war of the worlds' "broadcast" did not "cause" a "real"-"life" "panic".. the 'war of the worlds' "broadcast" by "orson" welles did not result in a genuine "panic" among listeners.
|
https://www.princetonmagazine.com/the-war-of-the-worlds/
|
“The War of the Worlds” | Princeton Magazine
|
Decades before the term “fake news” became familiar, there was “The War of the Worlds.” The infamous 1938 radio broadcast, inspired by the H.G. Wells novel of the same name, announced to fans of the CBS Radio drama series Mercury Theatre on the Air that Martians had crash-landed in a farmer’s field in Grovers Mill, New Jersey, and were invading the earth.
It was the golden age of radio, and Sunday night was prime time. October 30, 1938 also happened to be mischief night. Led by 23-year-old Orson Welles, the theater company decided to take things a bit further than usual and give listeners a jolt. Just how much of a jolt they intended remains in question.
An announcer who claimed to be at the crash site just a few miles from Princeton breathlessly described a slimy Martian slithering its way out of a metallic cylinder.
“Good heavens, something’s wriggling out of the shadow like a gray snake,” he began. “Now here’s another and another one and another one! They look like tentacles to me. I can see the thing’s body now. It’s large, large as a bear. It glistens like wet leather…. I can hardly force myself to keep looking at it, it’s so awful! The eyes are black and gleam like a serpent. The mouth is kind of V-shaped with saliva dripping from its rimless lips that seem to quiver and pulsate.”
It was all a spectacular hoax, of course. But to some listeners across the country, the sophisticated sound effects and supposedly terrified announcers reporting Martians firing “heat-ray“ weapons created chaos. Newspaper reports at the time said people claimed they saw things that didn’t exist, and crowded the roadways in an effort to escape the invasion. Local legend has it that in Grovers Mill, an inebriated farmer shot at the wooden water tower because he thought it was an alien (never proven, but people who grew up in the West Windsor town have recalled seeing bullet holes in the tower).
Orson Welles, center, meeting with reporters in an effort to explain that no one connected with the “War of the Worlds” radio broadcast had any idea the show would cause panic. (Wikimedia Commons)
The legend lives on. In leafy Van Nest Park just down the road from the actual Grovers Mill buildings, a series of four plaques along the pond tell the story of the broadcast. A large, sculpted monument that pays tribute is prominently positioned. Over the years, West Windsor has commemorated the notorious event at key anniversaries. A section of the township’s website is devoted to the broadcast, its history, and the havoc it caused.
On Saturday, October 30, a family-oriented celebration of the event will be held at the MarketFair mall on Route 1, in collaboration with the West Windsor Arts Council.
“Every year, especially come October, we get inquiries,” said Gay Huber, West Windsor’s municipal clerk. “That’s why we put all that information on the website. There used to be several individuals who actually lived through it, and I was able to give their names to people. But they’re no longer with us.”
While newspapers reported widespread panic due to the broadcast (“Radio Listeners in Panic, Taking War Drama as Fact” read the front page of the New York Times), most local residents may have taken it in stride. “If you think about it, there was no Twitter, no Instagram or social media,” Huber said. “So, a lot of people didn’t know about it. My family’s farm was only a couple miles away, and I’m not sure they even knew about it. It wasn’t something my grandparents talked about.”
Plaque commemorating the radio broadcast in Van Nest Park, West Windsor. (Wikimedia Commons)
But October 30, 1938 certainly put Grovers Mill on the map. “It’s still something that we consider one of the more popular aspects of West Windsor history. However, there used to be a lot more interest,” wrote Paul Ligeti, head archivist of the Historical Society of West Windsor, in an email. “For the first half century, WotW [“War of the Worlds”] was actually a point that many in town did not want to highlight, because of the image that many thought it would give the town (of local yokels running for their lives from a hoax – whether this was actually true or not). However, in the late 1980s, there was a push to reframe WotW as a point of pride, and in 1988 there was a series of celebrations for its 50th anniversary.”
In a strip mall a few miles from the “crash site,” the Grovers Mill Coffee House celebrates the broadcast with newspaper clippings, photographs, and artifacts. Prominently on display is a mural by Princeton artist Robert Hummel, showing the water tower said to be the target of the gun-toting, drunken farmer. The mural is one in a series of artworks Hummel has created that were inspired by the broadcast.
Hummel painted the coffee house mural after listening to the recorded 1938 broadcast for the first time. “It was what I had thought listeners that night may have been imagining while tuning in if not realizing it was only a radio play,” he said.
His second painting, created for the 75th anniversary of the broadcast, is the artist’s version of the action that might have taken place later in the night, with more Martians and destruction. For the 80th anniversary three years ago, his painting was focused on the Princeton University Observatory, which plays a part in the radio play. Hummel is having the painting framed in oak wood trim that was salvaged from an office inside the observatory. “When they tore it down last year, the head engineer let me in to see it before it was demolished,” he said. “He gave me some of the oak wood that was around the doors.”
Scene two, as well, is framed in salvaged wood with a historical pedigree – interior pine planks from within the red barn in place at the time of the broadcast. They were given to Hummel by the barn’s current owner when he was transforming it into living accommodations.
The farmer firing at the water tower is just one example of the folklore surrounding the broadcast. “Another [legend] talks of someone being so scared that they took off in their car so quickly they forgot to open the garage door and crashed right through it! Unfortunately, it’s hard to tell currently whether these stories are 100 percent accurate or not – they could be, or they couldn’t be,” wrote Ligeti.
Hummel’s mother had a lifelong friend from Shamokin, Pennsylvania – coal mining country – who claimed that a frightened relative hid in a coal mine to escape the Martians. “I find that hard to believe, personally, but that’s the story she told,” Hummel said.
Scoutship by artist Eric Schultz, in front of the West Windsor Arts Center. (Courtesy of WWAC)
At the West Windsor Arts Council’s building on Alexander Road, a 12-foot sculpture by Eric Schultz, called Scoutship, greets visitors. It was commissioned to help launch a sculpture program and mark the 80th anniversary of the broadcast. In recent years, the nonprofit has created programming around the annual mischief night anniversary of the event.
“‘War of the Worlds’ is important because it was the first time there was this realization that art was used to surprise people, and maybe fool them,” said Aylin Green, executive director of the arts center. “Orson Welles didn’t like to look at it that way, but that’s kind of what happened. It was able to happen because of the tenor of the times, and what was going on in the world. There were general fears about invasions. I love that this was a work of art – a theatrical production. That gets a little bit lost in the story. But it’s an example of art having a real impact.”
In interviews just after the craziness caused by the broadcast, Welles claimed that he had no idea it would affect people the way it did. But as time went on, he admitted that his intentions weren’t so innocent.
“I still meet people all over the place, everywhere in the world who’ve had experiences, bitter or otherwise, as a result of our little experiment in broadcasting,” he said in a filmed interview that is available on YouTube. “I suppose we had it coming to us. We were fed up with the way in which everything that came over this new magic box, the radio, was being swallowed. Anything that came through that new machine was believed. So, in a way, our broadcast was an assault on the credibility of that machine. We wanted people to understand that they shouldn’t take any opinion pre-digested, and shouldn’t swallow anything that came through the tap. Whether it was radio or not.”
|
Decades before the term “fake news” became familiar, there was “The War of the Worlds.” The infamous 1938 radio broadcast, inspired by the H.G. Wells novel of the same name, announced to fans of the CBS Radio drama series Mercury Theatre on the Air that Martians had crash-landed in a farmer’s field in Grovers Mill, New Jersey, and were invading the earth.
It was the golden age of radio, and Sunday night was prime time. October 30, 1938 also happened to be mischief night. Led by 23-year-old Orson Welles, the theater company decided to take things a bit further than usual and give listeners a jolt. Just how much of a jolt they intended remains in question.
An announcer who claimed to be at the crash site just a few miles from Princeton breathlessly described a slimy Martian slithering its way out of a metallic cylinder.
“Good heavens, something’s wriggling out of the shadow like a gray snake,” he began. “Now here’s another and another one and another one! They look like tentacles to me. I can see the thing’s body now. It’s large, large as a bear. It glistens like wet leather…. I can hardly force myself to keep looking at it, it’s so awful! The eyes are black and gleam like a serpent. The mouth is kind of V-shaped with saliva dripping from its rimless lips that seem to quiver and pulsate.”
It was all a spectacular hoax, of course. But to some listeners across the country, the sophisticated sound effects and supposedly terrified announcers reporting Martians firing “heat-ray“ weapons created chaos. Newspaper reports at the time said people claimed they saw things that didn’t exist, and crowded the roadways in an effort to escape the invasion. Local legend has it that in Grovers Mill, an inebriated farmer shot at the wooden water tower because he thought it was an alien (never proven, but people who grew up in the West Windsor town have recalled seeing bullet holes in the tower).
|
yes
|
Ancient Civilizations
|
Did Pharaoh Ramses II dye his hair red?
|
yes_statement
|
"pharaoh" ramses ii dyed his "hair" "red".. ramses ii had "red" "hair".
|
https://historum.com/t/was-ramses-ii-really-a-ginger.189257/
|
Was Ramses II really a ginger? | History Forum
|
Go to page
This might be a fairly silly thread, but I remember hearing that based on his mummy we think Ramses II had red hair? This seems pretty improbable just considering it isn’t exactly the hair colour that most springs to mind when imagining someone from Egypt. I thought to myself, are we sure his mummy’s hair wasn’t just reddish due to being dyed with henna or something? Or due to decomposition maybe. But I guess if people who’ve examined the guy’s remains say he was a ginger then maybe he was? Are there any archaeology academic papers that cover this maybe?
Also I assume it’s cool to ask Egypt related questions in a separate thread but if it’s preferable to ask it in the big “just ancient Egypt” thread just let me know.
Personally, I was addressing the questions the OP presented in a more general context. Rameses' age doesn't have anything to do with hair color nor with whether or not he had hair.
Everyone I can think of in my own family has lived past 90 and has still had a full head of hair. At least half still had recognizable flecks of their original hair color. Losing your hair color as you age isn't a given; it's actually primarily a function of dietary habits and the presence of carcinogens which leach and, consequently, bleach your hair. Biologically, losing hair color isn't a given, nor is baldness. It's an indicator of underlying health issues.
Rameses' hair was white at the age he died, but dyed with henna to look reddish...
But microscopic analysis of his roots determined that in his youth his hair was red and that the henna may well have been his vanity trying to look younger.
a Ginger is a Nordic redhead, which confers very pale skin, blue or green eyes.... however, Blacks in Africa also have been known to have a gene variant that gives them a darker red or auburn hair color, usually with brown or hazel eyes...
This might be a fairly silly thread, but I remember hearing that based on his mummy we think Ramses II had red hair? This seems pretty improbable just considering it isn’t exactly the hair colour that most springs to mind when imagining someone from Egypt. I thought to myself, are we sure his mummy’s hair wasn’t just reddish due to being dyed with henna or something? Or due to decomposition maybe. But I guess if people who’ve examined the guy’s remains say he was a ginger then maybe he was? Are there any archaeology academic papers that cover this maybe?
Also I assume it’s cool to ask Egypt related questions in a separate thread but if it’s preferable to ask it in the big “just ancient Egypt” thread just let me know.
Professor P. F. Ceccaldi, with a research team behind him, studied some hairs which were removed from the mummy's scalp. Ramesses II was 90 years-old when he died, and his hair had turned white. Ceccaldi determined that the reddish-yellow colour of the mummy's hair had been brought about by its being dyed with a dilute henna solution; it proved to be an example of the cosmetic attentions of the embalmers. However, traces of the hair's original colour (in youth), remain in the roots, even into advanced old age. Microscopic examinations proved that the hair roots contained traces of natural red pigments, and that therefore, during his youth, Ramesses II had been red-haired. It was concluded that these red pigments did not result from the hair somehow fading, or otherwise altering post-mortem, but did indeed represent Ramesses' natural hair colour. Ceccaldi also studied a cross-section of the hairs, and he determined from their oval shape, that Ramesses had been "cymotrich" (wavy-haired). Finally, he stated that such a combination of features showed that Ramesses had been a "leucoderm" (white-skinned person). [Balout, et al. (1985) 254-257.]
Scientific analysis of his (Ramses II) has confirmed that in his youth, the king was indeed a natural redhead. (Tyldesley 2001).
Professor P. F. Ceccaldi, with a research team behind him, studied some hairs which were removed from the mummy's scalp. Ramesses II was 90 years-old when he died, and his hair had turned white. Ceccaldi determined that the reddish-yellow colour of the mummy's hair had been brought about by its being dyed with a dilute henna solution; it proved to be an example of the cosmetic attentions of the embalmers. However, traces of the hair's original colour (in youth), remain in the roots, even into advanced old age. Microscopic examinations proved that the hair roots contained traces of natural red pigments, and that therefore, during his youth, Ramesses II had been red-haired. It was concluded that these red pigments did not result from the hair somehow fading, or otherwise altering post-mortem, but did indeed represent Ramesses' natural hair colour. Ceccaldi also studied a cross-section of the hairs, and he determined from their oval shape, that Ramesses had been "cymotrich" (wavy-haired). Finally, he stated that such a combination of features showed that Ramesses had been a "leucoderm" (white-skinned person). [Balout, et al. (1985) 254-257.]
Scientific analysis of his (Ramses II) has confirmed that in his youth, the king was indeed a natural redhead. (Tyldesley 2001).
I don't believe in this lab analysis. Probably needs further investigation. Seems to me the Professor was just hypothesising. Ramses was most likely black haired. No one lives to be 90 in Egypt by being a ginger. The sun intensity is lethal for those of that complexion. Ramses II was always depicted as red skinned and this is not stylistic because Nubians were depicted as tar black and some Middle Easterners as ivory white next to him. It was just a fashion. He was most likely Middle Eastern looking though.
“the race (of the mummy), by diameters, indices, angles and cranial or facial profiles: is an a priori Berber type. … the hairs, of exceptional interest because of their state of conservation, are fine, supple, slightly wavy by place, of a reddish-blond colour pulling hard towards yellowish. Of oval cross-section, and cross-referencing all other anthropometric observations, they are characteristic of the hair of a “cymotrichous leucoderm", close to the Mediterraneans of Prehistory, like a Berber, of white skin - and not of a Nubian, of black skin.... Microscopic examinations revealed a practically intact morphology and natural red pigments: he was therefore a real redhead.”
‘The Gebelein predynastic mummies are six naturally mummified bodies, dating to approximately 3400 BC from the Late Predynastic period of Ancient Egypt. They were the first complete predynastic bodies to be discovered. … Since 1901, the first body excavated has remained on display in the British Museum. This body was originally nicknamed Ginger due to his red hair"
Red hair and blond hair crops up in Egypt and the Middle East on occasion. There have been a lot of foreign troops that have visited the areas. The Ptolemies also recruited Galatians into their armies and settled them on estates. The Galatians were a Celtic people, which has a history of producing red hair. The Celts were also known to serve as mercenaries throughout the Mediterranean. I have seen a number of "Black People" that had Red hair, Freckles and light colored eyes.
Red hair and blond hair crops up in Egypt and the Middle East on occasion. There have been a lot of foreign troops that have visited the areas. The Ptolemies also recruited Galatians into their armies and settled them on estates. The Galatians were a Celtic people, which has a history of producing red hair.
In ancient Egypt it has nothing to do with that, as Predynastic mummies already had red hair. Predynastic Egyptians have been described as being similar to Berbers (as was Ramses II), and Berbers sometimes have red or blond hair, particularly Kabyles and Riffians.
Well, it seems pretty conclusive that the victor (well, kind of) of Kadesh did indeed probably have red-ish, or at least auburn hair. The Egyptologist (Gaston Maspero) who first discovered the king's mummy corroborates the more recent French medical examination. At least according to this quote from Wikipedia, so take it with a grain of salt. But the quote comes from a book, so it's probably genuine. If anyone has John Romer's The Valley of the Kings and can confirm that would be cool.
So yeah, "possibly auburn in life". I guess redheads were more common in ancient Egypt than I thought. Cool!
Anyways, I guess the main question's been answered with a fairly conclusive "probably yes". Thanks for the help. Feel free to keep posting relevant books or Egyptology/archaeology papers about the subject. But let's not get too off-topic please. Thanks.
|
Ramesses II was 90 years-old when he died, and his hair had turned white. Ceccaldi determined that the reddish-yellow colour of the mummy's hair had been brought about by its being dyed with a dilute henna solution; it proved to be an example of the cosmetic attentions of the embalmers. However, traces of the hair's original colour (in youth), remain in the roots, even into advanced old age. Microscopic examinations proved that the hair roots contained traces of natural red pigments, and that therefore, during his youth, Ramesses II had been red-haired. It was concluded that these red pigments did not result from the hair somehow fading, or otherwise altering post-mortem, but did indeed represent Ramesses' natural hair colour. Ceccaldi also studied a cross-section of the hairs, and he determined from their oval shape, that Ramesses had been "cymotrich" (wavy-haired). Finally, he stated that such a combination of features showed that Ramesses had been a "leucoderm" (white-skinned person). [Balout, et al. (1985) 254-257.]
Scientific analysis of his (Ramses II) has confirmed that in his youth, the king was indeed a natural redhead. (Tyldesley 2001).
I don't believe in this lab analysis. Probably needs further investigation. Seems to me the Professor was just hypothesising. Ramses was most likely black haired. No one lives to be 90 in Egypt by being a ginger. The sun intensity is lethal for those of that complexion. Ramses II was always depicted as red skinned and this is not stylistic because Nubians were depicted as tar black and some Middle Easterners as ivory white next to him. It was just a fashion. He was most likely Middle Eastern looking though.
|
yes
|
Ancient Civilizations
|
Did Pharaoh Ramses II dye his hair red?
|
yes_statement
|
"pharaoh" ramses ii dyed his "hair" "red".. ramses ii had "red" "hair".
|
https://generalist.academy/2019/08/31/red-headed-pharaoh/
|
Red-headed pharaoh – The Generalist Academy
|
Red-headed pharaoh
Ramesses II was the most famous and powerful pharaoh of Egypt’s New Kingdom. And we’re pretty sure that he was a redhead.
Wolfman12405 [CC BY-SA 4.0], via Wikimedia Commons
Ramesses lived from about 1303 to 1213 BCE, and he did a lot in that time. Conquered Canaan, invaded Nubia, built some amazing temple complexes, had around a hundred kids, died at age 90, got mummified and buried in the Valley of the Kings. His Greek name and history inspired one of the best known poems of the English Romantic period, Shelley’s Ozymandias:
My name is Ozymandias, king of kings:
Look on my works, ye Mighty, and despair!
Jumping forward about three thousand years, Ramesses’ mummy was dug up and shipped off to Paris. He even had a passport made for the trip, although that’s another story. It was an exceptionally well preserved corpse, and included skin and hair.
His hair was red. He was ninety years old when he died, so his hair at that time was white and just dyed red. But, in the tradition of older people everywhere, he dyed his hair the colour that it had in youth. Ramesses was a ginger.
Natural redheads are rare but not unheard of in the historical record. Some other notable redheaded leaders:
|
Red-headed pharaoh
Ramesses II was the most famous and powerful pharaoh of Egypt’s New Kingdom. And we’re pretty sure that he was a redhead.
Wolfman12405 [CC BY-SA 4.0], via Wikimedia Commons
Ramesses lived from about 1303 to 1213 BCE, and he did a lot in that time. Conquered Canaan, invaded Nubia, built some amazing temple complexes, had around a hundred kids, died at age 90, got mummified and buried in the Valley of the Kings. His Greek name and history inspired one of the best known poems of the English Romantic period, Shelley’s Ozymandias:
My name is Ozymandias, king of kings:
Look on my works, ye Mighty, and despair!
Jumping forward about three thousand years, Ramesses’ mummy was dug up and shipped off to Paris. He even had a passport made for the trip, although that’s another story. It was an exceptionally well preserved corpse, and included skin and hair.
His hair was red. He was ninety years old when he died, so his hair at that time was white and just dyed red. But, in the tradition of older people everywhere, he dyed his hair the colour that it had in youth. Ramesses was a ginger.
Natural redheads are rare but not unheard of in the historical record. Some other notable redheaded leaders:
|
yes
|
Ancient Civilizations
|
Did Pharaoh Ramses II dye his hair red?
|
yes_statement
|
"pharaoh" ramses ii dyed his "hair" "red".. ramses ii had "red" "hair".
|
https://redhairedroots.com/2018/04/ancient-egyptians-had-red-hair/
|
Redheads Do Rule: Ancient Egyptian Pharaohs Had Red Hair ...
|
Redheads Do Rule: Ancient Egyptian Pharaohs Had Red Hair
Ramses II had red hair.
Claims have been made against Egyptian Pharaoh Ramses II having had red hair, stating that his hair was merely dyed with henna. I would agree that he dyed it, because most redheads are proud of their red hair and greatly desire it to remain so even when they age and it turns white. Well, recent scientific research suggests he really did have red hair: it was discovered that his hair roots contained red pigment, signifying that he was indeed a redhead.
The family of Ramses worshipped Seth, the red-haired god of chaos, because of their belief in a divine lineage to him, proof lying in the fact that many of them had red hair. The father of Ramses II, Seti I, had red hair.
A mummy of a red-haired man over 5,000 years old has been uncovered in Egypt, appropriately named Ginger.
I have told this information to many redheads and I always get the same mouth-dropping and head-shaking, because the assumption is that redheads were up in the northern latitudes and that Egyptians must have looked Middle Eastern, with darker hair and skin.
The image to the left was found on the tomb of the granddaughter of King Khufu.
Note red hair.
Further reading on red hair in Egypt www.irishoriginsofcivilization.com/chapter-ten.html
|
Redheads Do Rule: Ancient Egyptian Pharaohs Had Red Hair
Ramses II had red hair.
Claims have been made against Egyptian Pharaoh Ramses II having had red hair, arguing that his hair was merely dyed with henna. I would agree that he dyed it, because most redheads are proud of their red hair and greatly desire it to remain so even when they age and it turns white. Well, recent scientific research suggests he really did have red hair: it was discovered that his hair roots contained red pigment, signifying that he was indeed a redhead.
The family of Ramses worshipped Seth, the red-haired god of chaos, because of their belief in a divine lineage from him, a belief reinforced by the fact that many of them had red hair. The father of Ramses II, Seti I, had red hair.
A mummy of a red-haired man over 5,000 years old has been uncovered in Egypt, appropriately named Ginger.
I have told this information to many redheads and I always get the same dropped jaws and shaking heads, because people assume redheads belonged up in the northern latitudes and that Egyptians must have looked Middle Eastern, with darker hair and skin.
The image to the left was found on the tomb of the granddaughter of King Khufu.
Note red hair.
|
no
|
Ancient Civilizations
|
Did Pharaoh Ramses II dye his hair red?
|
yes_statement
|
"pharaoh" ramses ii dyed his "hair" "red".. ramses ii had "red" "hair".
|
https://diaryofaneccentric.wordpress.com/2008/12/17/guest-post-by-michelle-moran-author-of-nefertiti-and-the-heretic-queen-with-giveaway/
|
Guest Post: Michelle Moran, Author of Nefertiti and The Heretic Queen
|
First of all, thank you very much for having me here! When you first asked me to write a guest post, I knew immediately what I wanted to talk about. History’s surprises. I don’t mean the small surprises an author uncovers during the lengthy process of researching for an historical novel, such as the fact that the Romans liked to eat a fish sauce called garum which was made from fermented fish. Ugh. No, I mean the large surprises which alter the way we think about an ancient civilization and humanity.
The Heretic Queen is the story of Nefertari and her transformation from an orphaned and unwanted princess to one of the most powerful queens of ancient Egypt. She married Ramesses II and possibly lived through the most famous exodus in history. I assumed that when I began my research I would discover that Ramesses was tall, dark and handsome (not unlike the drool-worthy Yul Brynner in The Ten Commandments). And I imagined that he would have been victorious in every battle, given his long reign of more than thirty years and his triumphant-sounding title, Ramesses the Great. But neither of these assumptions turned out to be true.
My first surprise came when I first visited the Hall of Mummies in the Egyptian Museum in Cairo. Contrary to every single media portrayal of Ramesses and every movie ever made, it turns out the Pharaoh was not tall, dark and handsome as I had expected, but tall, light and red-headed (which was just as fine by me)! When his mummy was recovered in 1881, Egyptologists were able to determine that he had once stood five feet seven inches tall, had flaming red hair, and a distinctive nose that his sons would inherit. There were those who contended that his mummy had red hair because of burial dyes or henna, but French scientists laid these theories to rest after a microscopic analysis of the roots conclusively proved he was a red-head like Set, the Egyptian god of chaos. As I peered through the heavy glass which separated me from the man commonly referred to as the greatest Pharaoh of ancient Egypt, my pre-conceived notions of Ramesses II fell away. I knew that the oldest mummy ever discovered in Egypt had had red hair, but to see red hair on a mummy in person was something else entirely.
My second surprise came as I was attempting to piece together what kind of man Ramesses II had been. I assumed, given his lengthy reign, that he must have been a great warrior who was level-headed in battle and revered as a soldier. Pharaohs who were inept at waging war didn’t tend to have very lengthy reigns. There were always people on the horizon – Hyksos, Hittites, Mitanni – who wanted Egypt for themselves, not to mention internal enemies who would have loved to usurp the throne. But while researching Ramesses’s foreign policy, a very different man began to emerge. One who was young, rash, and sometimes foolish. His most famous battle—the Battle of Kadesh—ended not in victory, but in a humiliating truce after he charged into combat strategically unprepared and very nearly lost the entire kingdom of Egypt. In images from his temple in Abu Simbel, he can be seen racing into this war on his chariot, his horse’s reins tied around his waist as he smites the Hittites in what he depicted as a glorious triumph. Nefertari is believed to have accompanied him into this famous battle, along with one of his other wives. First, I had to ask myself, what sort of man brings his wives to war? Clearly, one who was completely confident of his own success. Secondly, I had to wonder what this battle said about Ramesses’s character.
Rather than being a methodical planner, Ramesses was clearly the type of Pharaoh who was swayed – at least on the battlefield – by his passions. However, his signing of a truce with the Hittites seemed significant to me for two reasons. One, it showed that he could be humble and accept a stalemate (whereas other Pharaohs might have tried to attack the Hittites the next season until a definitive conqueror was declared). And two, it showed that he could think outside the box. Ramesses’s Treaty of Kadesh is the earliest copy of a treaty that has ever been found. When archaeologists discovered the tablet it was written in both Egyptian and Akkadian. It details the terms of peace, extradition policies and mutual-aid clauses between Ramesses’s kingdom of Egypt and the powerful kingdom of Hatti. Today, the original treaty, written in cuneiform and discovered in Hattusas, is displayed in the United Nations building in New York to serve as a reminder of the rewards of diplomacy. For me, it also serves as a reminder that Ramesses was not just a young, rash warrior, but a shrewd politician.
There were other surprises as well; about the personal history of my narrator Nefertari, the Exodus, and even the Babylonian legends which bear a striking resemblance to Moses’s story in the Bible. Researching history always comes with revelations, and it’s one of the greatest rewards of being an historical fiction author. There’s nothing I like better than being surprised and having my preconceptions crumble, because if I’m surprised, it’s likely that the reader will be surprised as well.
Thanks, Michelle, for taking time out of your busy schedule to stop by and share your story with us.
In January, I’ll be reviewing The Heretic Queen and Michelle will be stopping by for another guest post and to offer two hardcover copies of the book to my readers! Isn’t she a gem?
Michelle is generously offering one lucky reader a paperback copy of Nefertiti. If you’d like to be entered, please leave a comment in this post with your favorite historical fiction title. Also, please be sure I have your e-mail address or blog URL. If I don’t have a way to contact you, your entry won’t be counted. This giveaway is open to readers everywhere and will end 11:59 pm EST on December 21, 2008.
I love historical fiction – I would have to say the oldest ones that I remember reading would be The Kent Family Chronicles by John Jakes. Then about 10 years ago I couldn’t put down The Good Earth by Pearl S. Buck but most recently it would have to be Mozart’s Sister by Nancy Moser or In the Shadow of Lions by Ginger Garrett.
I mostly read English historical fiction, one of my favorites is A Place Beyond Courage by Elizabeth Chadwick. I’ve been hear so many good things about Michelle Moran’s books — I would love to win a copy. Thanks.
lcbrower40 at gmail dot com
I would love to be entered for this giveaway Anna.. but I have never read historical fiction until now.. so i don’t really have a favorite:(
I definitely want to win a copy of nefertiti though:)
ramyasbookshelf(at)gmail(dot)com
Please enter me. I have only read 1 historical fiction so far so I guess that would have to be my favorite by default. It was Here Be Dragons. This sounds like a great book to continue in the historical fiction genre.
I absolutely love Sharon Kay Penman’s work and also that of Elizabeth Chadwick, but I have to say that my all-time favorite historical fiction novel is SWORD AT SUNSET by Rosemary Sutcliff. That was my first brush with a realistic King Arthur.
I’m really looking forward to reading Michelle Moran’s work. She’s on my TBR list for 2009. Thanks for the chance to win one of her novels!
geebee.reads AT gmail DOT com
My favorite historical fiction novel would have to be the North and South trilogy of books. North and South, Love and War and Heaven and Hell. John Jakes is by far my favorite historical novelist. I also like The Kent Family Chronicles series of books,Homeland,American Dreams ,California Gold, Charleston. All excellent historical novels.
traymona[at]aol.com
It’s so hard to pick just one since I did major in history and I’m a social studies teacher. History is my life! Hm… I guess I would say that my favorite historical fiction book is I, Claudius by Graves.
I haven’t read much historical fiction in recent years, but the newer publications look fabulous. I’m about to start reading Texas Belles by Kimberley Comeaux — I know it’s going to be on my favorites list.
Please enter me in the drawing for Nefertiti. Thank you for sharing your interview.
I really would like to win this book. All the reviews of Nefertiti I've read are great. My favorite historical novel this year was The Gates of Trevalyan, a Civil War story but there are so many.
florida982002@yahoo dot com
I want to win! Umm… I think my favorite historical fiction might be The Other Boleyn Girl. It's certainly one of the most memorable–the three days I was reading it, I kept vividly dreaming I was part of Henry VIII's court, so much so that I was surprised to wake up in the 21st century. I really liked The Cure (YA) by Sonia Levitan, too.
|
My first surprise came when I first visited the Hall of Mummies in the Egyptian Museum in Cairo. Contrary to every single media portrayal of Ramesses and every movie ever made, it turns out the Pharaoh was not tall, dark and handsome as I had expected, but tall, light and red-headed (which was just as fine by me)! When his mummy was recovered in 1881, Egyptologists were able to determine that he had once stood five feet seven inches tall, had flaming red hair, and a distinctive nose that his sons would inherit. There were those who contended that his mummy had red hair because of burial dyes or henna, but French scientists laid these theories to rest after a microscopic analysis of the roots conclusively proved he was a red-head like Set, the Egyptian god of chaos. As I peered through the heavy glass which separated me from the man commonly referred to as the greatest Pharaoh of ancient Egypt, my pre-conceived notions of Ramesses II fell away. I knew that the oldest mummy ever discovered in Egypt had had red hair, but to see red hair on a mummy in person was something else entirely.
My second surprise came as I was attempting to piece together what kind of man Ramesses II had been. I assumed, given his lengthy reign, that he must have been a great warrior who was level-headed in battle and revered as a soldier. Pharaohs who were inept at waging war didn’t tend to have very lengthy reigns. There were always people on the horizon – Hyksos, Hittites, Mitanni – who wanted Egypt for themselves, not to mention internal enemies who would have loved to usurp the throne. But while researching Ramesses’s foreign policy, a very different man began to emerge. One who was young, rash, and sometimes foolish. His most famous battle—the Battle of Kadesh—ended not in victory, but in a humiliating truce after he charged into combat strategically unprepared and very nearly lost the entire kingdom of Egypt.
|
no
|
Ancient Civilizations
|
Did Pharaoh Ramses II dye his hair red?
|
yes_statement
|
"pharaoh" ramses ii dyed his "hair" "red".. ramses ii had "red" "hair".
|
http://strangebtrue.blogspot.com/2019/01/the-red-haired-mummies-of-egypt.html
|
The RED-HAIRED MUMMIES of EGYPT - MFS-Strange but TRUE
|
Pages
Thursday, January 3, 2019
The RED-HAIRED MUMMIES of EGYPT
Professor P. F. Ceccaldi, with a research team, studied some hairs from the mummy's scalp. Ramesses II was thought to be 87 years old when he died, and his hair had turned white. Ceccaldi determined that the reddish-yellow color of the hair was due to dyeing with a dilute henna solution. Many Egyptians dyed their hair, and this personal habit was preserved by the embalmers.
Red-haired Ramesses II
However, traces of the hair's original color remained in the roots. Microscopic examinations showed that the hair roots contained natural red pigments, and that therefore, during his younger days, Ramesses II had been a red head. Analysis concluded that these red pigments did not result from the hair somehow fading, or otherwise being altered after death, but did represent Ramesses' natural hair color. Ceccaldi also studied the cross-section of the hairs, and determined from their oval shape, that Ramesses had been "cymotrich" (wavy-haired). Finally, he stated that such a combination of features showed that Ramesses had been a "leucoderm" (white-skinned person).
THE RED HAIRED RAMSES II - LAST SIGNIFICANT WHITE PHARAOH
Egypt's last display of national vigor came with the red haired Pharaoh Ramses II (1292 - 1225 BC). Ramses II managed to re-establish the already decaying Egyptian Empire by recapturing much land in Nubia.
He also fought a series of battles against invading Indo-Europeans, the Hittites. This was culminated with the battle of Kadesh in northern Syria. Ramses signed a treaty with the Hittites in 1258 BC, which ended the war. In terms of the treaty, Ramses took as his wife an Indo-European Hittite princess. His other achievements included the building of the rock-hewn temple of Abu Simbel, the great hall in the Temple of Amon at Karnak, and the mortuary temple at Thebes.
After this king, Egypt entered into a steady period of decay, caused directly by the elimination of the original Egyptians, and their replacement with a mixed population made up of Black, Semitic and the remnant White population. This racially divergent nation was never again to reach the heights achieved by the First, Second or the first part of the Third Kingdoms. In these later years there were competing claimants to the pharaohs throne, many of whom, racially speaking, bore no resemblance to the original pharaohs at all.
The mummy of the wife of King Tutankhamen has auburn hair.
A mummy with red hair, red mustache and red beard was found by the pyramids at Saqqara.
Red-haired mummies were found in the crocodile-caverns of Aboufaida.
The book HISTORY OF EGYPTIAN MUMMIES mentions a mummy with reddish-brown hair.
The mummies of Rameses II and Prince Yuaa have fine silky yellow hair. The mummy of another pharaoh, Thothmes II, has light chestnut-colored hair.
An article in a leading British anthropological journal states that many mummies have dark reddish-brown hair. Professor Vacher De Lapouge described a blond mummy found at Al Amrah, which he says has the face and skull measurements of a typical Gaul or Saxon.
A blond mummy was found at Kawamil along with many chestnut-colored ones.
Chestnut-haired mummies have been found at Silsileh.
The mummy of Queen Tiy has "wavy brown hair."
Unfortunately, only the mummies of a very few pharaohs have survived to the 20th century, but a large proportion of these are blond.
The Egyptians have left us many paintings and statues of blondes and redheads. Amenhotep III's tomb painting shows him as having light red hair. Also, his features are quite Caucasian.
A farm scene from around 2000 B.C. in the tomb of the nobleman Meketre shows redheads.
An Egyptian scribe named Kay at Sakkarah around 2500 B.C. has blue eyes.
The tomb of Menna (18th Dynasty) at West Thebes shows blond girls.
The god Horus is usually depicted as white. He is very white in the Papyrus Book of the Dead of Lady Cheritwebeshet (21st Dynasty), found in the Egyptian Museum in Cairo.
A very striking painting of a yellow-haired man hunting from a chariot can be found in the tomb of Userhet, Royal Scribe of Amenophis II. The yellow-haired man is Userhet. The same tomb has paintings of blond soldiers. The tomb of Menna also has a wall painting showing a blond man supervising two dark-haired workers scooping grain.
A very attractive painting is found on the wall of a private tomb in West Thebes from the 18th Dynasty. The two deceased parents are white people with black hair. Mourning them are two pretty fair-skinned girls with light blond hair and their red-haired older brother.
Queen Thi is painted as having a rosy complexion, blue eyes and blond hair. She was co-ruler with her husband Amenhotep III and it has been said of their rule: "The reign of Amenhotep III was the culminating point in Egyptian history, for never again, in spite of the exalted effort of the Ramessides, did Egypt occupy so exalted a place among the nations of the world as she had in his time."
Amenhotep III looks northern European in his statues.
Paintings of people with red hair and blue eyes were found at the tomb of Bagt in Beni Hassan. Many other tombs at Beni Hassan have paintings of individuals with blond and red hair as well as blue eyes.
Paintings of blonds and redheads have been found among the tombs at Thebes.
Blond hair and blue eyes were painted at the tomb of Pharaoh Menphtah in the valley of the Kings.
Paintings from the Third Dynasty show native Egyptians with red hair and blue eyes.
They are shepherds, workers and bricklayers.
A blond woman was painted at the tomb of Djeser-ka-ra-seneb in Thebes.
A model of a ship from about 2500 B.C. is manned by five blond sailors.
The god Nuit was painted as white and blond.
A painting at the tomb of Meresankh III at Giza, from about 2485 B.C., shows white skin and red hair.
Two statues from about 2570 B.C., found in the tombs at Medum, show Prince Rahotep and his wife Nofret. He has light green stones for eyes. She has violet-blue stones.
A painting from Iteti's tomb at Saqqara shows a very Nordic-looking man with blond hair.
Harvard Professor Carleton Coon, in his book THE RACES OF EUROPE, tells us that "many of the officials, courtiers, and priests, representing the upper class of Egyptian society but not the royalty, looked strikingly like modern Europeans, especially long-headed ones." (Note: Nordics are long-headed.) Long-headed Europeans are most common in Britain, Scandinavia, the Netherlands, and northern Germany.
Time-Life books put out a volume called RAMESES II THE GREAT. It has a good picture of the blond mummy of Rameses II. Another picture can be found in the book X-RAYING THE PHARAOHS, especially the picture on the jacket cover. It shows his yellow hair.
A book called CHRONICLE OF THE PHARAOHS was recently published showing paintings, sculptures and mummies of 189 pharaohs and leading personalities of Ancient Egypt. Of these, 102 appear European, 13 look Black, and the rest are hard to classify. All nine mummies look like our Europeans.
The very first pharaoh, Narmer, also known as Menes, looks very Caucasian.
The same can be said for Khufu's cousin Hemon, who designed the Great Pyramid of Giza, with help from Imhotep. A computer-generated reconstruction of the face of the Sphinx shows a European-looking face.
It was once painted sunburned red. The Egyptians often painted upper class men as red and upper class women as white; this is because the men became sun-burned or tanned while outside under the burning Egyptian sun. The women, however, usually stayed inside.
"The predynastic Egyptians, that is to say, that stratum of them which was indigenous to North Africa, belonged to a white or light-skinned race with fair hair, who in many particulars resembled the Libyans, who in later historical times lived very near the western bank of the Nile." [E. A. W. Budge, Egypt in the Neolithic and Archaic Periods (London: Kegan Paul, Trench & Trübner, 1902), p. 49.]
Later, in the same book, Budge referred to a pre-dynastic statuette that: "has eyes inlaid with lapis-lazuli, by which we are probably intended to understand that the woman here represented had blue eyes." [Ibid., p. 51.]
In 1925, the Oxford don L. H. Dudley Buxton, wrote the following concerning ancient Egyptian crania:
"Among the ancient crania from the Thebaid in the collection in the Department of Human Anatomy in Oxford, there are specimens which must unhesitatingly be considered to be those of Nordic type. [L. H. D. Buxton, The Peoples of Asia (London: Kegan Paul, Trench & Trübner, 1925), p. 50.]
The Scottish physical anthropologist Robert Gayre has written, that in his considered opinion:
"Ancient Egypt, for instance, was essentially a penetration of Caucasoid racial elements into Africa . . . This civilisation grew out of the settlement of Mediterraneans, Armenoids, even Nordics, and Atlantics in North Africa . . ." [R. Gayre of Gayre, Miscellaneous Racial Studies, 1943-1972 (Edinburgh: Armorial, 1972), p. 85.]
When English archaeologist Howard Carter excavated the tomb of Tutankhamen in 1922, he discovered in the Treasury a small wooden sarcophagus. Within it lay a memento of Tutankhamen's beloved grandmother, Queen Tiye: "a curl of her auburn hair." [C. Desroches-Noblecourt, Tutankhamen: Life and Death of a Pharaoh (Harmondsworth: Penguin Books, 1972), p. 65.] (See mummy picture)
Queen Tiye (18th Dynasty), was the daughter of Thuya, a Priestess of the God Amun. Thuya's mummy, which was found in 1905, has long, red-blonde hair. Examinations of Tiye's mummy proved that she bore a striking resemblance to her mother. [B. Adams, Egyptian Mummies (Aylesbury: Shire Publications, 1988), p. 39.] (See mummy picture)
Princess Ranofri, a daughter of Pharaoh Tuthmosis III (18th Dynasty), is depicted as a blonde in a wall painting that was recorded in the 19th century, by the Italian Egyptologist Ippolito Rosellini. [Ibid., p. 132.]
American Egyptologist Donald P. Ryan excavated tomb KV 60, in the Valley of the Kings, during the course of 1989. Inside, he found the mummy of a royal female, which he believes to be the long-lost remains of the great Queen Hatshepsut (18th Dynasty). Ryan describes the mummy as follows:
"The mummy was mostly unwrapped and on its back. Strands of reddish-blond hair lay on the floor beneath the bald head." [Ibid., p. 87.]
Manetho, a Graeco-Egyptian priest who flourished in the 3rd century BC, wrote in his Egyptian History, that the last ruler of the 6th Dynasty was a woman by the name of Queen Nitocris. He has this to say about her:
"There was a queen Nitocris, braver than all the men of her time, the most beautiful of all the women, blonde-haired with rosy cheeks. By her, it is said, the third pyramid was reared, with the aspect of a mountain." [W. G. Waddell, Manetho (London: William Heinemann, 1980), p. 57.]
According to the Graeco-Roman authors Pliny the Elder, Strabo and Diodorus Siculus, the Third Pyramid was built by a woman named Rhodopis. When translated from the original Greek, her name means "rosy-cheeked". [G. A. Wainwright, The Sky-Religion in Egypt (Cambridge: University Press, 1938), p. 42.]
We may also note that a tomb painting recorded by the German Egyptologist C. R. Lepsius in the 1840s, depicts a blonde woman by the name of Hetepheres (circa 5th Dynasty). The German scholar Alexander Scharff, observed that she was described as being a Priestess of the Goddess Neith, a deity who was sacred to the blond-haired Libyans of the Delta region. He goes on to state that her name is precisely the same as that of Queen Hetepheres II, who is also shown as fair-haired, in a painting on the wall of Queen Meresankh III's tomb. He deduced from all of this, that the two women may well have been related, and he suggested that Egypt during the Age of the Pyramids, was dominated by an elite of blonde women. [A. Scharff, "Ein Beitrag zur Chronologie der 4. ägyptischen Dynastie." Orientalistische Literaturzeitung XXXI (1928) pp. 73-81.]
The twentieth prayer of the 141st chapter of the ancient Egyptian Book of the Dead, is dedicated "to the Goddess greatly beloved, with red hair." [E. A. W. Budge, The Book of the Dead (London: Kegan Paul, Trench & Trübner, 1901), p. 430.] In the tomb of Pharaoh Merenptah (19th Dynasty), there are depictions of red-haired goddesses. [N. Reeves & R. H. Wilkinson, The Complete Valley of the Kings (London: Thames & Hudson, 1997), p. 149.]
In the Book of the Dead, the eyes of the god Horus are described as "shining," or "brilliant," whilst another passage refers more explicitly to "Horus of the blue eyes". [Budge, op. cit., pp. 421 & 602.] The rubric to the 140th chapter of said book, states that the amulet known as the "Eye of Horus," (used to ward-off the "Evil Eye"), must always be made from lapis-lazuli, a mineral which is blue in colour. [Ibid., p. 427.] It should be noted that the Goddess Wadjet, who symbolised the Divine Eye of Horus, was represented by a snake (a hooded cobra to be precise), and her name, when translated from the original Egyptian, means "blue-green". [A. F. Alford, The Phoenix Solution (London: Hodder & Stoughton, 1998), pp. 266-268.] Interestingly, the ancient Scandanavians claimed that anyone who was blue-eyed (and therefore possessed the power of the Evil Eye), had "a snake in the eye," and blue eyes were frequently compared to the eyes of a serpent. [F. B. Gummere, Germanic Origins (London: David Nutt, 1892), pp. 58, 62.]
In the ancient Pyramid Texts, the Gods are said to have blue and green eyes. [Alford, op. cit., p. 232.] The Graeco-Roman author Diodorus Siculus (I, 12), says that the Egyptians thought the goddess Neith had blue eyes. [C. H. Oldfather, Diodorus of Sicily (London: William Heinemann, 1968), p. 45.]
A text from the mammisi of Isis at Denderah, declares that the goddess was given birth to in the form of a "ruddy woman". [J. G. Griffiths, De Iside et Osiride (Cardiff: University of Wales Press, 1970), p. 451.] Finally, the Greek author Plutarch, in the 22nd chapter of his De Iside et Osiride, states that the Egyptians thought Horus to be fair-skinned, and the god Seth to be of a ruddy complexion. [Ibid., p. 151.]
Yuya (left), and to the right, Tjuyu
Yuya-(Joseph II)
Biblical Joseph, Egyptian Prime Minister during 1400 BC.
Father of Tiy. Yuya's blonde hair and Caucasian facial structure have been well preserved by the embalming process.
Thuya, Wife of Yuya.
Equally blonde and Caucasian. She was the great-grandmother of Tutankhamen.
Mother of Tiy
Egyptian Female Pharaoh: Queen Hatshepsut, wife of Pharaoh Thutmosis II. She ruled Egypt after Thutmosis' death in 1520 BC. Her long blonde hair and facial structure have been well preserved by the embalming process of the time.
|
Pages
Thursday, January 3, 2019
The RED-HAIRED MUMMIES of EGYPT
Professor P. F. Ceccaldi, with a research team, studied some hairs from the mummy's scalp. Ramesses II was thought to be 87 years old when he died, and his hair had turned white. Ceccaldi determined that the reddish-yellow color of the hair was due to dyeing with a dilute henna solution. Many Egyptians dyed their hair, and this personal habit was preserved by the embalmers.
Red-haired Ramesses II
However, traces of the hair's original color remained in the roots. Microscopic examinations showed that the hair roots contained natural red pigments, and that therefore, during his younger days, Ramesses II had been a red head. Analysis concluded that these red pigments did not result from the hair somehow fading, or otherwise being altered after death, but did represent Ramesses' natural hair color. Ceccaldi also studied the cross-section of the hairs, and determined from their oval shape, that Ramesses had been "cymotrich" (wavy-haired). Finally, he stated that such a combination of features showed that Ramesses had been a "leucoderm" (white-skinned person).
THE RED HAIRED RAMSES II - LAST SIGNIFICANT WHITE PHARAOH
Egypt's last display of national vigor came with the red haired Pharaoh Ramses II (1292 - 1225 BC). Ramses II managed to re-establish the already decaying Egyptian Empire by recapturing much land in Nubia.
He also fought a series of battles against invading Indo-Europeans, the Hittites. This was culminated with the battle of Kadesh in northern Syria. Ramses signed a treaty with the Hittites in 1258 BC, which ended the war. In terms of the treaty, Ramses took as his wife an Indo-European Hittite princess.
|
yes
|
Ancient Civilizations
|
Did Pharaoh Ramses II dye his hair red?
|
yes_statement
|
"pharaoh" ramses ii dyed his "hair" "red".. ramses ii had "red" "hair".
|
https://www.bartleby.com/essay/Ramses-II-Whole-Structure-Of-Religion-FJUAGHSUGCB
|
Ramses II: Whole Structure Of Religion - 137 Words | Bartleby
|
Ramses II: Whole Structure Of Religion
Ramses II held a religious belief in the god Seth. Seth was a god who represented wind, chaos, confusion, storms and the desert; he was quite a negative god and wasn't very nice. Ramses II and his father Seti I had a connection with this god, as they were both warrior pharaohs with a violent nature suited to war. Ramses II expressed his belief by dyeing his hair red; his hair represented the god Seth. Ramses II was also turned into a god at the Sed festival, which was celebrated once a pharaoh had reigned for over 30 years. As Ramses II reigned for over 66 years, he would have been a god. The Egyptians worshiped their new god a lot. As Ramses II was a god, he decided to change the whole structure of the religion.
Ramses II
Imagine Egypt in its prime during the 19th Dynasty, where chariots might be racing through the streets, construction of our modern-day wonders was in progress, and merchants and artisans were in the busy marketplace selling their wares. Pharaohs ruled the land, and were seen as gods. During
In the eulogy on the Kuban stele, we have a repeated notion of how Ramesses II is viewed as one of the gods, using metaphors to describe his relationship with specific gods. It is important to note that Ramesses II is not being compared to just any gods; rather, he is described as having the attributes of some of the most popular and powerful gods within Egypt. Ramesses II is described as being “like Re”, his words are like those of Harakhte, he is able to measure more accurately than Thoth, and his mind works just like Ptah's mind. What is notable about these four gods is that both Re and Harakhte are manifestations of the all-important Sun-God of Egypt, Thoth is the wise patron god of Hermopolis, and Ptah is the patron god of Memphis and craftsmen; all in their own right gods of creation.
During each period, there is always a leader who changes a basic component of society: a king or ruler. We find characteristics of leaders as well as what changes they made in the ancient and medieval period. It is important to study ancient history because we can learn
The temple was originally carved out of the mountainside next to the Nile during the reign of Pharaoh Ramesses II in the 13th century BC. It was to serve as a lasting monument to himself and his queen Nefertari, to commemorate his victory at the Battle of Kadesh. He also wanted to intimidate Egypt's neighbors, the Nubians. It was Ramses' way of trying to make an impression upon Egypt's neighbors, as well as to force Egypt's religion upon them. Propaganda in Egyptian art was common. Art can attempt to persuade, publicize and influence the people’s attitudes. Their art work includes paintings, stone carvings, statues, and sculpture and funeral artifacts. Daily life such as field work, special events, political and social hierarchy, battles
The Ancient Egyptians believed that the pharaoh was a god himself, and that his power was given to him by the god Ra. Other pharaohs believed this as well, as was the case with Zoser and the pharaohs of the preceding dynasties.
Ramses II was born on 22 February 1303 BC, the son of Queen Tuy and Seti I. After taking the throne in his teens, he went on to become the third Egyptian pharaoh of the Nineteenth Dynasty in his early 30s, and he ruled Egypt from 31 May 1279 BC to 1212 BC, reigning for 66 years and 2 months before his death.
In the typical life of an Egyptian citizen, one was constantly being influenced by their gods and goddesses because of his or her belief in a polytheistic religion. The gods and goddesses were believed to have power over the forces and elements of nature, and myths about them explained the connection they had between their
Architecture, literature, and the sculpture of 7.25-ton granite busts are all talents of Ramses II, all of which paved his way to fame, power, and an eternal profile that was misunderstood by historians around the world. A man of many talents and achievements, Ramses II was as calculating as he was skilled. He managed to raise an empire to greatness, promote himself to a position of power so that no opponent would ever dare to challenge his reign, and (accidentally) fool historians everywhere centuries after his death. How did he do it? He did it through public promotions, careful calculating and planning, and the sheer power of Egyptian intelligence. Ramses II has plenty of historians fooled that he was a pride drinking ruler, hungry
Another significant pharaoh was Ramses II; he also helped make Egypt stronger through his military strength, architectural knowledge, and even his religious beliefs. Specifically, Ramses II's military was
He did this to “… make the Aten—the god of the solar disk—the head of the state religion” (Readings, p.23). This goes to show how badly he wanted this change, because he decided to change his name. He tried to make changes to the Egyptian religion that was known for its continuity. This led to an innovation called henotheism, which was different than the polytheistic religion Egypt was accustomed to. Henotheism was worshiping one god without denying there were other gods elsewhere in the world (Humanities, p. 180). These changes he was starting to make were all around Egypt's religion and the gods they believed in. He still represented the Egyptians' desire for continuity even though he tried to change the religion, because he wanted what he tried to do to continue even after his death and he still believed in a god, though it was more centered towards one god in particular. When he did this it basically “…reshaped the royal religion at his capital, Amarna” (Humanities, p. 180). It reshaped it because many were accustomed to the religion they had been believing in and many people were not happy with this change. Apparently, it had “…aroused the opposition of conservative nobles who supported the powerful priests of the Theban god, Amen” (Humanities,
This changed with the rise of a new pharaoh named Akhenaten. Akhenaten is most famous for his belief that only one god should be worshipped: Aton, god of the sun (McKay et al. 26). This belief led to the pharaoh moving the capital of the country from Thebes to a new city: Amarna (Redford 22). This was a far cry from the beliefs of the old Egypt, with only one god getting attention, and the others being neglected (it was said that the other gods were still acknowledged as existent, in some way, shape or form) (Redford 13). This new series of beliefs did not hold well with the Egyptian people, who bounced back to polytheism right after his death. Although authors McKay et al. state that the ideas did not survive because of “no connection to the past” and “persecution of nonbelievers”, perhaps the resistance to the new form of religion was because of the loss of power and jobs in the temple sect of the Egyptian economy (McKay 26)? No one can tell for
Ramses had a harsh and profound life, yet he was able to accomplish, build, and expand so many ideas across Egypt and even into today. Ramses II made most decisions based on his involvement whether that was war, politics, or ruling. Egypt had to rely on Ramses II to be
The physical environments of Egypt and Mesopotamia do explain their cultural differences. Egyptians had natural barriers and fertile, predictable land, while Mesopotamians had unpredictable land and no protection from invaders. These key differences are the basis of the cultural differences between the two regions, and explain different parts of their
This goblet, inscribed with the names of King Akhenaten and Queen Nefertiti, is made of travertine (Egyptian alabaster); height 5 ½ in., diameter 4 1/8 in. (MET). When I look at this piece I feel it may commemorate a wedding, an anniversary, or King Akhenaten's deep love and affections for his
The Ancient Egyptians were polytheistic most of the time, which means that they believed in multiple gods. When Akhenaten was pharaoh, the Egyptians were monotheistic, meaning they worshiped only one god. He ended the worship of other gods and claimed that Aten, the lord of all, was the only god in Egypt. The Egyptians didn't like this idea, so on their own,
|
Ramses II: Whole Structure Of Religion
Ramses II held a religious belief in the god Seth. Seth was a god who represented wind, chaos, confusion, storms and the desert; he was quite a negative god and wasn't very nice. Ramses II and his father Seti I had a connection with this god, as they were both warrior pharaohs with a violent nature suited to war. Ramses II expressed his belief by dyeing his hair red; his hair represented the god Seth. Ramses II was also turned into a god at the Sed festival, which was celebrated once a pharaoh had reigned for over 30 years. As Ramses II reigned for over 66 years, he would have been a god. The Egyptians worshiped their new god a lot. As Ramses II was a god, he decided to change the whole structure of the religion.
Ramses II
Imagine Egypt in its prime during the 19th Dynasty, where chariots might be racing through the streets, construction of our modern-day wonders was in progress, and merchants and artisans were in the busy marketplace selling their wares. Pharaohs ruled the land, and were seen as gods. During
In the eulogy on the Kuban stele, we have a repeated notion of how Ramesses II is viewed as one of the gods, using metaphors to describe his relationship with specific gods. It is important to note that Ramesses II is not being compared to just any gods; rather, he is described as having the attributes of some of the most popular and powerful gods within Egypt. Ramesses II is described as being “like Re”, his words are like those of Harakhte, he is able to measure more accurately than Thoth, and his mind works just like Ptah's mind. What is notable about these four gods is that both Re and Harakhte are manifestations of the all-important Sun-God of Egypt, Thoth is the wise patron god of Hermopolis, and Ptah is the patron god of Memphis and craftsmen; all in their own right gods of creation.
During each period, there is always a leader who changes a basic component of society: a king or ruler.
|
yes
|
Ancient Civilizations
|
Did Pharaoh Ramses II dye his hair red?
|
no_statement
|
"pharaoh" ramses ii did not "dye" his "hair" "red".. ramses ii did not have "red" "hair".
|
http://www.egyptsearch.com/forums/ultimatebb.cgi?ubb=print_topic;f=8;t=005500
|
Ramses II - EgyptSearch Forums
|
Curious if the board discussed this last year...if so could someone point me to that thread?
Thanx
Redheaded Pharaoh Ramses II (28 February 2006). Pharaoh Ramses II (of the 19th Dynasty) is generally considered to be the most powerful and influential King that ever reigned in Egypt. He is one of the few rulers who has earned the epithet "the Great". Consequently, his racial origins are of extreme interest.
In 1975, the Egyptian government allowed the French to take Ramesses' mummy to Paris for conservation work. Numerous other tests were performed to determine Ramses' precise racial affinities, largely because the Senegalese scholar Cheikh Anta Diop was claiming at the time that Ramesses was black. Once the work had been completed, the mummy was returned in a hermetically sealed casket, and it has remained largely hidden from public view ever since, concealed in the bowels of the Cairo Museum. The results of the study were published in a lavishly illustrated work, which was edited by L. Balout, C. Roubet and C. Desroches-Noblecourt, and was titled La Momie de Ramsès II: Contribution Scientifique à l'Égyptologie (1985).
Professor P. F. Ceccaldi, with a research team behind him, studied some hairs which were removed from the mummy's scalp. Ramesses II was 90 years old when he died, and his hair had turned white. Ceccaldi determined that the reddish-yellow colour of the mummy's hair had been brought about by its being dyed with a dilute henna solution; it proved to be an example of the cosmetic attentions of the embalmers. However, traces of the hair's original colour (in youth) remain in the roots, even into advanced old age. Microscopic examinations proved that the hair roots contained traces of natural red pigments, and that therefore, during his youth, Ramses II had been red-haired. It was concluded that these red pigments did not result from the hair somehow fading, or otherwise altering post-mortem, but did indeed represent Ramses' natural hair colour. Ceccaldi also studied a cross-section of the hairs, and he determined from their oval shape that Ramesses had been "cymotrich" (wavy-haired). Finally, he stated that such a combination of features showed that Ramesses had been a "leucoderm" (white-skinned person). [Balout, et al. (1985) 254-257.]
Balout and Roubet were under no illusions as to the significance of this discovery, and they concluded as follows:
"After having achieved this immense work, an important scientific conclusion remains to be drawn: the anthropological study and the microscopic analysis of hair, carried out by four laboratories: Judiciary Medecine (Professor Ceccaldi), Société LOréal, Atomic Energy Commission, and Institut Textile de France showed that Ramses II was a leucoderm, that is a fair-skinned man, near to the Prehistoric and Antiquity Mediterraneans, or briefly, of the Berber of Africa." Balout, et al. (1985) 383.
Posted by Doug M (Member # 7650) on 22. August 2007 10:30 AM:
Click the search button at the top of the page and type in Ramses II or blonde.
Posted by Obenga (Member # 1790) on 22. August 2007 10:45 AM:
Thanx
Posted by Djehuti (Member # 6698) on 22. August 2007 12:20 PM:
quote:Originally posted by Doug M: Click the search button at the top of the page and type in Ramses II or blonde.
Yes, please! There are like 6 or 7 topics of that title already! Thank God for the search engine.
Posted by Punos_Rey (Member # 21929) on 16. February 2015 12:11 AM:
Sorry to necro such an old thread guys, but I'd rather do that than create yet another Ramses II thread. Have there been any new studies recently on the mummy of Ramses II? I'm just really curious to know what new information has been found about him, especially genetics-wise or involving his hair color.
(I had sworn off this forum after my last few run-ins with Tukuler, but this topic has grabbed hold of my interest so I had to ask the members here.)
Posted by the lioness, (Member # 17353) on 16. February 2015 12:26 AM:
oh snap
Posted by ausar (Member # 1797) on 16. February 2015 10:29 AM:
Even people who make no informative contributions are entitled to their opinions and are welcome to bump threads of their interest and ask questions already beaten to death.
quote:Originally posted by ausar: Even people who make no informative contributions are entitled to their opinions and are welcome to bump threads of their interest and ask questions already beaten to death.
- Tukuler al~Takruri, the ardo -
A recap is shown below for the new readers. But does anyone have any info on the Hebrews copying or using Egyptian hair styles? Ran across this blurb the other day:
Pirqe R. El. 46 refers to the earrings worn by male Israelites "according to the fashion of the Egyptians." Joseph's elaborate hair-style was subjected to the hermeneutics of midrash and it became an Egyptian hair-style in the view of the rabbis.
--Egyptian Cultural Icons in Midrash, Volume 23 By Rivka Ulmer
The book then goes on to detail Rabbinic commentary on how Joseph copied aspects of Egyptian hair styles and other fashions. Any details on the Hebrews and hair? What about the Egyptians copying Nubian mercenary hairstyles?
------------------------------------------------------------------
RECAP - THE HAIR THING
Ancient Egyptian hair
Across the web assorted "biodiversity" proponents, wage a 'racial war' using hair studies of ancient Egyptians to prove a "Caucasian Egypt". But in fact the hair of Africans is highly variable, debunking their simplistic claims.
The hair of Africans is highly variable, ranging from tight curls of South African Bantu, to the loose curls and straight hair of peoples of East and NE Africa, all indigenously evolved over millennia as part of Africas high genetic diversity. This diversity undermines and ultimately dismisses simplistic "racial" claims based on hair.
Inconsistencies of the skewed "true negro" model and definitions of African hair
Dubious assertions, double standards and outmoded racial hair claims: Czech anthropologist Strouhal's 1971 study touched on hair, and advanced the most extreme racial definitions, claiming Nubians to be white Europids overrun by later waves of Negroes, and that few Negroes appeared in Egypt until the New Kingdom. Indeed, Strouhal went so far as to argue that 'Negroes' failed to survive long in Egypt, because they were ill-adapted to its arid climate! Tell that to the Saharans, Sudanese and Nubians! Such dubious claims have been thoroughly debunked by modern scholarship, however they continue in various guises by those who attempt to use "hair" to assign race 'percents' and categories to the ancients. Attempts to define racial categories based on the ancient hair rely heavily on extreme definitions, with "Negroids" typically being defined as narrowly as possible. Everything not meeting the extreme "type" is then classified as something else, such as "Caucasian".
Keita (1990, Studies of Crania from Northern Africa) notes that while many scholars in the field have used an extreme "true negro" definition for African peoples, few have attempted to apply the same model in reverse and define a "true white." Such racial double standards are typical of much scholarship on the ancient Nile Valley peoples. A consistent approach for example would define the straight hair in Strouhal's hair sample as an exclusive Caucasian marker (10 out of 49 or approximately 20%) and make the rest (wavy and curled) hybrid or negro, at >80%. Assorted writers who support the Aryan race percent model however, are careful to avoid such consistency and typically only run the comparison one way.
QUOTE: "Strouhal (1971) microscopically examined some hair which had been preserved on a Badarian skull. The analysis was interpreted as suggesting a stereotypical tropical African-European hybrid (mulatto). However this hair is grossly no different from that of Fulani, some Kanuri, or Somali and does not require a gene flow explanation any more than curly hair in Greece necessarily does. Extremely "wooly" hair is not the only kind native to tropical Africa.." (S. O. Y. Keita. (1993). "Studies and Comments on Ancient Egyptian Biological Relationships," History in Africa 20 (1993) 129-54)
Disturbing attempts to use hair to prove race theories: Fletcher (2002) in Egyptian Hair and Wigs, gives an example of what she calls "disturbing attempts to use hair to prove assumptions of race and gender" involving 1800s European researcher F. Petrie, who sometimes sought to use excavation reports to prove his theories of Aegean settlers flowing into Egypt. Such disturbing attempts continue today in the use of hair for race category or percentage claims involving the ancient peoples, such as the "racial" analysis seen on several Internet blogs and websites, some thinly disguised fronts for neo-nazi groups or sympathizers.
Hair study applied a stereotyped "true negro" model and used late period samples of Egypt, after the coming of Greeks, Hyksos, etc., as "representative", excluding the previous 2500 years of ancient civilization. A study of the hair of Egyptian mummies by Czech anthropologists Titlbachova and Titlbach (1977) (reported in Strouhal 1977) using only late period samples found a wide range of hair in mummies. Of the 14 samples, only 4 were from the south of Egypt, and none of the 14 samples were earlier than the 18th Dynasty. Essentially the previous 2,000+ years of Egyptian civilization and peopling are not represented. Only the narrowest definition is used to identify 'true negro' types. All other intermediate types were deemed 'non-negroid.' If a similar procedure is used in reverse and designates only straight hair as a marker of a European, then only 4 out of 14 or 29% of the samples can be deemed "Caucasoid." Below is a breakdown of the Czech data:
Using modern technology, the same Aryan Race models are undercut, with the data actually showing that Egyptians group closer to Africans than to vaunted white Nordics.
[1]"Nordic hair measurements"[/i]
Neo-Nazis and sympathizers tout the work of German researcher Pruner-Bey in the 1800s, which derived racial indexes of hair for Negroes, Egyptians and Germans. Germanic hair is closer to that of the Egyptians, they assert. But is it as they claim?
Using hair for race identification, as older research does, can be shaky, but even when used, it undercuts Aryan claims as shown above.
Fletcher 2002 decries "disturbing attempts to use hair to prove assumptions of race and gender."
Environmental factors can influence hair color, and the Egyptians routinely placed hair from different sources in mummy wrappings, making claims of "Nordic-haired" or "white" Egyptians dubious.
Mummification practices and dyeing of hair. Hair studies of mummies note that color is often influenced by environmental factors at burial sites. Brothwell and Spearman (1963) point out that the reddish-brown color of ancient hair is usually the result of partial oxidation of the melanin pigment. Other causes of hair color "blonding" involve bleaching, caused by the alkali in the mummification process. Color also varies due to the Egyptian practice of dyeing hair with henna. Other samples show individuals lightening the hair using vegetable colorants. Thus variations in hair color among mummies do not necessarily suggest the presence of blond or red-haired Europeans or Near Easterners flitting about Egypt before being mummified, but the influence of environmental factors.
Egyptian practice of putting locks of hair in mummy wrappings. Racial analysis is also made problematic by the Egyptian practice of burying hair, in many "votive or funerary deposits buried separately from the body, a practice found from Predynastic to Roman times despite its frequent omission from excavation reports." (Fletcher 2002) In examining hair samples Fletcher (2004) notes that care is needed to determine what is natural scalp hair, versus hair from a wig, versus hair extensions to natural locks. Tracking the exact source of hair is also critical since the Egyptians were known to have placed locks of hair from different sources among mummy wrappings. (The Search for Nefertiti, By Joann Fletcher, HarperCollins, 2004, p. 93-94, 96)
Egyptians shaved much of their natural hair off and used wigs extensively as covering, obtaining much of the hair for wigs through trade. "Discoveries" of "Aryan" or "Nordic" hair are thus hardly 'proof' of incoming Caucasoids, but may be simply hair purchased from some source and made into a wig. This is much less dramatic than the exciting picture of inflowing 'Aryan' hordes.
The ancient Egyptians shaved off much of their own natural hair as a matter of personal hygiene and custom, and wore wigs in public. According to the Encyclopedia of Body Adornment (Margo DeMello, 2007, Greenwood Publishing Group, p. 101), "Boys and girls until puberty wore their hair shaved except for a side lock left on the side of their head. Many adults- both men and women- also shaved their hair as a way of coping with heat and lice. However, adults did not go about bald, and instead wore wigs in public and in private.. Wigs were initially worn by the elites, but later worn by women of all classes.."
The widespread use of wigs in ancient Egypt thus complicates and contradicts attempts at 'racial' analysis. Fletcher (2002) shows that many Egyptian wigs have been found with what is defined as straighter 'cynotrichous' hair. This however is hardly a marker of massive European or Near Eastern presence or admixture. Fletcher notes that the Egyptians often eschewed their own personal hair, shaving carefully and using wigs widely. The hair for these wigs was often obtained through trade. Indeed, "hair itself being a valuable commodity ranked alongside gold and incense in account lists from the town of Kahun."
Egyptian trading links with other regions are well known, and a commodity like straighter 'cynotrichous' hair could have been easily obtained via the Sahara, Levant, the Maghreb, Mediterranean contacts, or even the hair of Asiatic war captives or casualties from Egypt's numerous conflicts. There is little need to postulate mass influxes of European admixtures or populations to account for hair types in wigs. The limb proportion studies of the ancient Egyptians, showing them to be much more related to tropical types than to Europids, are further demonstration of the fallacy of using hair as 'proof' of an 'Aryan' or predominantly European-admixed Egypt.
Nubian wigs and wigs in Egypt
Such exchanges or use of hair appear elsewhere in the Nile valley. Tomb finds show Nubians themselves wearing wigs of straight hair. But one Nubian from the Royal valley, of the 12th century, named Maherpra, was found to be wearing a wig himself, made up of tightly curled 'negroid' hair, on top of his natural covering (Fletcher 2002). The so-called "Nubian wig" also appears in Egyptian art relief's depicting daily life, a stylistic arrangement thought to imitate those found in southern Egypt or Nubia. Such wigs appear to have been popular with both Egyptians and Nubians. Fletcher 2004 notes that the famous queen Nefertiti made frequent use of the Nubian wig: "Nefertiti and her daughter seem to have set a trend for wearing the Nubian wig.. a coiffure first worn by Nubian mercenaries and clearly associated with the military." A detail of a wall scene in Theban tomb TT.55 shows the queen wearing the Nubian wig. Infantrymen from the Nubia. Note both bow and battle-axe carried into combat.
Hair studies of Nubians have also been undertaken. One study at Semna, in Nubia (Daniel Hrdy, 1978, Analysis of Hair Samples of Mummies from Semna South, American Journal of Physical Anthropology 49: 277-262), found curling patterns intermediate between Northwest European and African samples. The X-group, especially males, showed more African elements than the Meroitic in the curling variables. Crimping and curvature data patterned in a northwest Europe direction. These data plots however do not necessarily indicate race admixture or percentages, or the presence of European migrants or colonists (see Keita 2005 below), but rather a data pattern of variation in how hair curls, and native African diversity which causes substantial overlap with non-African groups. This is a routine occurrence within human groups.
Africa has the highest phenotypic variation, just as it has the highest genetic variation, accommodating a wide range of features for its peoples without the need for any "race mix". Relethford (2001) shows that ".. methods for estimating regional diversity show sub-Saharan Africa to have the highest levels of phenotypic variation, consistent with many genetic studies." (Relethford, John "Global Analysis of Regional Differences in Craniometric Diversity and Population Substructure". Human Biology - Volume 73, Number 5, October 2001, pp. 629-636) Hanihara 2003 notes that [significant] "..intraregional diversity are present in Subsaharan Africans.." While ancient Egypt had gene flow in various eras, hair variations easily fall under this pattern of built-in, indigenous diversity, as well as the above-noted cultural practice of using wigs with hair from different places obtained through trade.
Among Europeans, for example, some people have curlier hair and some have straighter hair than others. Various peoples of East and West Africa also have narrow noses, which differ from those of other peoples elsewhere in Africa; nevertheless they still remain Africans. DNA studies also note greater variation within selected populations than without. Since Africa has the highest genetic diversity in the world, such routine variation in characteristics such as hair need not indicate any racial percentage or admixture, but is simply part of the built-in genetic diversity of the ancient peoples of the continent. Indeed, the Semna study author notes that blondism, especially in young children, is common in many dark-haired populations (e.g., Australian, Melanesian), and is still found in some Nubian villages. As regards hair color variation, reddish hair is associated with the presence of pheomelanin, which can also be found in persons with dark brown or even black hair. See "Rameses" below. Albinism is another source of red hair.
Dubious attempts at 'racial analysis' using Nubian hair and crania. Assorted supporters of the stereotypical Aryan 'race' model attempt to use hair to argue for a predominantly 'white' Nubia. But as noted above, such attempts are dubious given built-in African genetic diversity. 'Racial' hair claims are often linked with cranial studies purporting to match ancient Nubians with Swedes, Frenchmen, etc. But such claims are also dubious. In a detailed analysis of the Fordisc computer program used to put forward such claims, Williams, Armelagos, et al. (2005) found that the program created ludicrous "matches" between the ancient Nubian crania and peoples from Hungary, Japan, Easter Island and a host of others in far-flung regions! Their conclusion was that the diversity of human populations in the databank explained such wide-ranging matches. Such objective mainstream analyses debunk obsolete and improbable claims of 'racial' migrations of alleged Frenchmen, Hungarians, or other whites into ancient Nubia, or equally improbable racial 'percentages' supposedly quantifying such claims. (Frank L'Engle Williams, Robert L. Belcher, and George J. Armelagos, "Forensic Misclassification of Ancient Nubian Crania: Implications for Assumptions about Human Variation," Current Anthropology, volume 46 (2005), pages 340-346)
The alleged massive influx of Europeans and Middle Easterners to give the ancient peoples hair variation did not happen. Such variation was already in place as part of Africa's built-in genetic and phenotypic diversity. As regards diameter, the average diameter of the Semna sample was close to both the Northwest European and East African samples. This again suggests a range of built-in African indigenous variability, and calls into question various migration theories to the Nile Valley. One study, for example (Keita 2005), tested the model of C. Loring Brace (1993) as to the notion of incoming European migrants replacing indigenous peoples of the Nile Valley. Brace's work had also suggested a relationship between northwest Europeans such as Scandinavians and African peoples of the Horn. Data analysis failed to support this model, instead clustering samples much closer to African series than to Europeans. Keita concluded that similarities between African data in his survey (skulls, etc.) and non-Africans were not due to gene flow, but a subset of built-in African variability.
Ancient Egyptians cluster much closer to other Egyptians and Nubians. A later study by Brace (Brace 2005, "The questionable contribution..") groups ancient Egyptian populations like the Naqada closer to Nubians and Somalis than to European, Mediterranean or Middle Eastern populations, and places various Nubian samples closer to Tanzanian, Dahomeian, and Congoid data points than to Europeans and Middle Easterners. The limb proportion studies of Zakrzewski (2003) (Zakrzewski, S.R. (2003). "Variation in ancient Egyptian stature and body proportions". American Journal of Physical Anthropology 121 (3): 219-229.), showing the tropical body plan of the ancient Egyptians, also undercut theories of inflowing European or Near Eastern colonists, or the 'native Europid' model of Strouhal (1971).
The yellowish-red-hair of Rameses: proof of a Nordic Egypt?
Red hair itself is within the range of African diversity or that of dark-skinned peoples. Native black Australoids, for example, routinely produce blond hair.
Detailed microscopic analysis during the 1980s (Balout 1985) identified some of the hair of Egyptian Pharaoh Rameses II as being a yellowish-red. Such a finding should not be surprising given the wide range of physical variability in Africa, the most genetically diverse region on earth, out of which flowed other population groups. Indeed, blondism and various other hair shades are not unknown in East Africa or Nubia, particularly in children, nor are such hair color variants uncommon in dark-haired or dark-skinned populations like the Australians (Hrdy 1978). Given the range of genetic variability in Africa, a red-haired Rameses is hardly unusual. Rameses' reign, in the 19th Dynasty, came over 1,500 years after the Egyptian state had been established, and after the Hyksos interlude. Such latecomers to Egypt, like the Hyksos, Assyrians, Greeks, Romans, Arabs etc., would add their own genetic strands to the nation's mix. Whatever the blend of genes that occurred with Rameses, his hair offers little supposed "proof" of a "white" or "Nordic" Egypt. If anything, X-rays of the royal mummies from earlier Dynasties by mainstream scientists show that the Egyptian pharaohs and other royals had varied 'Negroid' leanings.
Pheomelanin and Rameses, found in light- and dark-haired populations: The finding of Rameses' red hair also deserves further scrutiny. The analysis found evidence of dyeing to make the hair yellowish-red, but some elements were untouched by the dye. These elements of yellowish-red hair in Balout's study were established on the basis of the presence of pheomelanin, a red-brown polymeric pigment in the skin and hair of humans. However, pheomelanin can also be found in persons with dark brown or even black hair, where it gives the hair a reddish hue. Most natural melanins contain sulfur, which is typically associated with pheomelanin. In scientific tests of melanin, black hair contained as much as 5% sulfur, lower than the 8.8% found in Irish red hair but exceeding the 2.3% found in Scandinavian blond hair (Jolles, et al. 1996). Thus the yellowish-red hair discovered on Rameses is well within the range of human variation for dark-haired people, whatever the exact gene combination that led to the condition.
Rameses' hair was not a typical European red, but yellowish-red, within African variation. It was also not ultra-straight, further undermining claims of "Nordic" influence. Somalians and Ethiopians are SUB-SAHARANS and they routinely produce straight-haired people without the need for any "race mix" to explain why. The analysis on Rameses also did not show classic "European" red hair but hair of a light red to yellowish tinge. Black-haired or dark-skinned populations are quite capable of producing such yellowish-red color variants on their own, as can be seen in today's east and northeast Africa. Nor is such color variation unusual to Africa. Native dark-skinned populations in Australia routinely produce people with blond or reddish hair. As noted above, ultra-diverse Africa is the original source of such variation.
The analysis also found the hair to be cymotrich, or wavy, again a characteristic quite within the range of overall African or Nile valley physical and genetic diversity. A "pure" Nordic type of straight hair was thus not established for Rameses. Hence the notion of white Europeans or red-headed Caucasoids from other areas flowing into ancient Egypt to add hair variation, particularly in the early centuries of the dynastic state, is unlikely. Such flows may have occurred most heavily in the Greek and Roman eras but say nothing about the thousands of years preceding. The presence of pheomelanin conditions or other genetic combinations also explains how the different hair used in Egyptian wigs could vary in color, aside from environmental oxidation, bleaching and dyeing.
Red hair is rare worldwide, and history shows little evidence of Northern Europeans or "Nordics" sweeping into Egypt to give the natives a bit of hair coloring or variation. Most red hair is found in northern and western Europe, especially in the British Isles, and even there it appears in minor frequencies, some 4% of the population. It is unlikely such populations had any major contact or influence in the ancient Nile Valley. As noted above, red hair is comparatively rare in the world's populations, pheomelanin conditions are found in dark-haired populations, and thus red hair is well within the range of variation from the Sahara, East Africa and the Nile valley. White Aryan theories of Egypt are seen in the works of HFK Gunther (1927), Archibald Sayce (1925) and Raymond Dart (1939), and still find traction on a number of 'Aryan', neo-nazi and "race" websites and blogs which purport to show a "white Nordic Egypt" using Rameses' "red" hair as an example. Today's scientific research, however, has debunked these dubious views, showing that red hair, while not common worldwide, is a well-known variant within human populations, even those with dark hair.
Straight or curly hair is also routine among sub-Saharans like Somalians, who are firmly part of the East African populations. As regards Somalians, for example, Somali DNA overwhelmingly links much more heavily with other Africans, including Kenyans and Ethiopians (85%), than with Europeans and Middle Easterners (15%). On Y-chromosome markers (E3b1), Somali frequencies (77%) and those of other African populations dwarf the small European (5.1%) or Middle Eastern (6.3%) frequencies. "The data suggest that the male Somali population is a branch of the East African population.." (Sanchez et al. 2005, "High frequencies of Y chromosome lineages.. in Somali males")
As one mainstream researcher notes about the dubious value of "racial" hair analysis:
"The reader must assume, as apparently do the authors, that the "coarseness" or "fineness" of hair can readily distinguish races and that hair is dichotomized into these categories. Problematically, however, virtually all who have studied hair morphology in relation to race since the 1920s to the present have rejected such a characterization .. Hausman, as early as 1925, stated that it is "not possible to identify individuals from samples of their hair, basing identification upon histological similarities in the structure of scales and medullas, since these may differ in hairs from the same head or in different parts of the same hair". Rook (1975) pointed out nearly 50 years later out that "Negroid and Caucasoid hair" are "chemically indistinguishable". --Tom Mieczkowsk, T. (2000). The Further Mismeasure: The Curious Use of Racial Categorizations in the Interpretation of Hair Analyses. Intl J Drug Testing 2000;vol 2
Posted by the lioness, (Member # 17353) on 16 February 2015, 10:41 PM:
quote:Originally posted by zarahan- aka Enrique Cardova:
Somalians and Ethiopians are SUB-SAHARANS and they routinely produce straight-haired people without the need for any "race mix" to explain why.
If Africa has the highest phenotypic and genetic variation, that doesn't prove the above
or that Somalians and Ethiopians are exclusively of African or Sub Saharan African ancestry
Posted by Tukuler (Member # 19944) on 17 February 2015, 04:17 PM:
Zarahan why bother in this thread?
A new tactical troll onslaught has been launched.
You could make a brand new thread and leave the anti-Afrikan haters. Why? Obviously the reviver of the thread could've followed DougM's lead or could've bumped up one of the latest Red Ramses threads, which I can't imagine going unknown to him who could find this 8 year mouldie oldie informationless thread.
A new troll onslaught has been launched.
Posted by Tukuler (Member # 19944) on 17 February 2015, 04:32 PM:
Me, fixate on noses?
You've never seen me, so how do you know what kind of nose or hair I have?
You're obsessed with physical features and want to project that onto me.
Posted by Tukuler (Member # 19944) on 17 February 2015, 05:05 PM:
Gimme a break, kid. I am not the forum. You accused me
Why though do you, Zaharan etc., identify/fixate with these features?
There are noses, lips, and hair combos that white north&northwest Euros have that no indigenous African has, big deal.
Got anything positive or original to contribute? No, eh? That's what I thought.
DISMISSED (step aside)!!
NEXT (in line please)!!
Got no time for deniers of all the data, especially when PCA charted, that clearly shows an African vs Eurasian (or stayed in Africa vs left Africa only remnants remain) GENERAL assignment of humanity that can be displayed in both clines and clusters along the cline.
You see, it's not clines vs. clusters. Each tells its own factual suggestions. Attention must be paid to both, that is, if one really wants the 'whole' story.
Posted by Tukuler (Member # 19944) on 17 February 2015, 05:26 PM:
If you only care for tL's comments better do it by PM. This is a public forum that all are invited to post to.
Posted by Tukuler (Member # 19944) on 17 February 2015, 06:14 PM:
Yeah, OK. Have it your way.
Enough people have already tried to school you; I'm not wasting my time.
Really only chimed in to set you straight about my thought on facial features.
Again you have never read me say a word about HBD because I'm not a blog hopper. You can't even get your people straight, boy.
African populations will naturally vary from Asian populations or European populations or Oceania populations or American populations or ________.
Not only are there AIMs and other population and even sub-population genetic biological indicators; ye ole mark-one-mod-one eyeball reveals both clines and clusters, as in old-school physical anthropology and forensic human identification.
But my job is not to convince you. I leave that to the debaters. I'm into discussion with rational folk basing themselves on critical analysis of reviews, reports, books, articles, etc. not with people who deny PCAs and other reputable visuals of raw or processed data.
Posted by the lioness, (Member # 17353) on 17 February 2015, 06:55 PM:
quote:Originally posted by Dead:
quote:Originally posted by the lioness,:
quote:Originally posted by zarahan- aka Enrique Cardova:
Somalians and Ethiopians are SUB-SAHARANS and they routinely produce straight-haired people without the need for any "race mix" to explain why.
If Africa has the highest phenotypic and genetic variation, that doesn't prove the above
or that Somalians and Ethiopians are exclusively of African or Sub Saharan African ancestry
They have the highest mean phenotypic variation. But it isn't uniformly spread. Most Somalis are narrow-nosed, but like I said West Africans are only 2% leptorrhine. Instead Zaharan will ignore this and try to "pool" himself with narrow-nosed Somalis, when he looks nothing like them.
None of the "black" posters here are Somalis, so I don't get their obsession with narrow noses (or wavy hair). I have been here 5/6 years, and the same obsession continues. Go look in the other section and other posters there fixate on straight hair too. Do you have an explanation for this?
Yes I have an explanation. People of African descent feel they have a greater similarity to one another genetically, culturally, and historically than they do with people outside the continent. You have admitted to not knowing that much about genetics (probably wilful), such as SNPs, STRs, autosomal chromosomes, haplotypes etc. The "obsession" is of people of African descent trying to maintain unity and not be exploited by foreign colonization and manipulations. It is called the survival instinct, not to be dictated to by other world powers and corporate interests. It is called Pan-Africanism and it is nothing more than a desire for coalitions like the EU or NATO, etc.
Posted by Tukuler (Member # 19944) on 17 February 2015, 07:49 PM:
This is EGYPTOLOGY. You don't call me a retard just because you disagree with me -- your posts are outta here.
Go post in ANCIENT EGYPT where people are just as sophomoric as you and revel in speaking insultingly.
Now go ahead and use your juice with Sami to axe me. Go ahead.
I don't moderate this forum on behalf of Sami.
I do it for lovers of Egyptology and African Studies who want a sane adult place to discuss things. If no such folk are here, why am I wasting my time and effort on something I do not own nor make any money off of?
No matter how fair I try to be to certain members who have it in for me, they return my deference with contumely and I'm damned tired of it. YOU will be my example.
quote:Originally posted by Tukuler: This is EGYPTOLOGY. You don't call me a retard just because you disagree with me -- your posts are outta here.
If you can call me a "stupid twit" and KING a "fake ass Xian " who is "slobbering on Clyde's nuts" then why can't you be called a retard?
I'll wait.....
Who's watching the watcher?
Posted by Punos_Rey (Member # 21929) on 23 February 2015, 09:16 PM:
quote:Originally posted by Tukuler: Zarahan why bother in this thread?
A new tactical troll onslaught has been launched.
You could make a brand new thread and leave the anti-Afrikan haters. Why? Obviously the reviver of the thread could've followed DougM's lead or could've bumped up one of the latest Red Ramses threads, which I can't imagine going unknown to him who could find this 8 year mouldie oldie informationless thread.
A new troll onslaught has been launched.
I did a search, and picked a Ramses II thread at random rather than create a whole new one. I'm sorry for picking the wrong one to necro, you snaggle-toothed jackal.
Thank you for letting me know you've been watching me get dressed, hope you liked viewing my equipment, considering that you're an expert on it.
@Enrique, thanks for the follow up, I was wondering if there had been any more recent/updated studies done on the hair?
Posted by Snakepit1 (Member # 21736) on 24 February 2015, 04:20 PM:
quote:Originally posted by the lioness,:
^^^ Nice, Punos, but the artist, in my opinion, has the skin tone too light in the finished version below, assuming the ancient art is accurate
Ramesses II Reconstruction by Caroline Wilkinson 2004
That "reconstruction" is HIGHLY INACCURATE considering the fact that Ramesses II's father, Seti I is still "dark as night" --->
|
Curious if the board discussed this last year...if so could someone point me to that thread?
Thanx
Redheaded Pharaoh Ramses II 28 February 2006 Pharaoh Ramses II (of the 19th Dynasty) is generally considered to be the most powerful and influential King that ever reigned in Egypt. He is one of the few rulers who has earned the epithet "the Great". Consequently, his racial origins are of extreme interest.
In 1975, the Egyptian government allowed the French to take Ramesses' mummy to Paris for conservation work. Numerous other tests were performed, to determine Ramses' precise racial affinities, largely because the Senegalese scholar Cheikh Anta Diop was claiming at the time that Ramesses was black. Once the work had been completed, the mummy was returned in a hermetically sealed casket, and it has remained largely hidden from public view ever since, concealed in the bowels of the Cairo Museum. The results of the study were published in a lavishly illustrated work, which was edited by L. Balout, C. Roubet and C. Desroches-Noblecourt, and was titled La Momie de Ramsès II: Contribution Scientifique à l'Égyptologie (1985).
Professor P. F. Ceccaldi, with a research team behind him, studied some hairs which were removed from the mummy's scalp. Ramesses II was 90 years old when he died, and his hair had turned white. Ceccaldi determined that the reddish-yellow colour of the mummy's hair had been brought about by its being dyed with a dilute henna solution; it proved to be an example of the cosmetic attentions of the embalmers. However, traces of the hair's original colour (in youth) remain in the roots, even into advanced old age. Microscopic examinations proved that the hair roots contained traces of natural red pigments, and that therefore, during his youth, Ramses II had been red-haired.
|
yes
|
Poetry
|
Did Robert Frost's "The Road Not Taken" mean to celebrate individualism?
|
yes_statement
|
"robert" frost's "the road not taken" meant to "celebrate" "individualism".. the intention of "robert" frost's "the road not taken" was to "celebrate" "individualism".
|
https://blog.gale.com/the-road-not-taken-interpreting-frosts-autumnal-setting/
|
“The Road Not Taken”: Interpreting Frost's Autumnal Setting | Gale ...
|
“The Road Not Taken”: Interpreting Frost’s Autumnal Setting
Published in 1916, Robert Frost’s most popular poem, “The Road Not Taken” (Poetry for Students Volume 2 and Poetry for Students Volume 61), is conventionally understood to be a meditation on the choices we make when confronted with a fork in the road. Despite its popularity, the work is perhaps one of Frost’s most misread poems, acquiring an optimistic, individualistic meaning not necessarily present in the text. Is the poem really about individualism and choosing one’s own path? Or does the poem suggest the choice itself does not matter because all choices are equally valid and equally meaningless? The poem’s autumnal setting may lead to an answer.
Frost begins “The Road Not Taken” describing two roads splitting in a yellow wood, challenging the narrator with a choice of which to take. Painting the woods yellow suggests the speaker is out for an autumn stroll when confronted with this consequential—or inconsequential, depending on interpretation—decision. Additionally, Frost refers to the path as grassy, meaning winter has not yet set in. Finally, the third stanza mentions leaves lining the path.
Why does the season matter? In poetry, imagery associated with autumn is conventionally used to evoke feelings of nostalgia, decay, and death. It is a trope Frost used in another poem, “Nothing Gold Can Stay” (Poetry for Students Volume 3), where he laments the fleeting beauty of summer and the onset of autumn. Just as he does in “Nothing Gold Can Stay,” Frost employs an autumn setting in “The Road Not Taken” to evoke a mournful setting, not a cheerful one.
Frost is perhaps using autumn to conjure regrets, aging, and the grim inevitability of human mortality. Memory, longing, and a life once full of potential all come to the surface as the narrator stands at the crossroads, confronted by the weight of choices and the potential—and peril—of another.
While many read “The Road Not Taken” as a triumph of individualism and a call to pave one’s own path, the autumn setting suggests a bleaker meaning. In the third stanza, Frost describes the two paths as equal, meaning there is no right or wrong choice. The speaker will claim the choice of road has made all the difference, but in reality, it did not matter because surely something good was missed by taking one path over another. The speaker’s sigh in the final stanza is heavy, much like the misty fall air and the season’s darker days.
Choosing how to interpret “The Road Not Taken” is also an exercise in choosing between two paths. Neither meaning is correct or incorrect, but the act of making a choice reflects how a life is made, through the slow accumulation of decisions. Frost invites the reader to participate in the act of choosing when he presents us with the many ambiguities of his text.
Perhaps, rather than assigning a definitive meaning to Frost’s work, we should simply reflect on the experience of walking through the woods on an autumn day.
Perhaps, that is enough.
|
“The Road Not Taken”: Interpreting Frost’s Autumnal Setting
Published in 1916, Robert Frost’s most popular poem, “The Road Not Taken” (Poetry for Students Volume 2 and Poetry for Students Volume 61), is conventionally understood to be a meditation on the choices we make when confronted with a fork in the road. Despite its popularity, the work is perhaps one of Frost’s most misread poems, acquiring an optimistic, individualistic meaning not necessarily present in the text. Is the poem really about individualism and choosing one’s own path? Or does the poem suggest the choice itself does not matter because all choices are equally valid and equally meaningless? The poem’s autumnal setting may lead to an answer.
Frost begins “The Road Not Taken” describing two roads splitting in a yellow wood, challenging the narrator with a choice of which to take. Painting the woods yellow suggests the speaker is out for an autumn stroll when confronted with this consequential—or inconsequential, depending on interpretation—decision. Additionally, Frost refers to the path as grassy, meaning winter has not yet set in. Finally, the third stanza mentions leaves lining the path.
Why does the season matter? In poetry, imagery associated with autumn is conventionally used to evoke feelings of nostalgia, decay, and death. It is a trope Frost used in another poem, “Nothing Gold Can Stay” (Poetry for Students Volume 3), where he laments the fleeting beauty of summer and the onset of autumn. Just as he does in “Nothing Gold Can Stay,” Frost employs an autumn setting in “The Road Not Taken” to evoke a mournful setting, not a cheerful one.
Frost is perhaps using autumn to conjure regrets, aging, and the grim inevitability of human mortality. Memory, longing, and a life once full of potential all come to the surface as the narrator stands at the crossroads, confronted by the weight of choices and the potential—and peril—of another.
|
no
|
Poetry
|
Did Robert Frost's "The Road Not Taken" mean to celebrate individualism?
|
yes_statement
|
"robert" frost's "the road not taken" meant to "celebrate" "individualism".. the intention of "robert" frost's "the road not taken" was to "celebrate" "individualism".
|
http://www.literature2009.blogfa.com/post/16
|
The Road Not Taken
|
Welcome to the world of literature
“The Road Not Taken,” first published in Mountain Interval in 1916, is one of Frost’s most well-known poems, and its concluding three lines may be his most famous. Like many of Frost’s poems, “The Road Not Taken” is set in a rural natural environment which encourages the speaker toward introspection. The poem relies on a metaphor in which the journey through life is compared to a journey on a road. The speaker of the poem must choose one path instead of another. Although the paths look equally attractive, the speaker knows that his choice at this moment may have a significant influence on his future. He does make a decision, hoping that he may be able to visit this place again, yet realizing that such an opportunity is unlikely. He imagines himself in the future telling the story of his life and claiming that his decision to take the road “less traveled by,” the road few other people have taken, “has made all the difference.”
Author Biography
Born in San Francisco, Frost was eleven years old when his father died, and his family relocated to Lawrence, Massachusetts, where his paternal grandparents lived. In 1892, Frost graduated from Lawrence High School and shared valedictorian honors with Elinor White, whom he married three years later. After graduation, Frost briefly attended Dartmouth College, taught at grammar schools, worked at a mill, and served as a newspaper reporter. He published a chapbook of poems at his own expense, and contributed the poem “The Birds Do Thus” to the Independent, a New York magazine. In 1897, Frost entered Harvard University as a special student, but left before completing degree requirements because of a bout with tuberculosis and the birth of his second child. Three years later the Frosts’ eldest child died, an event which led to marital discord and which, some critics believe, Frost later addressed in his poem “Home Burial.”
In 1912, having been unable to interest American publishers in his poems, Frost moved his family to a farm in Buckinghamshire, England, where he wrote prolifically, attempting to perfect his distinct poetic voice. During this time, he met such literary figures as Ezra Pound, an American expatriate poet and champion of innovative literary approaches, and Edward Thomas, a young English poet associated with the Georgian poetry movement then popular in Great Britain. Frost soon published his first book of poetry, A Boy’s Will (1913), which received appreciative reviews. Following the success of the book, Frost relocated to Gloucestershire, England, and directed publication of a second collection, North of Boston (1914). This volume contains several of his most frequently anthologized pieces, including “Mending Wall,” “The Death of the Hired Man,” and “After Apple-Picking.” Shortly after North of Boston was published in Great Britain, the Frost family returned to the United States, settling in Franconia, New Hampshire. The American editions of Frost’s first two volumes won critical acclaim upon publication in the United States, and in 1917 Frost began his affiliations with several American universities as a professor of literature and poet-in-residence. Frost continued to write prolifically over the years and received numerous literary awards as well as honors from the United States government and American universities. He recited his work at the inauguration of President John F. Kennedy in 1961 and represented the United States on several official missions. Though he received great popular acclaim, his critical reputation waned during the latter part of his career. His final three collections received less enthusiastic reviews, yet contain several pieces acknowledged as among his greatest achievements. He died in Boston in 1963.
Poem Summary
Line 1
In this line Frost introduces the elements of his primary metaphor, the diverging roads.
Lines 2-3
Here the speaker expresses his regret at his human limitations, that he must make a choice. Yet, the choice is not easy, since “long I stood” before coming to a decision.
Lines 4-5
He examines the path as best he can, but his vision is limited because the path bends and is covered over. These lines indicate that although the speaker would like to acquire more information, he is prevented from doing so because of the nature of his environment.
Lines 6-8
In these lines, the speaker seems to indicate that the second path is a more attractive choice because no one has taken it lately. However, he seems to feel ambivalent, since he also describes the path as “just as fair” as the first rather than more fair.
Lines 9-12
Although the poet breaks the stanza after line 10, the central idea continues into the third stanza, creating a structural link between these parts of the poem. Here, the speaker states that the paths are “really about the same.” Neither path has been traveled lately. Although he’s searching for a clear logical reason to decide on one path over another, that reason is unavailable.
Lines 13-15
The speaker makes his decision, trying to persuade himself that he will eventually satisfy his desire to travel both paths, but simultaneously admitting that such a hope is unrealistic. Notice the exclamation mark after line 13; such a punctuation mark conveys excitement, but that excitement is quickly undercut by his admission in the following lines.
Lines 16-20
In this stanza, the tone clearly shifts. This is the only stanza which also begins with a new sentence, indicating a stronger break from the previous ideas. The speaker imagines himself in the future, discussing his life. What he suggests, here, though, appears to contradict what he has said earlier. At the end of the poem, in the future, he will claim that the paths were different from each other and that he courageously did not choose the conventional route. Perhaps he will actually believe this in the future; perhaps he only wishes that he could choose “the one less traveled by.”
Themes
Individualism
On the surface, “The Road Not Taken” seems to be encouraging the reader to follow the road “less travelled by” in life, a not-very-subtle metaphor for living life as a loner and choosing independence for its own sake when all other considerations come up equal. There is some evidence that makes this interpretation reasonable. The central situation is that one has to choose one road or the other without compromise — an absolutist situation that resembles the way that moral dilemmas are often phrased. Since there is really no distinction made between the roads except that one has been travelled on more than the other, that would be the only basis on which to make a choice. The tone of this poem is another indicator that an important decision is being made, with careful, deliberate concentration. Since so much is being put into the choice and the less travelled road is the one chosen, it is reasonable for the reader to assume that this is what the message is supposed to be.
The poem’s speaker, though, is not certain that individuality is the right path to take. The less travelled road is said to only “perhaps” have a better claim. Much is made about how slight the differences between the paths are (particularly in lines 9-19), and the speaker expects that when he looks back on this choice with the benefit of increased knowledge, he will sigh. If this is a testament to individuality, it is a pretty flimsy one. This speaker does not celebrate individualism, but accepts it.
Choices and Consequences
The road that forks into two different directions always presents a choice to be made, in life as well as in poetry. The speaker of this poem is not pleased about having to make this choice and says that he would like to travel both roads. This is impossible, of course, if the speaker is going to be “one traveler”: this raises the philosophical question of identity. What the poem implies, but does not state directly, is that the most important factor to consider when making a choice is that the course of action chosen should fit in with the decisions that one has made in the past. This speaker is distressed about being faced with two paths that lead in different directions because the wrong choice will lead to a lack of integrity. If there were no such thing as free will, the problem would not be about which choice to make: the decision would make itself. In the vision of another writer, this is exactly what would happen. Another writer, faced with the same two roads, would know without a second thought which one to follow. The speaker of “The Road Not Taken” is aware of the implications of choosing badly and does not see enough difference between the two roads to make one stand out as the obvious choice. But it is the nature of life that choosing cannot be avoided.
The only way to approach such a dilemma, the poem implies, is to study all of the details until something makes one direction more important than the other. The difference may be small, nearly unnoticeable, but it will be there. In this case, the speaker of the poem considers both sides carefully and is open to anything that can make a difference. From the middle of the first stanza to the end of the third, physical characteristics are examined. For the most part, the roads are found to be the same: “just as fair” in line 6; “really about the same” in line 10; “both ... equally lay” in line 11. The one difference is that one has been overgrown with grass from not being used, and, on that basis, the narrator follows it. There is no indication that this slight distinction is the sign that the speaker was looking for or that he feels that the right choice has been made. On the contrary, the speaker thinks that his choice may look like the wrong decision “ages and ages hence.” It would not be right, therefore, to say that choosing this particular road was the most important thing, but it is the fact that a choice has been made at all “that has made all the difference.”
Style
“The Road Not Taken” is arranged into four stanzas of five lines each. Its rhyme scheme is abaab, which means that the first line in each stanza rhymes with the third and fourth lines, while the second line rhymes with the fifth line.
Most of the lines are written in a loose or interrupted iambic meter. An iambic foot contains two syllables, an unstressed one followed by a stressed one. Because most of the lines contain nine syllables, however, the poem cannot be strictly iambic. Often, the extra syllable will be unstressed and will occur near the caesura, or pause, within the line. The meter can be diagrammed as follows (with the caesura marked //):
Then took / the other, // as just / as fair,
Historical Context
The War: The symbolism of the two roads in this poem can be applied to any number of circumstances in life, and therefore we cannot identify any one particular meaning as the one that Frost had in mind. It is interesting to note, though, that in 1916, when it was written, changes of great importance were occurring, both in the author’s life and in the social order of the entire Western world. There are many ways in which the sort of choice presented in this poem would have had meaning for Frost and also for his audience.
The Industrial Revolution in the late 1800s brought about advances in travel and communications that led to advances in international commerce. It became difficult for any country, especially growing economic powers like the United States and Japan, to stay uninvolved. The American public wanted to stay out of the conflict we have come to call World War I: as late as 1916, President Woodrow Wilson won reelection with the campaign slogan “He Kept Us Out Of War.” It was not until the year after this poem was published that pressure to protect trading interests forced America to join the battle.
Early in his career as a poet, from 1912 to 1915, Frost and his family lived in England. When they moved back home, England was already involved in the war. The central question of “The Road Not Taken” reflects the positions taken by the two countries Frost had lived in: Britain joined other countries in the fight, while America struggled to remain isolated. Each side had a good case to make for its own position. Britain, as part of Europe, had been involved in various wars on the continent for centuries, as well as wars in Africa, India, Australia and North America, in defense of British colonies. Through the years, various treaties and alliances helped to end old wars, but as a result of them, Britain had to participate in new wars, even ones that did not directly threaten English land. On the European continent, with so many small countries squeezed in closely together, this sort of cooperation was taken for granted. In the early part of this century, Britain was allied with the Triple Entente, a cooperative defense agreement with France and Russia. When war broke out in 1914 following the assassination of Archduke Franz Ferdinand of Austria, one country after another joined the fighting. Britain held out just a few months after the assassination, but eventually joined too.
Urbanization: The relationship between the individual and society is at the center of “The Road Not Taken.” The poem raises questions about whether one should do things to be part of the majority or follow the “grassy” untraveled path. In 1916 this question was particularly open to debate, due to the growing impersonal control of industrialization. Industrialization was the dominant social force in the last half of the nineteenth century. The Civil War, for example, fought from 1861 to 1865, is generally remembered as a struggle for civil rights, but most historians believe that the reason the two sides had such different views of slavery was a result of each area’s different economic base. The South, the Confederacy, was basically agricultural, with huge plantations that were tended by slave labor, while the Union had manufacturing and some small farms that could be tended by hired hands. The Union’s victory was a huge step in the global movement toward industrialization. As factories went up, families came to cities to obtain jobs in them, and immigrants came from other countries for the same reason. The new city dwellers were not self-reliant, as they had been in the country, but were now cogs in the wheels of a vast machine. The living quarters that cities constructed to house these new workers were cramped together on top of one another — an especially frustrating situation for people who had come from open land. By 1916, artists and philosophers were questioning the depersonalizing effects of urban life and were worried that it had changed the nature of human thought. Frost lived most of his life on farms and in small towns and avoided city life. Although it is not social in content, this poem raises questions about independence and individuality.
In 1916 the growth of Communism in Russia produced a rising feeling of hope that the laborer could win back control of his own life. The stated goal of Communism was to let all workers benefit equally from production by having the government collect profits and redistribute them. The year 1917 marked a high point for those who believed in Communism: the Russian czarist government was overthrown and the new government made Communism a practice, not just a theory. All over the world, the intellectuals who were familiar with the economic principles of Karl Marx, the economist who provided the basis for Communism, believed that a better way of life would finally prevail. Those concerned about how the spread of Communism would affect individuals predicted two very different results. On the one hand, equal distribution of wealth could mean that no one person would be mistreated while someone with more money was treated well. On the other hand, this kind of equality discouraged individual personal achievement. Personal achievement and self-reliance are considered Yankee character traits: Yankees are the people of the New England area of the country, where Frost had lived since he was ten years old.
Compare & Contrast
1916: An act of Congress created the National Park Service to preserve millions of acres of forest land for the enjoyment of future generations.
Today: Many older United States cities are losing residents as corporations and their employees move to outlying areas. While this decentralization gives the earliest relocated residents an opportunity to experience the forest land that is missing in urban environments, the continual expansion, referred to as “urban sprawl,” is destroying woodlands at an unprecedented pace.
1916: Albert Einstein published his general theory of relativity. Science soon accepted it as a more accurate way to measure the effects of gravity than Newton’s laws, even though, as the name suggests, measurement depended upon relative circumstances and not absolute knowledge.
1938: Using principles derived from Einstein’s theory, the uranium atom was split, opening the way for nuclear power and nuclear weapons.
1945: Two nuclear warheads were dropped on Japanese cities to end World War II.
1963: In a showdown over whether the Soviet Union would be allowed to place missiles in Cuba, the world’s two superpowers came closer than ever to launching a nuclear war.
Today: Well aware of the consequences that follow choice, no country has yet used nuclear force in a war since the year the atomic bomb was first invented.
1916: “I believe that the business of neutrality is over,” President Woodrow Wilson said in October. “The nature of modern war leaves no state untouched.” The next year America entered World War I.
1941: Although many European nations were already involved in World War II, America did not become involved until American property at Pearl Harbor was directly attacked.
1991: Over 60 nations from around the world gathered together to oppose Iraq’s invasion of Kuwait, a key supplier of the world’s petroleum.
Today: Various regional conflicts around the world are bringing multinational peacekeeping forces together to help stabilize the situations.
1916: The first radio news broadcast was made.
Today: Radio, television, and the internet have channels devoted to narrow subjects of interest.
1916: The first supermarket chain, Piggly Wiggly, was begun in Memphis, Tennessee.
Today: Franchising has made identical versions of chain stores and restaurants familiar in every small town across the country and in most countries.
Critical Overview
Although critics tend to agree about the thematic concerns of “The Road Not Taken,” they are less consistent in evaluating its success. John T. Ogilvie, in an article published in South Atlantic Quarterly, suggests that the road is a metaphor for the writerly life, and that the choice the speaker makes here “leads deeper into the wood” which “though they [the woods] hold a salutary privacy, impose a stern isolation, an isolation endured not without cost.” Roy Harvey Pearce, in his The Continuity of American Poetry, agrees that this poem illustrates Frost’s tendency to write about “moments of pure, unmediated realization” which are “by definition private.” The speaker is able to achieve insight, but only through solitude and separation from others.
Isadore Traschen, however, in an article in The Yale Review, critiques the poem (and its admirers) quite harshly, accusing Frost of unrestrained sentimentality. In this poem, she suggests that “Frost acknowledges that life has limits ..., yet he indulges himself in the sentimental notion that we could be really different from what we have become. He treats this romantic cliché on the level of the cliché; hence the appeal of the poem for many.” Traschen is arguing here that the common reader is attracted to this poem because its ideas are already so familiar and because many people prefer romantic ideas to realistic ones. Yvor Winters, writing in The Function of Criticism: Problems and Exercises, refers to the speaker as a “spiritual drifter.” Although Winters acknowledges that the poem has some positive qualities, he faults the poem because he believes that Frost was inadequate to his task: “Had Frost been a more intelligent man, he might have seen that the plight of the spiritual drifter was not inevitable, he might have judged it in the light of a more comprehensive wisdom. Had he done this, he might have written a greater poem. But his poem is good as far as it goes; the trouble is that it does not go far enough, it is incomplete, and it puts on the reader a burden of critical intelligence which ought to be borne by the poet.”
Criticism
David Kelly
David Kelly is a freelance writer and instructor at Oakton Community College and College of Lake County, as well as the faculty advisor and co-founder of the creative writing periodical of Oakton Community College. He is currently writing a novel. In the following essay, Kelly argues that Frost’s reputation for aptly rendering the common man and his overall skill in manipulating language has fostered misinterpretation of the ironic tone of “The Road Not Taken.”
Irony is frequently used in literature to make a point indirectly, by presenting an apparent meaning that is the opposite of the actual meaning of a piece. Writers find this method most productive in provoking a reader to think. To say “Nice day out” on a sunny day, for example, is merely stating the obvious, while the same words spoken on a cloudy day can make one take a second look out of the window. This distinction between the actual situation and the way it is presented in words reveals more truths than a simple, direct account: in a sense, the reader is invited for a glimpse backstage in the writer’s art, to examine the artifice that most writers try to hide or deny. Robert Frost was so comfortable with words, so masterful at creating a convincing reality, and so skilled at presenting situations that normally went unnoticed that he was ineffective at making his point ironically when he tried in “The Road Not Taken”; readers tend to accept whatever Frost wrote as coming from the same sincere inspiration that guided most of his work. In “The Road Not Taken,” Frost tried his hand at dressing weak, vain thought in the garb of nobility, and instead of the joke he had intended, he wound up with a source of inspiration.
Biographical accounts make it clear that Frost did not intend the message of this poem to be taken at face value. His biographer, Laurence Thompson, explained in Robert Frost: The Years of Triumph 1915-1938, that the poet wrote “The Road Not Taken” as a satire of his friend Edward Thomas. Frost was amused by Thomas’ indecisiveness, by the way he would dither over decisions, unable to make up his mind. The inability of the poem’s speaker to settle comfortably upon a course of action and to follow it without looking back “with a sigh” was, to Frost, a clear indicator that this poem testifies to ideas that belong more to Thomas than to the author. The satire was not so clear when he sent a copy of the poem to Thomas, though. In the end, Frost had to explain to his friend that he was the subject of the poem. The simple conclusion and inappropriately convoluted thought process that leads to it, which Frost assumed would make the humor of his piece obvious, are handled with such gentle subtlety and grace that the final product rings more of truth than of jest.
The strength of a poem should come from the truth that it tells, regardless of the circumstances from which it grew. In considering “The Road Not Taken,” we have to ask whether readers who are unaware of Thomas’ personality or Frost’s intentions can be expected to recognize irony independently, from the work alone. Contradictions abound. The two roads are described as being “just as fair,” but the very next line says that one has “a better claim”; the speaker says he “kept the first for another day,” but immediately says he would probably never come back to it. But the contradictions of this world, and especially of human perception, are the business of serious poetry and not necessarily indicators that the poet who points them out is being insincere about his beliefs. In order to tip the general reader off to his intent, to let us all in on his joke, Frost’s premise would have to be so weak or lame that anyone would know not to take it seriously.
Since its initial publication in 1916, “The Road Not Taken” has endeared itself to generations of readers as a testimony to independent thought and to the courage of the individual who leaves the safety of the crowd and strikes out into the unknown. To an extent, this truly is a noble gesture. It is intrinsic to the American tradition, to the pioneers who blazed uncharted paths and the entrepreneurs who invested in an unprovable future. But the presentation of the poem does not bestow such heroic status on the less-travelled road. To those readers who are already inclined toward romantic fondness for the rugged loner, the phrase “all the difference” in the last line is a bold trumpet blast — an announcement that an action of lasting significance has been taken. The poem itself says only, though, that the speaker has reluctantly made a decision. In the absence of any clue about the outcome, “all the difference” simply indicates that the speaker feels his choice to turn right or turn left was monumental. Even a reader unfamiliar with Edward Thomas’ indecisiveness can sense a speaker who is too nervous about taking a step forward and too delighted (but still uncertain) once he or she has acted.
In attempting to portray this comic personality whose emotions are inappropriately hesitant and then suddenly triumphant, Frost’s weakness apparently was that he was too good of a poet. The use of language in this poem is too adroit, too glowing with warm melancholy, to signal to any but the most sensitive reader that the speaker is supposed to be unduly timid. A phrase such as “sorry I could not travel both / and be one traveler” does pose a childishly obvious puzzle, as poetry will often do, but it is worded so uniquely and cleanly that a reader is inclined to give it more respect than a parody deserves. Also, Frost’s central subject of the individual choosing his or her own course in the absence of physical or rational clues is too close to the backbone of religious faith (itself a subject of serious poetry), for most readers to suspect Frost of duplicity. Finally, there is Frost’s reputation as a poet. His humanity was celebrated, not just by the public and the critics, but by the toughest judges: other poets. Carl van Doren wrote that Frost “felt, indeed, the pathos of deserted farms, the tragedy of dwindling townships, the horrors of loneliness pressing upon silent lives”; Randall Jarrell said that “no other poet has written so well about the actions of ordinary men”; Amy Lowell, reviewing Frost’s book that immediately preceded the publication of “The Road Not Taken,” noted that “Mr. Frost’s work is not in the least objective.”
“... Frost’s poem is more intricate and more complex than the popular understanding of it would indicate: it is also better, more subtle, more perceptive, more analytical, more deeply concerned with human motivation.”
Why would any reader suspect this author of standing apart from his speaker, of passing judgement? If readers do not see that this portrayal was meant to be unflattering (though good-natured), at least part of the reason must be the other cases in which we have been patient with Frost’s characters and in the end seen that their apparent simplicity is not really so simple at all. For Frost to expect readers to “get” that this indecisive character’s dilemma actually is as simple as it seems shows a touch of naivete on the author’s part.
Poems often are said to take on lives of their own, apart from the intentions of their authors, but usually when an author loses control of his or her work the result is more simplistic and more basic than intended. Writers who relinquish control, who write “as” someone else rather than addressing their audiences directly, often end up being taken less seriously than they had hoped. That “The Road Not Taken” reverses this situation is a testimony to the depth of Robert Frost’s humanity: he was incapable of achieving the shallowness and distance from his subject that humor needs. Subjects such as indeci-siveness or the romanticizing of independence are surely ripe for parody, but they are also appropriate for serious inspection, and Frost’s poem examines them with more wisdom than most serious writers achieve. It is no discredit to the poet to say that he had the Midas touch, that he could not help creating beauty when homeliness might have been more fitting, but because his intent is lost on most readers, there is a narrow way in which we can see “The Road Not Taken” as a failure.
|
Biographical accounts make it clear that Frost did not intend the message of this poem to be taken at face value. His biographer, Laurence Thompson, explained in Robert Frost: The Years of Triumph 1915-1938, that the poet wrote “The Road Not Taken” as a satire of his friend Edward Thomas. Frost was amused by Thomas’ indecisiveness, by the way he would dither over decisions, unable to make up his mind. The inability of the poem’s speaker to settle comfortably upon a course of action and to follow it without looking back “with a sigh” was, to Frost, a clear indicator that this poem testifies to ideas that belong more to Thomas than to the author. The satire was not so clear when he sent a copy of the poem to Thomas, though. In the end, Frost had to explain to his friend that he was the subject of the poem. The simple conclusion and inappropriately convoluted thought process that leads to it, which Frost assumed would make the humor of his piece obvious, are handled with such gentle subtlety and grace that the final product rings more of truth than of jest.
The strength of a poem should come from the truth that it tells, regardless of the circumstances from which it grew. In considering “The Road Not Taken,” we have to ask whether readers who are unaware of Thomas’ personality or Frost’s intentions can be expected to recognize irony independently, from the work alone. Contradictions abound. The two roads are described as being “just as fair,” but the very next line says that one has “a better claim”; the speaker says he “kept the first for another day,” but immediately says he would probably never come back to it. But the contradictions of this world, and especially of human perception, are the business of serious poetry and not necessarily indicators that the poet who points them out is being insincere about his beliefs. In order to tip the general reader off to his intent, to let us all in on his joke, Frost’s premise would have to be so weak or lame that anyone would know not to take it seriously.
|
no
|
Poetry
|
Did Robert Frost's "The Road Not Taken" mean to celebrate individualism?
|
yes_statement
|
"robert" frost's "the road not taken" meant to "celebrate" "individualism".. the intention of "robert" frost's "the road not taken" was to "celebrate" "individualism".
|
https://www.paperdue.com/topic/individualism-essays
|
Individualism Essays: Examples, Topics, Titles, & Outlines
|
Individualism Essays (Examples)
According to Burge, if Bert would speak of arthritis in the thigh he would, in this case, express a true belief, because the term itself would be used in his society to express inflammations in the thigh and in the joints.
The social interpretation described by Burge is meant to explain terms that have a certain perception in a certain society. We would be inclined to believe that a tribal organization in Africa may refer to arthritis as a disease describing pains in the chest and that the term would have this connotation in that respective society. A member of that society would refer to his chest pains as arthritis and would express a true belief, according to the social theory.
On the other hand, it seems legitimate to ask ourselves whether the social and societal explanation may turn away from an absolute truth, in the sense of an absolute…
Her husband ignores her and as she becomes increasingly aware of the wallpaper, she is slowly losing herself. Her worst obstacle is not her illness but her husband and this is the reality that Perkins-Gilman establishes. The conclusion of the story brings us to the realization that the narrator will suffer because she is a woman and she finally loses the battle when she confesses that she has "got out at last" (773). This story encapsulates the fundamentals of Realism and Naturalism because the narrator's experience represents a true account of what American women endured in the nineteenth century.
In "The Luck of Roaring Camp," we see realistic character sketches emerge. Harte also provides readers with a realistic vision of the local community of Roaring Camp. e can literally see the gold-seekers. The men of the camp are described as "One or two of these were actual fugitives from justice,…
Works Cited
Crane, Stephen. Maggie, a Girl of the Streets. New York: Random House. 2001.
Chopin, Kate. The Awakening and Other Stories. New York: Bantam Books. 1988.
Bellah sees this as dangerous and particularly dangerous is the faith of 'Sheila-ism,' the idea that a society can survive so long as everyone has his or her own personal moral code. Social commitment is portrayed as the lifeblood of society, yet all too often the pressures to 'make it' in America mean that people must take time away from volunteerism and spend more time at work. Despite high levels of church attendance, individual responsibilities and intimate relationships define Americans' sense of identity (Bellah et al. 250). The self is orchestrated as a personal, rather than a social, matter.
However, while it is difficult to argue that America, as a young nation, has had to work harder to construct binding ties of communal self-interest, the authors do not really provide a clear definition as to what that commonality should be. In the new, diverse America, religion as a social 'glue'…
Work Cited
Bellah, R. (et al.). Habits of the Heart. University of California Press, 1996.
In her discourse, "The Treasure of the City of Ladies," De Pizan contemplated how human society had developed the psyche and perception that females are inherently inferior to males. This issue was borne out of the author's observation of how literary and scholarly works portray a common stereotype of women as subservient to men, depicted as uneducated and unable to make decisions for themselves. In the words of Pizan, "learned men" tend to depict women through "wicked insults" about their behavior. This drove her to investigate and know the origin of this perception and wrong portrayal of women in Western societies.
Through the help of the different "Ladies" in her discourse, Pizan was able to trace the wrongful creation and institutionalization of women as less capable of creating and expressing sensible thoughts about relevant and significant issues and concerns in their society. One of the early arguments presented in her analysis…
Politics
Thomas More wrote Utopia in 1515 and in the story this place of "utopia" is told to him by a friend who encounters it upon his travels. Utopia is described by Giles, More's friend, as a place where there isn't any social unrest and suffering is nowhere to be found. More seems to have written Utopia with the idea of individual freedom in mind; however, there are some problems with the Utopia that More deemed perfect. First of all, if More wrote Utopia with individual freedom in mind, then why are individual activities actually discouraged and the communal life favored in their place? Individualism doesn't seem to be encouraged in More's utopian society as what is held as virtuous is supporting society as a whole and not straying from the values and mores that have been laid out for that society.
Alternative to Methodological Individualism
In this report, I shall attempt to identify, compare and contrast the comprehensive models of the economic systems focusing on the Methodological Individualism and the Classical Economists approaches. The objective will be to identify how these two philosophies have basic assumptions about human nature, technology and social institutions. In addition, the report will point out that these philosophies may have some inherent problems. The assumptions made by the Methodological Individualism thinking and the Classical Economists will provide an excellent opportunity to distinguish between how our economy literally works and how it theoretically works, and to determine whether those are technically one and the same.
"Methodological Individualism is a philosophical system that privileges the Individual as sovereign" (Methodological Individualist). Methodological Individualism economists like Friedrich August von Hayek feel that our economy can be explained by demonstrating that it is simply an outcome of society's combined individual behaviors. The Methodological…
During that time, the American middle class was threatened, and the reality is that many Americans lost their status as middle class during the economic crises. People literally died of starvation, and the economic markets that had helped create the middle class, once destabilized, helped usher in a greater divide between rich and poor, since only those with the most assets were able to weather the Depression with economic wherewithal. However, context remains important, because the fact is that America did recover from the Depression, and the living standards of the middle class continued to rise after its recovery, and have consistently done so, notwithstanding less substantial economic recessions. As a result, America has become a country associated with vast wealth, not because of the tremendous wealth held by the top 1% of its inhabitants, but because of the incredible wealth held by all but its poorest inhabitants, which dwarfs…
Apart from literary arts, individualism is also most evident in the field of education. The development of educational institutions, spearheaded by the Florentine Academy, an informal organization of humanists, helped celebrate human reason in combination with mathematical and moral truths. The conceptualization of an educational institution as the formal venue for human reasoning and thought to be cultivated began with Plato's concept of the Academy. As Renaissance thinkers and humanists began using Greek studies as the foundation for European culture and society's rebirth, informal educational institutions such as the Florentine Platonic Academy and religious schools were established to harness the humanists' skills in critical thinking and further explore ways in which people can best express their individuality (363).
Objectivity is the result of the birth of individualism during the Renaissance period. As European society learned to cultivate and give importance to their ability to reason and think critically, objectivity began…
Ernest Hemingway on individualism and self-realization. Specifically, it will discuss several sources, and incorporate information from at least one Roberts and Jacobs short story, poem, or play. Ernest Hemingway imbues his characters with some of his own rugged individualism and search for meaning in life. Many other authors incorporate this theme in their works, because it seems to touch a chord in many readers, who also hope to learn more about themselves as they read and evaluate great fiction.
INDIVIDUALISM AND SELF-REALIZATION
Ernest Hemingway often portrayed a bit of himself in his works, because many of his protagonists were rugged individualists who searched for meaning in their lives and in the world around them, just as Krebs does in "Soldier's Home." Unfortunately, many of Hemingway's characters never find the comfort of self-realization, and so they are empty characters that never really find themselves. This self-realization process is also a common…
Bibliography
American Heritage Dictionary of the English Language, Fourth Edition. New York: Houghton Mifflin Company, 2000.
Farrell, James T. "The Sun Also Rises 1943." Ernest Hemingway: The Man and His Work. Ed. McCaffery, John K.M. New York: World Publishing Co., 1950. pp. 221-225.
Democracy in America by Tocqueville
Tocqueville provides various reasons why despots favor citizens' embrace of individualism. In his arguments, Tocqueville shows that democracy breeds selfish individualism. According to Tocqueville, individualism is a calm feeling that disposes each citizen to isolate himself from the mass of his fellows and to withdraw into a circle of family and friends. This makes individualism a stepping-stone that leads to egoism. Individualism is invaluable to people living in a democratic society. These people tend to be disjointed from their larger families and communities. This is unlike aristocratic societies, where people live together, with families and communities taken to be critically important and of immense concern. In fact, despots do not like such societies or people. They like people who show signs of caring for nothing or no one as long as…
Communication Accommodation Theory holds that we will adjust our communication styles when dealing with people of different cultures. We will use different language and different speaking styles depending on the audience of our speech. One particular example is with authority figures. Most people will speak to authority figures in a more formal way than they would to friends and even family members. The same is true in the workplace -- the workplace embeds something formal in the setting, and that formality is then reflected in the speech the individual uses.
This phenomenon can also be viewed in a social setting. When an individual socializes with people of different groups, accommodation in speech is common. A good example is the difference between conversation styles within social groups of one gender vs. interactions between members of groups from different genders. Single-gender groups will not use accommodating language, but mixed-gender groups…
Rather than limiting people to what has always been done, individualism encourages them to explore different personas throughout their lives, trying on different identities in school and at work.
The idea that the individual is valuable also underlines our modern political system in a progressive fashion. Every person has the right to freedom of expression, even if the majority disagrees with his or her viewpoint -- the minority view may still have something to contribute to the majority's point-of-view. Even in our modern medical system, every life is seen as potentially and equally valuable. Thus, the idea of individualism is yoked to a kind of philosophical pluralism -- every individual is equally valuable, and is entitled to the same opportunities in life.
If the risks that were taken in the past worked out well, individuals would be more likely to take risks in the future. It is clear from those findings that it is important to differentiate between the different types of risks when it comes to the study of risk theory. Sociological contexts must also be more clearly examined and drawn out in order to be able to completely grasp the nature of the risks that are prevalent in today's modern society. The article showcases this information very well, and it is important in that it shows the need for further study into this issue and a better understanding of what must be learned in order to properly address risk.
Bibliography
Cebulla, a. (2007). Class or individual? A test of the nature of risk perceptions and the individualization thesis of risk society theory. Journal of Risk Research, 10(2), 129-148.
The Sense of Self and the Omniscient I in Whitman's Song of Myself. Introduction. Walt Whitman's "Song of Myself" is an epic poem that celebrates the individual self while exploring the interconnectedness of all things. The poem is filled with imagery and symbolism, and it is characterized by an omnipresent "I" that seems to encompass all of humanity. Whitman's conception of the self in this poem is one that is both public and universal while simultaneously deeply personal. This essay will examine the sense of self and the omniscient "I" present in "Song of Myself" and explore how Whitman's perception of self relates to the common public perception of the self at the time, historically and culturally. The Self and I. The sense of self in "Song of Myself" is both individualistic and universal. On the one hand, Whitman celebrates the uniqueness of the self, urging the reader to "celebrate [their]self" (line 1) and…
Works Cited
Whitman, Walt. "Song of Myself." Leaves of Grass. 1855.
Reynolds, David S. Walt Whitman's America: A Cultural Biography. Vintage, 1996.
Richardson, Robert D. Emerson: The Mind on Fire. University of California Press, 1995.
Individualism in the Eyes of Thoreau and Emerson
Literary works and philosophical ideologies in the early 19th century are characteristically individualistic, where belief in humanity's natural freedom (that is, affinity with nature) was given importance. The ideology of individualism is evident in the works of Henry Thoreau and Ralph Waldo Emerson, 19th century philosophers and literary writers who composed the works Walden and Self-reliance, respectively. These works from both philosophers advocate the need for an individual to assert his/her identity in a society intolerant of differences and changes. In Walden, Thoreau narrates and documents his attempt in establishing a new life in the woods, primarily to deviate from the comforts that he and society have learned to depend on. In his discourse, Thoreau states that, "many of the so-called comforts of life, are not only not indispensable, but positive hindrances to the elevation of mankind... To be a philosopher…
Although within capitalism Marx understands that an individual seeks a better situation for himself, his choices and the reasons for making his choices are based upon the capitalist system that society has instituted. Furthermore, Marx's view of history and the motivations of history are much different from those of Hobbes and Locke. To Marx, all of history is a class struggle. In the capitalist system laborers give their labor to the capitalists. Locke writes about the body and labor that, "nobody has any right to but himself. The labour of his body and the work of his hands, we may say, are properly his" (Chap 5). This means, to Locke, that a laborer is working with his own property, his own body, as an individual. Marx differs in this assumption as not only does the laborer have very little choice in the system, but also that while laboring "a crowd of people…
600). What Cushman means by this is that the self has become empty resulting from the loss of the community, tradition, and shared experience connected to specific cultures or communities (Cushman, 1990, p. 600). This empty self then needs emotional fulfillment, which individuals have sought in consuming products and ideas offered by the media and by shops. Indeed, the author claims that the current psychological phenomena of narcissism and borderline states are the direct product of the emptiness created by the post-World War II loss of connection to humanity via common culture and belief systems.
Twenge (2006, p. 2), on the other hand, believes that individualism has reached an ultimate high with today's young generation, or what the author refers to as the "Generation Me." This is a generation for whom morality and human connection are exclusively focused upon the individual as well as individual desires or ideals. Even love…
Here we see that the staff and the students had their own responsibilities and those responsibilities are quite different from the ones we find in traditional schools. Horton thought that a significant aspect of the teacher's role was to empower students to "think and act for themselves" (Thayer-Bacon). We can see that Horton placed responsibility on both the students and the staff. They were to learn from one another but the staff was to be aware of the students' plight as well as help them be the best that they could be.
Is what Highlander does "really" adult education? Why or why not?
Highlander does educate but it is not typical in comparison to traditional learning. When we think of adult education, we think of textbooks, professors giving lectures, students taking notes, and a most definite dividing line between the two. Students and professors do not generally have to…
The trainer will then focus on the steps to be taken to develop new skills. For example, if the trainer wants to talk about motivating, leading, negotiating, selling or speaking, it is best to start with what the learners do well before showing some chart on Maslow's theory, Posner's leadership practices, or selling skills from some standard package that has been developed elsewhere. Many foreign trainers make grave errors because they do not consider the values and beliefs of the trainees' culture. Training must make a fit with the culture of those being trained, including the material being taught, as well as the methods being used (Schermerhorn, 1994).
Abu-Doleh (1996) reports that Al-Faleh (1987), in his study of the cultural influences on management development, asserts that "a country's culture has a great influence on the individual and managerial climate, on organizational behaviour, and ultimately on the types of management development…
American National Character (history)
The Ongoing Search for an "American National Character"
This assignment asks the following pertinent and challenging questions: Is it possible to find trends amongst so much diversity? What characteristics are distinctly American, regardless of class, race, and background? What is problematic about making these generalizations and inheriting the culture? What have we inherited exactly? What problems arise with our ideals - and are we being honest with ourselves? Discuss individualism and the "American Dream." Are these goals realized and are they realistic? This paper seeks solid answers to these often elusive questions.
The search for a national character should be never-ending, and the pivotal part of the search -- the part that should be enlightening and enriching for the seeker of that knowledge -- may just be the inspiration drawn from the books and authors that spring to the seeker's mind along the way to discovery.
Who is presently engaged in a…
References
Bellah, Robert. Habits of the Heart: Individualism and Commitment in American Life.
" (Shiele, 2006) All of these are important yet they do not address the use of "the worldviews and cultural values of people of color as theoretical bases for new social work practice models" (Shiele, 2006) but instead hold the beliefs that: (1) that only White people - especially White men - have the ability and skill to develop theories and social work practice models; (2) that people of color, specifically African-Americans, lack the ability and skill to develop theories and social work practice models; (3) that the precepts of European-American culture are the primary, if not the only, precepts through which social problems can be analyzed and solved; and/or (4) that culture, and the internalization of culture by the theorist, has little or no effect on theory - that theory or theorizing is mostly or completely an objective activity." (Shiele, 2006)
Bibliography
conservative intellectual movement, but also the role of William Buckley and William Rusher in the blossoming of the youth conservative movement
Talk about the structure of the paper, which is not strictly chronological (i.e., Hayek before the rest) - in this order for thematic purposes, to enhance the genuineness of the paper (branches of the movement brought up in order of importance to the youth conservative revolt). For instance, Hayek had perhaps the greatest impact on the effects of the movement - Buckley and Rusher. These individuals, their beliefs, their principles were extremely influential in better understanding the origins, history, and leaders of American conservatism.
Momentous events shape the psyche of an individual as the person matures. A child who grows up in poverty vows never to be like his parents, and keeps this inner vow to become a millionaire. A young woman experiences sexual trauma as a teen, and chooses a career that…
Bibliography
George Nash, The Conservative Intellectual Movement in America Since 1945. George McGinnis, "The Origins of Conservatism," National Review Online, http://www.nationalreview.com/22dec97/mcginnis122297.html
individual is a product of society, rather than its cause.' Discuss.
The relationship between the individual and society is a recurrent theme, and the two are profoundly linked concepts in the fields of anthropology and sociology. While the individual is defined as a human being who is considered isolated from and separate from the broader community, the society is thought of as the aggregate of these individuals or a more holistic structure that extends beyond the individuals themselves. However, both concepts are problematic since their significance varies according to whether the approach is holistic, focusing on society, or individualistic, focusing on the individual. Therefore, the causal relationship between the individual and society is of the utmost importance in the related academic fields. Since this subject is evidently central to the study of humans, many social theorists have taken a focused interest in these relationships. A classical debate brings into conflict advocates of society's…
Columbus reveled in making distinctions between his own culture and 'the other,' in a way that prioritized his own culture, even though ironically he went in search of a non-Western civilization's Indian bounty of spices.
Columbus' eradication of another civilization is the most extreme form of Western civilization's prioritization of distinction, in contrast to Buddhism's stress upon the collapse of such distinction. The most obvious negative legacy of Columbus, for all of his striving and inquiry, is the current racial divisions of our own society and the damaged material and cultural state of Native Americans. Although a change of attitude cannot heal these distinctions alone, adopting at least some of the Buddhist spirit of the acceptance of the 'Other' as one with the self or 'non-self' might be an important first step in creating common ground in our nation. Our nation was founded not simply in democracy, but upon European…
Huckleberry Finn and What Makes an American
What Makes Twain's Huckleberry Finn American?
"Those canonic ideals -- self-government, equal opportunity, freedom of speech and association, a belief in progress, were first proclaimed during the era of the evolution and the early republic and have developed more expansive meanings since then," these are the basic core ideals which make something truly American (Kazin & McCartin 1). The freedom to live as we want, say what we want, and govern ourselves -- these are what make us Americans in culture and ideology. In literature, these core elements are also often what define a book or character as truly American. Mark Twain's Huckleberry Finn adheres to the very ideals of what it is to be an American, which is what makes the work and its author truly Americanized in style and content.
At the same time, however, citizens use this belief to attempt to get as much as they can from the "system," exhibiting the same qualities that lead them to distrust the government.
There is also a deeper element to the problem, however, in what can most succinctly be described as the bastardization of the system of government and society envisioned by the revolutionaries like Madison, Washington, Jefferson, and Hamilton (Bellah, 250-6). The notion of democracy has come to be equated with individual freedom and truly rampant individualism, where the ability for each individual in society to protect their own interests is seen as the paramount effect of democracy. The framers of the Constitution and of American government and society as a whole, however, established a republic wherein the individual good was tied to the common good, and this was supposed to remain an explicit and conscious part of society (Bellah,…
Collectivism and individualism do exist concurrently in many countries throughout the world. The U.S. is a prime example of a society where they cohabitate. There are radical religious sects that strongly oppose moral issues such as abortion, gay marriage, or traditional beliefs that stand side-by-side with the very individuals that their beliefs are intended to suppress. However, despite their coexistence, the two sides are often subjects for heavy controversy and significant reasons for much of the political debate, and near-rioting uproar throughout societies across the map. It is apparent that these two orientations are able to exist in one culture at the same time, but not without great consequence.
The history of the United States has had a great impact on the way that cultural values have been developed and kept. Citizens of the United States agree that each individual has the right to their own beliefs and the…
In the introduction to Lao She's novel Lo-t'o Hsiang Tzu, first published in serial form between September of 1936 and May of 1937, the translator relates that Lao She's message in his previous works had explored "the conservatism of the traditionally educated. . . (and) their blindness to the necessity to modernize China. . . (with) the Chinese as the obstacle to progress," yet in Rickshaw, Lao She focuses on "the self-centeredness of the Chinese which he calls Individualism. . . (being) their crucial failing" (viii). The main character in the novel, Hsiang Tzu, a rickshaw puller, appears to be the "personification of this great flaw," namely individualism. Thus, Hsiang Tzu "is not a victim of a sick society but one of its representatives, a specimen of a malady that must be cured"…
In Chapter Thirteen, Hsiang Tzu reluctantly decides to try his luck at the Jen Ho employment agency, due to knowing "There's no other place I can go." It would seem that Tzu's wish to be an independent person is now a lost dream, for he feels nothing but "grievance, mortification and helplessness" in his heart. Thus, he "surrenders" to his fate and declares "the respectability, the ambition, the loyalty and the integrity he had put so much store in would never do him any good because his was a dog's fate!" (120). What he is trying to express is that his fate is like that of a human animal who works very hard for practically nothing and has no future and no prospects. Thus, his independent spirit is broken, for he knows he will end up like all the others in Peking, a common laborer without brains or hope.
At the conclusion of Rickshaw, Hsiang Tzu experiences the ultimate denigration when, as a member of a funeral procession, someone says to him "You boy! I'm talking to you, Camel! Look sharp, you motherfucker!" This indicates that Tzu, at least in the eyes of others, is nothing more than a pack animal and is undeserving of any respect. Completely humiliated, Tzu simply continues walking while looking for cigarette butts on the street "worth picking up." The final paragraph says it all: "Handsome, ambitious, dreamer of fine dreams, selfish, individualistic," this is Hsiang Tzu, a "degenerate, selfish, unlucky offspring of society's diseased womb, a ghost caught in Individualism's blind alley" (249).
In conclusion, it appears that Lao She is not actually against being an individual, but Hsiang Tzu's tale is indeed an allegory for China's plight during the 1930's. What he is apparently trying to say is that in order for China to become a full-fledged modern nation, its people must stick together instead of pursuing their own selfish ambitions and personal wants.
This study set out to test the hypotheses that people from Eastern cultural backgrounds compared to those from Western backgrounds would make fewer dispositional attributions about the behavior of fictitious characters that they read about and would also demonstrate a more collective attitude towards themselves.
With respect to the first hypothesis, that Western participants would make a greater number of dispositional attributions than would participants with Eastern cultural heritages, that hypothesis was supported. However, there are a few caveats that need to be mentioned with regards to this. First, the scenarios that were presented to the participants only provided two alternatives to explain the behavior of the person. One alternative was a negative dispositional explanation; the other was a situational explanation that could have been interpreted as far-fetched in some cases. Miller (1984) found that the tendency for Westerners to make internal attributions was higher for…
Aristotle, Hobbes, Machiavelli and Bellah
What are the different conceptions of knowledge that inform Hobbes's and Aristotle's respective accounts of politics? Be specific about questions of individualism, virtue, and justice. In Bellah's terms, what kind of politics would they support? How are they related to Bellah's views on the relationship between social science and social life?
Aristotle stated repeatedly that the needs of the state and society overrode individual pleasures, desires and happiness, while Hobbes regarded unchecked individualism as a menace to public peace and good order. Public virtue and justice for Aristotle were not based on purely individual feelings, desires or personal happiness, for "while it is satisfactory to acquire and preserve the good even for an individual, it is finer and more divine to acquire and preserve it for a people and for cities" (Aristotle 2). Virtue is the chief end of political life, but only the vulgar…
WORKS CITED
Aristotle. Nicomachean Ethics. Hackett Publishing Company Inc., 1994.
Bellah, Robert N. Habits of the Heart: Individualism and Commitment in American Life. University of California Press, 2008.
Moreover, because of the high levels of tourism, no one would be out of place in Miami. Racism exists in North America, in the United States and in Miami; however, it is not as pronounced as in some other, more conservative cities.
Ethnicity
With race and ethnicity it is important to be mindful of the history of America in relation to how immigrants have been treated in general, and to Latin immigrants specifically. There are a number of ethnic groups represented within the Latin immigrant population and there should not be blanket generalizations applied to the group as though they represent one culture or ethnic group.
Non-verbal Communication
Ofttimes, nonverbal communication can be as significant as verbal communication. For those individuals of Latin descent, some of the more general associations with nonverbal communication are the importance of shaking hands in the introduction process. Culturally, there is purportedly the view that…
References
Hofstede, G. (1984) Culture's Consequences: International Differences in Work Related
War Films
Taking Jeanine Basinger at her word would leave us with far fewer war films than we think we have. Basinger is a 'strict constructionist,' accepting as war films only those that have actual scenes of warfare (Curley and Wetta, 1992, p. 8; Kinney, 2001, p. 21). That means that the four films that will be considered here, and especially the two World War II films, are not war films. By Basinger's yardstick, neither Casablanca nor Notorious, neither Born on the Fourth of July nor Coming Home would qualify as war films.
On the other hand, films such as White Christmas, a lightweight Bing Crosby-Danny Kaye-Rosemary Clooney-Vera Ellen comedy about the aftermath of war for an old soldier, might well be a 'war' movie. The opening scene is one in which the old soldier, Dean Jagger, is reviewing his troops when, somewhere in Italy during the Christmas lull, bombs…
Works Cited
Canby, Vincent. Review/Film; How an All-American Boy Went to War and Lost His Faith. (1989, December 20). Online.
Business in Czech Republic
Doing business in a foreign country is never easy. It is not so much about the tax regulations, import/export duties or getting a license. The main challenges accrue from the differences in cultural values and social or religious beliefs. For Steve, it may prove easier to at least communicate with the people and establish a bond with them. It is also important to know that the Czech Republic is very keen on attracting foreign investment and a strong U.S. presence is desired. For this reason, Steve doesn't need to worry about whether he will be welcome in that country or not. As for cultural differences, it must be borne in mind that both the Czech Republic and the U.S. have some similarities and some differences but these differences can act as a major hurdle if not properly understood. Business is often taken seriously in the Czech Republic and…
classroom instruction and are these ideas/strategies feasible for a particular classroom, can they be adapted, altered, or incorporated to benefit students with disabilities?
A Critique of the Journal Article 'Cultural Models of Transition: Latina Mothers of Young Adults with Developmental Disabilities' and Implications for Classroom Instruction
The journal article Cultural models of transition: Latina mothers of young adults with developmental disabilities was a qualitative examination of attitudes of Latina mothers of young adults with disabilities, toward approaches to the transitions of those young adults from school-age activities to more independent living. According to the authors: "Sixteen Latina mothers of young adults with disabilities participated in the study, recruited from an agency
Monzo, Shapiro, Gomez, & Blacher, Summer 2005). The qualitative study emphasized five themes: life skills and social adaptation; importance of family and home vs. individualism and independence; mothers' roles and decision-making expertise; information…
Germany is a parliamentary democracy. It is a multi-party system, which means that political parties must often share power to govern. It is currently led by Chancellor Angela Merkel, who became Germany's first female chancellor in 2005. Merkel is the leader of the center-right Christian Democrats (CDU). Merkel won a close election and serves as "chancellor in a 'grand coalition' involving the CDU, its Christian Social Union (CSU) allies and the centre-left Social Democratic Party (SPD)" ("Germany," BBC, 2012). Merkel has faced recent resistance from the German populace, who are growing increasingly discontented with the feeling that Germany is being forced to 'bail out' financially undisciplined members of the European Community, to preserve the EC in the wake of the meltdown of the…
Hofstede
Azure Sky Tea needs to determine the best choice of a home base.
A number of factors must be taken into consideration including the cultural dimensions of the different potential host nations.
Possible Combinations
There are a number of countries that Azure Sky Tea can consider, and several factors the company can take into consideration when making this decision. Hofstede identified a number of different cultural dimensions that can be examined for each potential host country. These are individualism, uncertainty avoidance, power distance and masculinity/femininity (Hofstede, 2013). Individualism reflects the importance of the individual in the culture, compared with collectivism which emphasizes a collective group. Uncertainty avoidance reflects "the degree to which the members of a society feel uncomfortable with uncertainty." Power distance reflects power roles in a society, manifested mainly in the interactions between people in different positions within the company. Masculinity emphasizes competition, assertiveness and achievement, while femininity is seen…
Asian women. There are three references used for this paper.
Asian women face a number of challenges in the workplace. It is important to look at how individualism-collectivism is a barrier to these women, and determine possible resolutions which can help them overcome this barrier.
Individualism-Collectivism
Of the "psychological dimensions that differentiate between individuals from different cultures, it is argued that the individualism-collectivism dimension is most relevant to vocational psychology (Leung, 2002)." Compared to the work values of individualism and self-direction which are seen in the United States, Asian communities exhibited work values that are "more collectivistic in orientation, such as altruism, tradition, and conformity. Parental and family expectations have always been salient factors in the career choice process for Asian women (Leung, 2002)."
Goals
Employees of Japanese businesses were asked to "rate their experiences of conflicts with their supervisors in terms of goals, tactics, and outcomes." The findings indicated…
Narrator
In many ways, the literary movements and philosophies of determinism and individualism are opposites of one another. Determinism is one of the facets of Naturalism, and is based on the idea that things happen due to causes and effects largely out of the control of people and that choice is ultimately an illusion. Individualism, however, is widely based on the idea of free will and the fact that people can take action to control their surroundings and their fates in life. Theodore Dreiser's Sister Carrie provides an excellent example of determinist literature and is based on the critical ideas of amorality and environmental factors controlling a person's fate, while Mark Twain's The Adventures of Huckleberry Finn is an example of individualism and illustrates the idea that a person can take action to make his or her own fate.
Dreiser's work chronicles the rise to wealth and social prominence of…
Learning Project
As our nation becomes increasingly diverse we will be presented with the challenge of understanding our cultural differences. The purpose of this paper is to develop and design a learning project that compares cultural differences of two ethnic/cultural groups. For the purposes of this project we will compare the differences between Asian and Western cultures. The project will be based on the cultural impact on performance in the workforce, production, sales, customer service, etc.
Before we can create a learning project we must first understand the cultural backgrounds of both groups.
Cultural Backgrounds
Asian Culture
The economic boom seen in various Asian countries during the 90's called into question the work ethic and cultural values that made these nations successful. One of the most definitive explanations for the work values that are prevalent in Asia, especially China, has been attributed to the concept of Confucianism. Confucianism is the…
"Lady Gaga in part because she keeps us guessing about who she, as a woman, really is. She has been praised for using her music and videos to raise this question and to confound the usual exploitative answers provided by 'the media'… Gaga's gonzo wigs, her outrageous costumes, and her fondness for dousing herself in what looks like blood, are supposed to complicate what are otherwise conventionally sexualized performances" but this complication does not necessarily lead to a feminist liberation (Bauer 2010).
Still, Gaga has been embraced by a generation of women, some who shun and some who embrace the feminist label. "Lady Gaga idealizes this way of being in the world. But real young women, who, as has been well documented, are pressured to make themselves into boy toys at younger and younger ages, feel torn. They tell themselves a Gaga-esque story about what they're doing. When they're on…
Works Cited
Bauer, Joy. "Lady Power." The New York Times. June 20, 2010. June 21, 2010.
We went in assuming we would be rather homogenous and then found that the dynamic of the group could have broken down by virtue of differences. Once those differences were noted by myself, the group leader, the task became essentially easier, as more time working in the collective was sought by the group, and as an individualist, I simply had to adapt to this idea and allow for this time.
Within the works of Charles Handy there is also a message that influenced my thinking on this project and its dynamic and communication strategies. Handy stresses that the application of political ideas to company management is inevitable and in particular he stresses that federalism is the concept most likely to be utilized to demonstrate company structure and change. Not only did I find this to be true regarding the materials gathered in the project context, HP, but also in the collective…
Success cannot be generalized; too often the word is used as a term referring to financial independence or owning one's own company. Yet the sanitation worker who goes to bed each night with a smile on her face also connotes success in the modern world. I support a multiplicity of success, a diversity of dreams fulfilled.
My success, however, definitely includes financial independence and career recognition, but it also includes the clear conscience that comes from knowing that I did it all by and for myself, with confidence and conviction. Like Roark in Rand's book, I got where I am today due to my hard work and not hand-outs. Thus far I have not compromised my beliefs or goals to fit with prevailing norms, just as Roark would not deign to design that which disgusted him or sell out. Like Roark I listen to internal cues and heed not the…
references, determining which courses to take in college, and seeking professional experience that will help me master the skill sets requisite for success as a CPA. Therefore, I hope to find internship positions within firms that I am interested in, in the hopes of eventually securing an entry-level position immediately upon graduation. I also need to network with role models in the field, and when I am enrolled at USC I will immediately seek interactions with like-minded yet challenging individuals in a mutually supportive atmosphere.
Along with taking relevant coursework I hope also to participate fully in campus life: through social and athletic activities I can truly flourish as a student at USC. Though I am just taking the first steps toward a successful accounting career I know now that I will contribute to the USC campus environment. My singular set of skills and philosophies will be an asset to the USC community, in which I feel I will flourish and succeed. My definition of success therefore currently includes admission to the university as an accounting major. Thank you for your consideration.
They are therefore not determined or restricted by factors such as norms, morals or external principles. A concise definition of this view is as follows:
Constructivism views all of our knowledge as "constructed," because it does not reflect any external "transcendent" realities; it is contingent on convention, human perception, and social experience. It is believed by constructivists that representations of physical and biological reality, including race, sexuality, and gender are socially constructed
(Constructivist epistemology)
Another theoretical and philosophical stance that is pertinent to the understanding of the status of the family in modern society is the post-structural or deconstructive view. This is allied to a certain extent with the constructivist viewpoint, which sees society as a social construction and denies the reality of transcendent factors. This view therefore sees the family as a structure which is not fixed or static but is relative in terms of the norms and values…
References
Anderson, G.L. (Ed.). 1997. The Family in Global Transition. St. Paul, MN: Professors World Peace Academy.
Empress Luxury Lines Case Study
The situation faced by Antonio definitely is a difficult one. On the one hand, Kevin could be jeopardizing the health and well-being of the company if he chooses to approach the insurance company and tell them about the incident. Yet, on the other, Antonio would essentially be condoning criminal behavior if he simply sweeps the situation under the rug to avoid potential consequences. With everything taken into consideration, Antonio should take an individualistic approach and allow Kevin to make the choice he is going to make, thus protecting the ethical and moral sanctity of the company despite any consequences that may arise.
There are two main approaches that would govern potential decisions to be made by Antonio. First, there is the utilitarian approach that aims to produce the best result for all members in the organization. This ethical approach looks at the overall utility…
References
Daft, Richard L. (2012). Management. 10th ed. Cengage Learning.
Dykes, J'Mikel. (2010). Top ten management of the utilitarian approach to ethics: An overview of moral worth. Bukisa. Web. http://www.bukisa.com/articles/379692_top-ten-management-on-the-utilitarian-approach-to-ethics-an-overview-of-moral-worth
Husted, Bryan W. (2001). The Impact of Individualism and Collectivism on Ethical Decision Making by Individuals in Organizations. Institute of Technology Monterrey. Web. http://egade.sistema.itesm.mx/investigacion/documentos/documentos/4egade_husted.pdf
"
In the instance of America's shameful racial history, the self-interest of southern whites combined with the violent coercion of black slavery would produce a highly objectionable variance on the 'social contract.' It is therefore a decidedly important reality that certain individuals refused this contract. One is especially inclined in such instances to recognize the importance of non-conformity in helping to drive improvements in human rights, equality and other dimensions of positive civil order. For instance, we consider luminaries such as Martin Luther King, Jr. or the earliest participants in the American feminist movement, whose willful decision to resist the forces of authoritarianism as self-defined individuals would be essential to moving our society in a more progressive direction. In the case of Martin Luther King in particular, we recognize the considerable risk to his own person that the Civil Rights leader undertook in spite of the prevailing cultural mores of…
The United States has not always been a free space for strong female characters. In fact, in its earliest stages, most women were confined to very strict gender rules and restrictions. That is definitely true in the case of the Puritan culture that settled in the North East in the 17th century. Catharine Maria Sedgwick's Hope Leslie presents a surprisingly strong and independent female protagonist who fights for what she believes in and against the constraining gender norms of the very conservative Puritan culture in the early days of the Massachusetts colony. This represents a connection between the American idea of independence and individualism and women's role in American history. Sedgwick is also standing up against the gender norms faced in her own era with such a strong female lead.
Prayer is the contemplation of the facts of life from the highest point-of-view. It is the soliloquy of a beholding and jubilant soul." (36)
5) Travel
Travel is too often used as an escape and reflects deep spiritual discontent. "The soul is no traveller; the wise man stays at home, and when his necessities, his duties, on any occasion call him from his house, or into foreign lands, he is at home still." (39)
6) Individualism
Acting independently requires a great degree of courage, as individualism is not rewarded in a society that champions conformity. "It is easy in the world to live after the world's opinion; it is easy in solitude to live after our own; but the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude." (9)
Until the 19th century, nature in art was usually, if present at all, merely in the background of portraits. History and human beings were considered the true, fitting subjects of art. However, as nature began to retreat from everyday life with the rise of technology, artists began to look on nature as a source of inspiration. As nature became rarer, artists gave nature more significance and importance -- nature became more symbolically significant, even as 'real' nature was being overrun by factories, cities, and railroads. Rather than something to be tamed, nature was now something precious. But although human beings may not be present in all Romantic depictions of nature, human thoughts about nature clearly are -- an artist always paints his or her own point-of-view, not a literal representation of nature. Even in the most realistic depictions of nature, the artist is always selective in what…
The exoticism and escapism of Romantic Art is manifest by the focus in the features of Napoleon on the bright or the wider scenes of the battlefield. However, it is the works of Francisco Goya that perhaps most perfectly epitomize the intense individualism and emotion of Romantic art. Even the titles of Goya's works like "Yo lo Vi (This I saw)" and "Para Eso Yo Nacido (for this I was born)" place the artist's individual consciousness squarely in the center of the meaning of the painting. There is no attempt at objectivity, and no apology for the subjective nature of the representation.
The Third of May" although a political work, is not of a noble or significant figure, or a beautiful human body like "Marat." Most of the painting has a hazy quality, as if seen through the night, except for the illumination of the victims. It shows the ugliness…
A McDonald's hamburger in the United States and in the United Kingdom for example is to be sold within the same price range when the exchange rate is calculated. McDonald's has had a large amount of success in its global expansion. The reasons for this comprise a number of factors, one of which is the perceived value to the purchaser. In all countries where McDonald's is sold, the customer perceives the value of food purchased for a certain price as economically viable. The food is of the same quality and portion size globally. This kind of stability is valued by the customer.
Possible short-term problems for McDonald's relate to the daily changes in foreign exchange rates. It is hardly practicable to change prices on a daily basis. Customers have come to expect stability from the company, especially in terms of price, which makes maintaining PPP a challenge. This problem is…
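To make the purchasing power parity idea in the passage above concrete, here is a rough worked example; the prices and the exchange rate are illustrative assumptions, not figures from the original essay:

\[
P_{\text{US}} = S_{\text{USD/GBP}} \times P_{\text{UK}}
\]

For instance, if a hamburger sells for £3.60 in the UK and the assumed exchange rate is \( S_{\text{USD/GBP}} = 1.25 \), parity would imply a U.S. price of about \( 1.25 \times 3.60 \approx \$4.50 \). A sustained gap between the actual U.S. price and that implied level is what the PPP argument suggests should eventually be corrected by prices or the exchange rate.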
Sources
Antweiler, Werner (2006). Purchasing Power Parity. University of British Columbia. http://fx.sauder.ubc.ca/PPP.html
recurring dream in which I am standing at a podium in front of a large audience. I am the head of an organization, although my exact title and the nature of the organization are vague. In the dream, I deliver a speech, detailing some aspect of company policy. I am sure of myself; I speak with authority and conviction but for some reason I stand alone. Not one member of the crowd agrees with me, likes me, or supports me. When I wake up I feel a strange mixture of pride and humiliation. Yet like Howard Roark, hero of Ayn Rand's novel The Fountainhead, I realize that my unpopularity does not preclude my success. Roark succeeds not according to an external scale of measurement, based on societal values or norms and fueled by conformity. Rather, Roark is a hero and a success because of his unflinching individualism and his willingness…
Women's History
This report aims to present my views on the fact that wage work during the late 19th and early 20th centuries has more or less reinforced women's roles within their families or, more accurately, has provided an extension to their familial roles. The objective of this work is therefore to present an argument that contradicts a belief held by many historians that wage work actually enabled women to develop a new sense of individualism as well as economic independence. These freedoms are supposed to have liberated women from their roles in the traditional home. The report also attempts to incorporate how the effects of race and/or ethnicity come into play in this situation.
First and foremost, the idea of wage work and non-wage work must be explored to give credence to the topic at hand. Women have traditionally been unpaid for the bulk of their work while they…
Manning (1993) undertook one of the early studies on the question of whether cohabitating and non-cohabitating single women have equal tendencies towards marriage prior to childbirth. In addition, Manning also looks for differences between black and white women, as well as by socioeconomic status. Her research finds that for Caucasian women in their twenties, those who cohabitate with their mates are more likely to get married prior to childbirth. This statistical relationship was not observed among African-American women in the same age group.
This research therefore suggests that cohabitation carries different meanings for the two groups, an issue which may be of interest to symbolic interactionists. For African-Americans, cohabitation and childrearing were deemed more acceptable. In contrast, Caucasian women were more likely to consider cohabitation a stage in the marriage process.
Research is still being conducted regarding the effects of cohabitation unions on children, especially since statistics show that at least…
Bumpass, L.L. and H. Lu. 1999. Trends in cohabitation and implications for children's family contexts in the U.S. CDE Working Paper No. 98-15. Center for Demography and Ecology, University of Wisconsin-Madison.
This viewpoint was the justification for global colonization, the enslavement of numerous groups of indigenous people, and the massive enforcement of certain religions (such as Christianity) on different peoples throughout the world. There are a couple of interesting facts to note about the contemporary view of this subject among the Western world. The U.S. was the only country surveyed in which more people still adhered to the belief that their culture was better than that of other civilizations and countries. All of the European countries have apparently abandoned this notion, or at the very least have more people who disbelieve that they are culturally superior to others than those who do. Germany is nearly evenly divided on this subject (No author, 2011).
The category in which the U.S.'s views on autonomy are most prevalent is that which compares the values of individualism vs. the importance of a state…
Cultural Comparisons and Management Functions
This paper examines cultural comparisons and discusses how an American manager carries out management functions in the process of supervising German employees. With respect to individualism vs. collectivism, both Germany and the U.S. score high in individualism, that is, the degree to which individuals further their own interests. However, according to Hofstede's model of cultural dimensions, Germany's score of 67 ranks far enough below the U.S. score of 91 that the manager should expect differences in their approaches to working together in teams for instance. German employees would have only a moderate amount of group cohesion, with only a moderate amount of interpersonal connection and sharing of responsibility.
For the American manager, the two countries' respective scores indicate that the manager should expect his or her German employees to be less individualistic than their manager. The manager should place a relatively high value on people's…
Self-Reliance and the Road Not Taken
American Transcendentalism: Emerson and Frost
There are several qualities that are inherent in American literature that help to set it apart from English literature. Among the earliest themes explored in American literature was the concept of self-reliance and individuality. These concepts are prevalent among writers and advocates of Transcendentalism, a subset of American Romanticism. Ralph Waldo Emerson explored the concept of individuality in his essay, "Self-Reliance," and also aimed to define how self-worth is measured. Likewise, Robert Frost embraces the concepts of individuality and self-worth as defined by Emerson. Emerson's influence on Frost can be seen in the theme and narrative of Frost's poem "The Road Not Taken." Both Emerson and Frost comment on the importance of the self and the impact that individuality has on a person.
Transcendentalism is an American literary, political, and philosophical movement that aimed to bring an individual to…
Health and Fitness Survey
24 Hour Fitness, a global leader in fitness, is committed to making fitness accessible and affordable to people of all fitness levels. The company is the largest privately owned fitness chain in the world, with clubs in the United States, Europe and Asia. In the United States, 24-Hour Fitness and its Q. Sports Clubs division are the industry leaders in fitness. In Asia, clubs operate as California Fitness. In Europe, clubs operate as S.A.T.S. Sports Clubs. In Norway, Sweden and Denmark, clubs operate as Form and Fitness.
Convenient locations, the latest equipment, affordable prices, knowledgeable staff and outstanding service, as well as facilities that are open up to 24 hours a day, seven days a week, are all factors that have contributed to the company's tremendous growth and success.
values of American culture. Specifically, it will connect this theme with two or more of the following: American energy consumption and foreign policy. Many writers (American and foreign) have commented on the core values of American culture, using terms like "rugged individualism," "individual freedom," "self-reliance," "pioneer spirit" and "democracy." Do you see a theme here?
AMERICAN CULTURE
Americans have always been noted for their love of individual freedoms and their self-reliance. This tradition began before the Revolutionary War, when America stood up for her rights as a colony of England. Americans have been called "rugged individualists" who embody a "pioneer spirit" because we demanded our rights then, and we continue to do so today in a wide variety of areas; all you have to do is turn on the television or read a newspaper to see some of these core values exhibited every day in our culture.…
American National Character
America can almost be thought of as a massive experiment in culture. Here we have a nation inhabited almost entirely by immigrants, all with different languages, customs, beliefs, and appearances, who are forced to somehow reach a common understanding and identity. Throughout over two hundred years of American history, many differences have threatened to unravel our diverse nation, but still, many commonalities have ultimately held it together. Amidst such a range of economic, political, and racial mixtures it is a daunting task to identify what characteristics are uniquely American.
Yet, what can be considered "American" can also be traced to the roots of the nation. The place now called the United States was founded by puritan settlers who valued the notion of all men's equality in the eyes of God. Accordingly, the authors of the U.S. Constitution included equality under the law as one of its…
Term Paper
Alternatives to Methodological Individualism
Alternative to Methodological Individualism
In this report, I shall attempt to identify, compare and contrast the comprehensive models of the economic systems focusing on the Methodological Individualism and the…
Term Paper
Pursuit of Individualism and Objectivity
Apart from literary arts, individualism is also most evident in the field of education. The development of educational institutions, spearheaded by the Florentine Academy, an informal organization of humanists,…
Term Paper
Ernest Hemingway on Individualism and Self-Realization
Ernest Hemingway on individualism and self-realization. Specifically, it will discuss several sources, and incorporate information from at least one Roberts and Jacobs short story, poem, or play. Ernest Hemingway…
Essay
Why Citizens in Democracy Must Embrace Individualism
Democracy in America by Tocqueville
Tocqueville provides various reasons from despots as to why citizens must embrace individualism. In his arguments, Tocqueville shows that democracy breeds selfish individualism. According…
Essay
Communication Individualism Is Defined as
2.
Communication Accommodation Theory holds that we will adjust our communication styles when dealing with people of different cultures. We will use different language and different speaking styles depending…
Term Paper
Thoreau and Emerson
Individualism in the Eyes of Thoreau and Emerson
Literary works and philosophical ideologies in the early 19th century are characteristically individualistic, where belief in humanity's natural freedom (that is,…
Term Paper
American National Character History
American National Character (history)
The Ongoing Search for an "American National Character"
This assignment asks the following pertinent and challenging questions: Is it possible to find trends amongst so…
Essay
Views and Conceptions of Aristotle Hobbes Machiavelli and Bellah
Aristotle, Hobbes, Machiavelli and Bellah
What are the different conceptions of knowledge that inform Hobbes's and Aristotle's respective accounts of politics? Be specific about questions of individualism, virtue, and…
Term Paper
Children With Disabilities
classroom instruction and are these ideas/strategies feasible for a particular classroom? Can they be adapted, altered, or incorporated to benefit students with disabilities?
A Critique of the Journal Article…
Term Paper
Cohabitation the Practice of Cohabitation
Manning (1993) undertook one of the early studies on the question of whether cohabiting and non-cohabiting single women have equal tendencies towards marriage prior to childbirth. In addition, Manning…
Essay
American and European Values Traditionally
This viewpoint was the justification for global colonization, the enslavement of numerous groups of indigenous people, and the massive enforcement of certain religions (such as Christianity) on different peoples…
Essay
Cultural Comparisons and Management Functions This Paper
Cultural Comparisons and Management Functions
This paper examines cultural comparisons and discusses how an American manager carries out management functions in the process of supervising German employees. With respect…
Term Paper
Global Cultural Analysis Nigeria
Global Business Cultural Analysis
Nigeria
Nigerian History
Synopsis of Nigerian government
Nigerian monarchy to presidential system
The evolution of Nigeria from British control to a civilian democratic government
Nigerian…
|
German employees would have only a moderate amount of group cohesion, with only a moderate amount of interpersonal connection and sharing of responsibility.
For the American manager, the two countries' respective scores indicate that the manager should expect his or her German employees to be less individualistic than their manager. The manager should place a relatively high value on people's…
Self-Reliance and the Road Not Taken
American Transcendentalism: Emerson and Frost
There are several qualities that are inherent in American literature that help to set it apart from English literature. Among the earliest themes explored in American literature was the concept of self-reliance and individuality. These concepts are prevalent among writers and advocates of Transcendentalism, a subset of American Romanticism. Ralph Waldo Emerson explored the concept of individuality in his essay, "Self-Reliance," and also aimed to define how self-worth is measured. Likewise, Robert Frost embraces the concepts of individuality and self-worth as defined by Emerson. Emerson's influence on Frost can be seen in the theme and narrative of Frost's poem "The Road Not Taken." Both Emerson and Frost comment on the importance of the self and the impact that individuality has on a person.
Transcendentalism is an American literary, political, and philosophical movement that aimed to bring an individual to…
Health and Fitness Survey
24 Hour Fitness, a global leader in fitness, is committed to making fitness accessible and affordable to people of all fitness levels. The company is the largest privately owned fitness chain in the world, with clubs in the United States, Europe and Asia. In the United States, 24-Hour Fitness and its Q. Sports Clubs division are the industry leaders in fitness. In Asia, clubs operate as California Fitness. In Europe, clubs operate as S.A.T.S. Sports Clubs. In Norway, Sweden and Denmark, clubs operate as Form and Fitness.
|
yes
|
Poetry
|
Did Robert Frost's "The Road Not Taken" mean to celebrate individualism?
|
yes_statement
|
"robert" frost's "the road not taken" meant to "celebrate" "individualism".. the intention of "robert" frost's "the road not taken" was to "celebrate" "individualism".
|
https://georgiabulletin.org/commentary/2017/05/field-dreams-shines-light-faith/
|
'Field of Dreams' shines in the light of faith - Georgia Bulletin ...
|
By DAVID A. KING, PH.D., Commentary | Published May 19, 2017
If Robert Frost’s “The Road Not Taken” is perhaps our most misunderstood poem in American literature, then “Field of Dreams” must be our most misinterpreted baseball movie.
Frost’s famous closing line, “I took the one less traveled by, and that has made all the difference,” is persistently misread as a tribute to individualism and nonconformity, rather than what it really is, a lament for what might have been.
Likewise, the line from “Field of Dreams,” “If you build it, he will come,” which has entered the American language as a sort of colloquial testament to certainty, is now taken as a homespun acknowledgement of self-reliance and positive consequence.
Both lines, and both misinterpretations, are as American as they can be, and even when taken out of context, each evokes a wistful memory or even perhaps a smile.
Yet “Field of Dreams” deserves better. Far from being seen only as a sweet and simple film about the power of dreams, perseverance and the human spirit, “Field of Dreams” needs to be viewed from a theological perspective. For the Catholic audience, this means seeing the film as a meditation upon the mystery of purgatory.
Before the movie adaptation in 1989, “Field of Dreams” was an interesting and compelling novel titled, “Shoeless Joe,” in reference to the famous ballplayer Joe Jackson, who was among the eight Chicago White Sox permanently ousted from baseball for his supposed role in throwing the 1919 World Series.
Of his book, author W.P. Kinsella has said, “I have to disappoint fans by telling them that I do not believe in the magic I write about. Though my characters hear voices, I do not. There are no gods; there is no magic. I may be a wizard though, for it takes a wizard to know there are none.”
Those reflections about his work represent precisely why the film adaptation of the novel is superior to what is indeed a fine book. The filmmakers—and the movie is a masterwork of collaboration between director, cast and crew—approach the material with the assumption that there is a kind of magic in the world, that there is a God, and that there exists an underlying order the Catholic refers to as mystery.
The specific mystery in “Field of Dreams” is purgatory, which many Catholics tend to overlook, particularly in the midst of the church’s current emphasis upon evangelism and mercy. To me, purgatory actually represents an especially beautiful mercy, for it is inextricably joined to hope.
Consider this excellent summary of purgatory from The Catholic Encyclopedia: “The souls of those who have died in the state of grace suffer for a time a purging that prepares them to enter heaven and appear in the presence of the beatific vision. It is an intermediate state in which the departed souls can atone for unforgiven sins before receiving their final reward. The final testing of one’s faith is not a punishment but one of response.”
That statement should have been used to publicize the release of the film! Instead, one of the official taglines for the movie was, “If you believe in the impossible, the incredible can come true.” It’s less eloquent, but also less vague: of “Field of Dreams” and its connection to purgatory, a better tagline might be, “You ain’t seen nothing yet.”
Iowa farm turned baseball field
I’m assuming that you know the story, but for the sake of refreshment, here is a quick plot summary:
Ray Kinsella is a happily married Iowa farmer. Raised by his father, who played a little baseball and who taught his son a love of the game, Ray rebelled and in college was swept up in a fervor of 1960s radicalism and idealism. As a result, Ray and his father were estranged, and the father died without reconciling with his son. One evening in the cornfields, Ray hears a strange voice that implores, “If you build it, he will come.”
As time goes by, Ray is given a series of visions that convince him he should plow under his corn, build a baseball diamond, and await the return of Shoeless Joe Jackson. Ray gets part of it right. The field is built, and it is beautiful. Shoeless Joe does indeed come to play.
Yet Ray’s adventures are just beginning. The voice next implores him to “ease his pain.” Hence, Ray finds himself in Boston, kidnapping the writer Terence Mann and taking him to a Red Sox game. At the ballgame, both Ray and Terence see a message on the Jumbotron, exhorting them to go in search of Archie “Moonlight” Graham, who played in one major league ballgame without getting a chance at bat. Told to “go the distance,” Ray and Terence find Moonlight Graham, who it turns out had retired from baseball to become a small town doctor. Moonlight died in 1972, but through a series of fantastic events, Ray has a vision of the older man.
The next day, as they travel back to Iowa, Ray and Terence pick up a young hitchhiker, a ballplayer, whose name is Archie Graham. In the end, as the local bank tries to foreclose on the farm, Shoeless Joe is joined by a legion of former ballplayers, including a man who turns out to be Ray’s father. Moonlight gets his chance to bat, Terence is summoned off into the cornfield and happily vanishes, and Ray gets to play catch with his estranged father.
The film ends with an image of thousands of cars, their headlights slicing the Iowa darkness like souls, all bound for the mystical ballpark Ray has built.
Something beyond the cornfield
A summary really doesn’t do the movie justice. “Field of Dreams” is a beautiful film, evocative and lyrical, and marvelously cast and performed. It’s one of my perennial springtime films, one I watch and enjoy every year. Yet my growing appreciation of the film’s purgatory context has enriched my annual screening.
I doubt the filmmakers set out to make a meditation upon purgatory, and it’s certain that W.P. Kinsella puts no stock in it, but to appreciate fully any work of art means casting aside authorial intent and relying instead upon informed subjectivity.
“Field of Dreams” is not just a baseball movie, though of course it has to channel its larger themes through baseball, because baseball, unlike most other sports, exists outside of time. Purgatory, too, exists outside of time, though within the communion of souls and the church. We can’t measure it in terms of days or years; just as an inning could conceivably go on forever, so with our time of atonement in purgatory.
Each character on Ray Kinsella’s Iowa baseball field is atoning: Shoeless Joe for his suspected role in a gambling conspiracy, Terence Mann for his resentment and remorse, Moonlight Graham for denying the hope of a second chance, and Kinsella and his father for their estrangement. Yet in their purgation, each one is given mercy beyond measure. Jackson and all the other ballplayers get to play the game they love; Terence Mann is given the opportunity to be once again writer and philosopher; given the gift of choice, Moonlight Graham gets to bat and practice medicine; the Kinsellas are reconciled. That all of this occurs on a baseball field, in an Iowa cornfield, in the middle of nowhere is a beautiful affirmation of the mystery of purgatory.
In Dante’s “Divine Comedy,” the souls in hell know that they are lost. They are explicitly told, upon entering, to “abandon all hope.” Those in purgatory have the hope of eternal redemption, though they do suffer from desire. They all wish for little things of life—a train ride, a smoke, a kiss. Yet the characters in “Field of Dreams” know that there is something beyond the baseball diamond, in those mysterious waves of corn.
On more than one occasion, a character in the film poses the question, “Is this heaven?” “No,” answers Ray, “it’s Iowa.” I like to think Ray knows better.
In his beautiful ode to the union between art and the eternal, Keats writes in “Ode on a Grecian Urn” that “heard melodies are sweet, but those unheard are sweeter.” The characters undergoing purgation on that Iowa baseball field know one aspect of their collective redemptive suffering; they know that at some point their games have to stop. Yet they are all beginning to sense as well that they are being readied for something greater, something that in fact will never end.
St. Paul reminds us that “Now we see through a glass darkly; then, face to face. Now we know in part; then we will know even as we are known.” I think this is the essence of both “Field of Dreams” and purgatory.
There are two wonderful moments near the end of the film where the living and the dead connect with each other; one when Moonlight Graham chooses to leave the field to save a life, and the other when Terence Mann is invited into the corn.
The very day I wrote this piece, in a marvelous synchronicity, the Gospel reading for the day just happened to be, “Do not let your hearts be troubled. You have faith in God; have faith also in me. In my Father’s house there are many dwelling places. If there were not, would I have told you that I am going to prepare a place for you? And if I go and prepare a place for you, I will come back again and take you to myself, so that where I am you also may be” (Jn 14:1-3).
We are often implored to pray for the holy souls in purgatory. The vision of “Field of Dreams” poses a question the church has often asked itself: are those souls also able to pray for us? This is a great mystery of our faith. Perhaps if we build upon it?
David A. King, Ph.D., is associate professor of English and Film Studies at Kennesaw State University and director of adult education at Holy Spirit Church in Atlanta.
|
By DAVID A. KING, PH.D., Commentary | Published May 19, 2017
If Robert Frost’s “The Road Not Taken” is perhaps our most misunderstood poem in American literature, then “Field of Dreams” must be our most misinterpreted baseball movie.
Frost’s famous closing line, “I took the one less traveled by, and that has made all the difference,” is persistently misread as a tribute to individualism and nonconformity, rather than what it really is, a lament for what might have been.
Likewise, the line from “Field of Dreams,” “If you build it, he will come,” which has entered the American language as a sort of colloquial testament to certainty, is now taken as a homespun acknowledgement of self-reliance and positive consequence.
Both lines, and both misinterpretations, are as American as they can be, and even when taken out of context, each evokes a wistful memory or even perhaps a smile.
Yet “Field of Dreams” deserves better. Far from being seen only as a sweet and simple film about the power of dreams, perseverance and the human spirit, “Field of Dreams” needs to be viewed from a theological perspective. For the Catholic audience, this means seeing the film as a meditation upon the mystery of purgatory.
Before the movie adaptation in 1989, “Field of Dreams” was an interesting and compelling novel titled, “Shoeless Joe,” in reference to the famous ballplayer Joe Jackson, who was among the eight Chicago White Sox permanently ousted from baseball for his supposed role in throwing the 1919 World Series.
Of his book, author W.P. Kinsella has said, “I have to disappoint fans by telling them that I do not believe in the magic I write about. Though my characters hear voices, I do not. There are no gods; there is no magic. I may be a wizard though, for it takes a wizard to know there are none.”
Those reflections about his work represent precisely why the film adaptation of the novel is superior to what is indeed a fine book.
|
no
|
Poetry
|
Did Robert Frost's "The Road Not Taken" mean to celebrate individualism?
|
no_statement
|
"robert" frost's "the road not taken" did not "mean" to "celebrate" "individualism".. the purpose of "robert" frost's "the road not taken" was not to "celebrate" "individualism".
|
https://www.woot.com/blog/post/the-debunker-is-robert-frosts-the-road-not-taken-about-individualism
|
The Debunker: Is Robert Frost's "The Road Not Taken" about ...
|
The Debunker: Is Robert Frost's "The Road Not Taken" about Individualism?
by Ken Jennings
4 years ago
THE DEBUNKER April is National Poetry Month in the United States and Canada! Dreamed up in 1966 by the Academy of American Poets, National Poetry Month is a chance to celebrate poetry of all kinds and get the poetry-skeptical to read or write some of their own. But Ken Jennings from Jeopardy! is here this month to tell you that not everything you think you know about American poetry is historically accurate. Here's the poem he sent us for the occasion: "This is just to say / I have corrected the false poetry facts / that were in your brain / and which / you have probably / believed since high school / Forgive me / they were irresistible / so wrong / and so easy to Google."
The Debunker: Is Robert Frost's "The Road Not Taken" about Individualism?
Okay, first of all, it's "The Road Not Taken," not "The Road Less Traveled." Robert Frost's yearbook and dorm poster and graduation speech staple is, according to Google search metrics, the most famous poem of the 20th century by a wide margin. But, as the New York Times Book Review critic David Orr convincingly argued in a 2015 book, the poem probably doesn't mean what you think it means.
According to Lawrence Thompson's Pulitzer-winning biography of Robert Frost, "The Road Not Taken" was first sent by Frost to his friend, the British poet Edward Thomas, in 1915. The poem was a playful reference to the long walks Frost and Thomas used to take in the Gloucestershire countryside. Thomas apparently had the annoying habit of agonizing over forks in the path, and always lamenting the missed opportunities. Frost saw this as silly "grass is always greener" thinking, and wrote the poem to tease Thomas. Thomas didn't get the reference, and Frost had to write back several times explaining the joke.
Most modern readers, writes Orr, assume we are to admire the narrator for taking "the one less traveled by." What a rugged individualist, in Frost's New England mode! But now that we know the poem's origin, this reading is hard to support in the text. First, the narrator admits that the two paths had actually been worn "about the same"—any impression that one is more difficult or adventurous than the other is more of a private fancy than a factual report. The famous "And that has made all the difference" ending that so impresses high schoolers is meant to be ironic, according to Frost. He told Thomas the narrator's lament is "a mock sigh, hypocritical for the fun of the thing." The narrator here says he will one day paint this choice of roads as a weighty one, but that's just the romantic nature of hindsight. In reality, there was no right road or wrong road. There's just Edward Thomas, standing at a fork in the path and feeling sad that he "could not travel both."
Quick Quiz: Robert Frost probably reached his biggest audience when he wrote his poem "The Gift Outright" for what 1961 event?
|
The Debunker: Is Robert Frost's "The Road Not Taken" about Individualism?
by Ken Jennings
4 years ago
THE DEBUNKER April is National Poetry Month in the United States and Canada! Dreamed up in 1966 by the Academy of American Poets, National Poetry Month is a chance to celebrate poetry of all kinds and get the poetry-skeptical to read or write some of their own. But Ken Jennings from Jeopardy! is here this month to tell you that not everything you think you know about American poetry is historically accurate. Here's the poem he sent us for the occasion: "This is just to say / I have corrected the false poetry facts / that were in your brain / and which / you have probably / believed since high school / Forgive me / they were irresistible / so wrong / and so easy to Google. "
The Debunker: Is Robert Frost's "The Road Not Taken" about Individualism?
Okay, first of all, it's "The Road Not Taken," not "The Road Less Traveled." Robert Frost's yearbook and dorm poster and graduation speech staple is, according to Google search metrics, the most famous poem of the 20th century by a wide margin. But, as the New York Times Book Review critic David Orr convincingly argued in a 2015 book, the poem probably doesn't mean what you think it means.
According to Lawrence Thompson's Pulitzer-winning biography of Robert Frost, "The Road Not Taken" was first sent by Frost to his friend, the British poet Edward Thomas, in 1915. The poem was a playful reference to the long walks Frost and Thomas used to take in the Gloucestershire countryside. Thomas apparently had the annoying habit of agonizing over forks in the path, and always lamenting the missed opportunities. Frost saw this as silly "grass is always greener" thinking, and wrote the poem to tease Thomas. Thomas didn't get the reference, and Frost had to write back several times explaining the joke.
|
no
|
Poetry
|
Did Robert Frost's "The Road Not Taken" mean to celebrate individualism?
|
no_statement
|
"robert" frost's "the road not taken" did not "mean" to "celebrate" "individualism".. the purpose of "robert" frost's "the road not taken" was not to "celebrate" "individualism".
|
https://medium.com/inspired-writer/the-most-inspirational-poem-is-very-misunderstood-1139855bb48
|
The Most Inspirational Poem In American History Is Very ...
|
The Most Inspirational Poem In American History Is Very Misunderstood
“Everyone wants to look back and think that their choices matter.”
“Two roads diverged in a wood, and I — I took the one less traveled by, And that has made all the difference.” — Robert Frost, “The Road Not Taken”
The above words have been heard across English classes all over America, on inspirational posters everywhere. On job interviews, I can imagine people often say “I took the road less traveled by, and that has made all the difference” in response to huge life decisions. I can imagine thousands of college essays that cite these same words from Robert Frost. Scholar David Orr notes in The Paris Review that the poem has become “so ubiquitous…part of everything from coffee mugs to refrigerator magnets to graduation speeches.”
By taking the road less traveled by, we are unconventional. We took the high road and harder path. We signify ourselves as trailblazers. We are the ones who took control of our fates, our destinies and didn’t let anyone else decide for us.
That is a touching message. As I learned in my college Robert Frost class and in my personal Robert Frost studies, the only disappointing part is that the final lines of the poem are often taken out of context, and the poem is often woefully misunderstood.
I have written on “The Road Not Taken” before, but I have a different evaluation on why we misunderstand the poem. The part about taking the road less traveled by is only the last stanza in the poem. The first stanza of “The Road Not Taken” talks about a traveler “sorry I could not travel” both roads in a divergence in the woods. This is an indecisive man, who carefully studies and agonizes over both roads.
He looks down one road, then looks down the other. He realizes one road is “just as fair” as the other. He realizes both roads are “worn…really about the same.”
This context flies in the face of taking the “road less traveled by” if both roads are worn about the same and just as fair. They would have given him a similar journey regardless, but the narrator, in the third stanza, still agonizes. He thinks “I kept the first for another day,” thinking over and over to himself how things could have been…
|
The Most Inspirational Poem In American History Is Very Misunderstood
“Everyone wants to look back and think that their choices matter.”
“Two roads diverged in a wood, and I — I took the one less traveled by, And that has made all the difference.” — Robert Frost, “The Road Not Taken”
The above words have been heard across English classes all over America, on inspirational posters everywhere. On job interviews, I can imagine people often say “I took the road less traveled by, and that has made all the difference” in response to huge life decisions. I can imagine thousands of college essays that cite these same words from Robert Frost. Scholar David Orr notes in The Paris Review that the poem has become “so ubiquitous…part of everything from coffee mugs to refrigerator magnets to graduation speeches.”
By taking the road less traveled by, we are unconventional. We took the high road and harder path. We signify ourselves as trailblazers. We are the ones who took control of our fates, our destinies and didn’t let anyone else decide for us.
That is a touching message. As I learned in my college Robert Frost class and in my personal Robert Frost studies, the only disappointing part is that the final lines of the poem are often taken out of context, and the poem is often woefully misunderstood.
I have written on “The Road Not Taken” before, but I have a different evaluation on why we misunderstand the poem. The part about taking the road less traveled by is only the last stanza in the poem. The first stanza of “The Road Not Taken” talks about a traveler “sorry I could not travel” both roads in a divergence in the woods. This is an indecisive man, who carefully studies and agonizes over both roads.
He looks down one road, then looks down the other. He realizes one road is “just as fair” as the other. He realizes both roads are “worn…really about the same.”
This context flies in the face of taking the “road less traveled by” if both roads are worn about the same and just as fair.
|
no
|
Comics
|
Did Spiderman originally have organic web shooters?
|
yes_statement
|
"spiderman" "originally" had "organic" "web" "shooters".. the "original" version of "spiderman" featured "organic" "web" "shooters".
|
https://screenrant.com/tobey-maguire-spiderman-organic-webshooters/
|
Why Tobey Maguire's Spider-Man Has Organic Web Shooters
|
Why Tobey Maguire's Spider-Man Has Organic Web Shooters
Tobey Maguire’s iteration of Spider-Man was extremely accurate to the comic source material, but one of the few differences between the two was his version of Spider-Man's organic web-shooters. This is a marked difference from the two later versions of Spider-Man played by Tom Holland and Andrew Garfield, which restored the original comics' mechanical web-shooters. This led to a mostly humorous comparison when the three characters met up in Spider-Man: No Way Home, with Holland and Garfield's Spider-men being confused as to why Tobey Maguire can shoot webs without mechanical assistance.
For most of Spider-Man’s nearly 60 years of existence in comic books and various adaptations, the iconic superhero used a pair of wrist-mounted devices that shoot a synthetic web-like fluid, and Peter Parker invented the web-shooters and web fluid. While most Spider-Man adaptations retain the iconic mechanical web-shooters, Sam Raimi’s Spider-Man trilogy opted to give the character played by Tobey Maguire organic web-shooters, providing the power to shoot web from his body but otherwise functioning almost identically to the comic iteration’s devices. In both versions, Spider-Man shoots web for transportation and as his go-to non-lethal weapon against criminals, allowing him to entangle his foes without killing or injuring them. Here's why Sam Raimi decided to give Tobey Maguire organic webbing.
Realism Is Why Tobey Maguire Has Organic Web-Shooters
Sam Raimi’s first Spider-Man movie was a game-changer, raising the standards of superhero films in terms of verisimilitude and respect for the comic source material to a degree matched by few movies, even today. Prior to the late 1990s, Stan Lee had spent decades trying to find support for big-budget adaptations of Marvel's superhero characters, with little success. While Blade and X-Men proved that superhero films based on characters other than Superman or Batman could be successful, Spider-Man brought the genre to soaring new heights. Without the Spider-Man trilogy, the Marvel Cinematic Universe that’s dominated pop culture might not exist in the form that audiences are familiar with today. Much of the appeal of Raimi’s Spider-Man movies comes from their relative realism. While Peter and his adversaries have outlandish abilities and equipment, they’re all written as believable and naturalistic characters outside their costumed personas.
The Raimi Spider-Man movies made some notable adjustments to the comic source material, often for the sake of realism. Rather than being located in their own buildings, The Daily Bugle and Norman Osborn’s penthouse are situated in the Flatiron Building and Tudor City Complex, for instance. The Green Goblin’s iconic Halloween-themed costume is replaced with an armored bodysuit, tying it to Norman Osborn’s collaborations with the U.S. military. The decision to give Tobey Maguire organic web-shooters comes from this school of thought, and while it took some adjusting for fans of Spider-Man comics, it eventually made its way into various comic incarnations of the web-slinger.
Raimi's Spider-Man Almost Had Normal Web-Shooters
While Sam Raimi’s first Spider-Man movie was in development, Peter was originally intended to use mechanical web-shooters like his comic counterpart. Props for the film iteration of the devices were made and scenes of Peter practicing with his web-shooters were filmed. Several TV spots for Spider-Man include footage of Peter Parker testing his web-shooters out in his room, and the props were showcased by Activision at 2001’s E3 to promote the then-upcoming Raimi Spider-Man video game. Raimi’s web-shooters had a homemade and DIY quality to them, incorporating the additional web fluid cartridges on the wristbands and using a pair of small finger pads for activation. At some point, however, Raimi decided against mechanical web-shooters and went with the mutation as the explanation for Spider-Man shooting webs.
Sam Raimi had two reasons for why Tobey Maguire can shoot webs without a device. The first was that Peter’s spider bite altered his body, imbuing him with traits of the spider that bit him. This included proportionate strength and speed, as well as heightened reflexes and the ability to climb walls, but the comic version was missing the ability to create webs, the primary form of hunting for most spiders, so Raimi gave his version of Spider-Man organic webs. The second reason was that Raimi found it difficult to believe that a high school student, even a genius like Peter, would have the resources to build a device and synthetic webbing that no private company or government entity could make.
Spider-Man's Comic Book Organic Web-Shooters
While the mainstream comic version of Spider-Man had mechanical web-shooters for most of his history, several alternate universe iterations had organic web-shooters. One of the most well-known examples is Miguel O’Hara from Spider-Man 2099. With a significantly different origin and slightly different powerset than Peter Parker, O’Hara can generate webs from his wrists. The alternate universe version of Peter Parker in Spider-Man Noir also gained the ability to shoot webbing from his wrists, though the spider that bit him came from a mystical Spider Idol, giving his powers a magical origin.
In a 2004 story arc of Spectacular Spider-Man, however, the mainstream version of Peter Parker received the ability to generate webbing as well. Parker was mutated to have additional spider powers as a side effect of an encounter with The Queen (Ana Soria), including glands in his wrists that shot webbing with nearly identical properties to his mechanical web-shooters. The change was most likely made to coincide with the blockbuster Spider-Man films. Peter’s new powers didn’t last long, however, as the controversial One More Day Spider-Man storyline not only removed Peter and Mary Jane’s marriage (and the world’s knowledge of Spider-Man’s secret identity), but it also removed Peter’s additional mutations, including his organic web-shooters.
Why Maguire's Spider-Man Having Organic Web-Shooters Was So Controversial
The idea of giving Spider-Man organic web-shooters on film originated in James Cameron’s Spider-Man movie concept, which perhaps excessively used them as a metaphor for puberty. Comic fans in the early 2000s were not pleased with such a significant change from the source material and Cameron’s crude ideas. One fan was quoted as expressing disinterest in an additional mutation further alienating Peter Parker from his peers when his awkward demeanor already did so sufficiently. There was at least one website created that was dedicated to preventing the Spider-Man film from using the idea. Ultimately, the organic web-shooters were handled tastefully by Raimi, who used the concept to support his goal of emphasizing the natural evolution of Peter Parker.
The Organic Webbing Is Tobey Maguire Spider-Man's USP
Following Spider-Man: No Way Home, there are now three on-screen Spider-Men in Marvel's ever-growing cinematic multiverse. While there are no concrete plans to bring back Maguire's Spider-Man for another crossover or a possible Spider-Man 4, it's very possible given the financial success of No Way Home and Marvel's ongoing interest in multiverse stories. In this new multiverse, Tobey Maguire's Spider-Man having organic web shooters makes him distinct from other iterations of the character. Maguire's Peter Parker's organic powers are a sign of how he is portrayed as more of an ordinary, emotional young adult instead of a scientific whiz kid.
Raimi's Spider-Man movies provided the MCU with a template for relatable superheroes, but the success of the latter also accustomed audiences to more complex plots of the type found in long-running comic books. Whereas Raimi gave Peter Parker organic web-shooters in part to streamline the Spider-Man mythology, this was no longer necessary by the time Garfield and Holland's versions came along. However, there is still a generation of fans for whom Tobey Maguire is the definitive Spider-Man, and his organic web-shooters help to set him apart in Spider-Man: No Way Home and any future crossover projects.
|
So Controversial
The idea of giving Spider-Man organic web-shooters on film originated in James Cameron’s Spider-Man movie concept, which perhaps excessively used them as a metaphor for puberty. Comic fans in the early 2000s were not pleased with such a significant change from the source material and Cameron’s crude ideas. One fan was quoted as expressing disinterest in an additional mutation further alienating Peter Parker from his peers when his awkward demeanor already did so sufficiently. There was at least one website created that was dedicated to preventing the Spider-Man film from using the idea. Ultimately, the organic web-shooters were handled tastefully by Raimi, who used the concept to support his goal of emphasizing the natural evolution of Peter Parker.
The Organic Webbing Is Tobey Maguire Spider-Man's USP
Following Spider-Man: No Way Home, there are now three on-screen Spider-Men in Marvel's ever-growing cinematic multiverse. While there are no concrete plans to bring back Maguire's Spider-Man for another crossover or a possible Spider-Man 4, it's very possible given the financial success of No Way Home and Marvel's ongoing interest in multiverse stories. In this new multiverse, Tobey Maguire's Spider-Man having organic web shooters makes him distinct from other iterations of the character. Maguire's Peter Parker's organic powers are a sign of how he is portrayed as more of an ordinary, emotional young adult instead of a scientific whiz kid.
Raimi's Spider-Man movies provided the MCU with a template for relatable superheroes, but the success of the latter also accustomed audiences to more complex plots of the type found in long-running comic books. Whereas Raimi gave Peter Parker organic web-shooters in part to streamline the Spider-Man mythology, this was no longer necessary by the time Garfield and Holland's versions came along.
|
no
|
Comics
|
Did Spiderman originally have organic web shooters?
|
yes_statement
|
"spiderman" "originally" had "organic" "web" "shooters".. the "original" version of "spiderman" featured "organic" "web" "shooters".
|
https://movies.stackexchange.com/questions/8168/why-does-the-amazing-spider-man-not-have-the-natural-power-to-shoot-web
|
character - Why does the Amazing Spider-Man not have the natural ...
|
But the difference I found between this movie and the last one was that, in this movie, Spider-Man did not have his natural power to generate and shoot webs. Instead, he creates a wrist gadget to shoot web.
I would like to know if there is some logic from the producers behind this.
Well, I like the web shooters much better. It makes The Amazing Spider-Man more interesting. For one, they weren't bitten by the same spider, and they weren't bitten in the same place. The place the original Spider-Man was bitten was the hand, so he got to shoot webs from his hands. :D Hope I helped.
– user7741
Jan 23, 2014 at 23:24
The problem with the web shooters, and the thought process I got from the movie, is that they just make him have superhuman powers, not Spider-Man powers. Without the organic web he could have been any superhero. This is my opinion based on the cartoons and not the comics, because I never had access to comics, but I watched all the cartoons growing up as a kid.
– user8188
Feb 22, 2014 at 21:21
I for one think the web shooters are awesome. I mean, just the thought of an average teenage boy creating them goes to show how intelligent he is. I don't have a problem with the organic web, but the web shooters are way cooler. They raise some curiosity as to what Spider-Man would do if the web ran out and he was still engaged in a fight.
– user9125
Apr 16, 2014 at 11:31
Producing and secreting that much organic material that quickly puts a tremendous metabolic load on the body. (That's the reason why breast feeding is such a good way for a new mother to get her weight back down.) Organic webs would have required Peter to suddenly start eating a huge amount more than his previous diet in order to keep up with it. Ironically, the Andrew Garfield version of Peter Parker started eating a lot more, and the Tobey Maguire version did not.
First of all, Spider-Man didn't have organic webs at the start in the comics. He got organic webs later in the comic series.
In Sam Raimi's Spider-Man trilogy, he does not follow the comics story exactly: he skipped the artificial web-shooters and even main characters like Gwen Stacy. So, you can say that it's the director's/writer's choice which aspects they take from the original content.
Here is a description of his main powers, from his origin to now, in the main universe of Marvel Comics -
Original abilities
When Peter Parker was bitten by a lethally irradiated spider,
radioactive mutagenic enzymes in the spider's venom quickly caused
numerous body-wide changes. Immediately after the bite, he was granted
his original powers: primarily superhuman strength, reflexes, and
balance; the ability to cling tenaciously to most surfaces; and a
subconscious precognitive sense of danger, which he called a
"spider-sense."
Additional abilities
Spider-Man's web-shooters were perhaps his most distinguishing trait,
after his costume. Peter had reasoned that a spider (even a human one)
needed a web. Since the radioactive spider-bite did not initially
grant him the power to spin webs, he had instead found a way to
produce them artificially. The wrist-mounted devices fire an adhesive
"webbing".
Organic webs
In the "Disassembled" storyline Parker undergoes a transformation that
results in the ability to produce organic web fluid from his wrists,
and is able to fire his webbing in much the same manner as his
artificial web-shooters. According to the new 2007 Spider-Man
handbook, Parker has grown spinnerets in his forearms that terminate
in small pores at the junction of his wrists. By pressing down with
his middle fingers to his palm, he causes the pores to open and the
spinnerets to eject the organic fluid with a force equal to or greater
than that of his web-shooters.
By the way, Sam Raimi's script was inspired by James Cameron's scriptment, which took the idea of organic web-shooters from Stevens's failed script. Since 1985 there have been many scripts written for Spider-Man, but James Cameron's script got the most attention and became the basis of the 2002 film. [source]
But 2012's The Amazing Spider-Man follows a path closer to the comics, and they chose the artificial web-shooters for their movie to establish Peter Parker as a genius scientist.
The only thing I would add is that the invented web shooters go further to point out the intelligence of Peter Parker: he is not just a dude with spider abilities, he is a talented scientist/engineer himself.
@JoshuaDrake: I believe you're correct, and that focus is necessary in The Amazing Spiderman because Peter is given the Spiderman suit by Tony Stark, while also generally being portrayed as a somewhat bumbling character (to comical effect, more so than in the Raimi films). Without the web shooters, viewers might assume that Peter is unfit to be a (super)hero, other than having "lucked" into that radioactive spider bite.
...one key spider-like attribute has historically not come naturally to Spider-Man: the ability to create webs.
Instead, Spider-Man comes equipped with what are known as web-shooters, artificial devices that allow him to spin a web, any size.
Spider-Man's web-shooters are fairly ingenious devices of his own invention; as a brilliant but socially isolated student with a particular talent for science, Peter Parker came up with them to complement his newly-acquired spider powers.
Cameron had lobbied Carolco, the independent studio behind Terminator 2, to purchase the rights to the Spider-Man comics, which they did in 1990.
He wrote a Spider-Man scriptment for Carolco that was widely admired in Hollywood. The comic’s creator, Stan Lee, adored it and gave a Cameron-directed Spider-Man movie his hearty endorsement.
[...]
But Cameron made some thoughtful changes to the iconic character, starting with the Spider-Man’s wrist shooters. Lee’s comic called for Peter Parker to build them himself, but Cameron thought a biological explanation was more plausible.
“I had this problem that Peter Parker, boy genius, goes home and creates these wrist shooters that the DARPA labs would be happy to have created on a 20-year program. I said, wait a minute, he’s been bitten by a radioactive spider, it should change him fundamentally in a way that he can’t go back.”
[...]
Several elements of Cameron’s version made it into Sam Raimi’s take on the web-slinger. Specifically, the organic web-shooters.
Director Marc Webb says that when he took on the reboot project he wanted to go his own path, which meant breaking from the Raimi movies in places where it made sense — and when it came to the webbing he sought out some very specialized counsel.
“I had a meeting with Stan Lee and we talked about the web-shooters. I was curious about the incarnation of them [because] of course in the previous films [they went away from them] and we wanted to reestablish ourselves ... the other thing was the fact that the web-shooters were able to dramatize Peter’s intellect and I thought that was really cool. … To me, it’s something I remember from when I was a kid and thinking ‘It would be cool if I could build those.’”
@AnkitSharma - The problem with the Leslie Stevens script is that it changed Spider-Man's origin completely: the result is not the acquisition of spider-like powers, but, instead, a transformation into an eight-legged human-tarantula hybrid. With Peter Parker basically turning into an actual spider, the reason for organic web-shooters is therefore quite different from James Cameron's thought process. So I wouldn't say Cameron took the idea from Stevens. (Btw, the Stevens script was rejected by Stan Lee)
|
I for one think the web shooters are awesome. I mean, just the thought of an average teenage boy creating them goes to show how intelligent he is. I don't have a problem with the organic web, but the web shooters are way cooler. They raise some curiosity as to what Spider-Man would do if the web ran out and he was still engaged in a fight.
– user9125
Apr 16, 2014 at 11:31
Producing and secreting that much organic material that quickly puts a tremendous metabolic load on the body. (That's the reason why breast feeding is such a good way for a new mother to get her weight back down.) Organic webs would have required Peter to suddenly start eating a huge amount more than his previous diet in order to keep up with it. Ironically, the Andrew Garfield version of Peter Parker started eating a lot more, and the Tobey Maguire version did not.
First of all, Spider-Man didn't have organic webs at the start in the comics. He got organic webs later in the comic series.
In Sam Raimi's Spider-Man trilogy, he does not follow the comics story exactly: he skipped the artificial web-shooters and even main characters like Gwen Stacy. So, you can say that it's the director's/writer's choice which aspects they take from the original content.
Here is a description of his main powers, from his origin to now, in the main universe of Marvel Comics -
Original abilities
When Peter Parker was bitten by a lethally irradiated spider,
radioactive mutagenic enzymes in the spider's venom quickly caused
numerous body-wide changes. Immediately after the bite, he was granted
his original powers: primarily superhuman strength, reflexes, and
balance; the ability to cling tenaciously to most surfaces; and a
subconscious precognitive sense of danger, which he called a
"
|
no
|
Comics
|
Did Spiderman originally have organic web shooters?
|
yes_statement
|
"spiderman" "originally" had "organic" "web" "shooters".. the "original" version of "spiderman" featured "organic" "web" "shooters".
|
https://www.resetera.com/threads/thank-you-spider-man-no-way-home-for-confirming-what-i-have-been-arguing-for-two-decades-now-open-spoilers.529237/
|
Thank you Spider-Man No Way Home for confirming what I have ...
|
Always has been. Imagine inheriting the powers of [insert creature here] and not getting the thing it's known for. Imagine Zebraman having to paint on his stripes. He'd be laughed out of superhero school.
In the 90s Spider-Man cartoon it's explicitly stated that the spider bite somehow gave him the knowledge of how spider web worked and allowed him to make it. If the genius aspect of Peter bothers you, you can always think about it that way, lol
Also the reason he can't sell it is because it's just really strong glue that lasts an hour
Also I am already over this weird "The Andrew Garfield movies aren't fucking trash" retcon people seem to be trying to pull. He was great in No Way Home. So was Jamie Foxx. The one scene between them is better than both ASM movies combined.
It really doesn't. It makes him a clever engineer and a really smart chemist, but not a super genius. The actual formula for web fluid came with the rest of his powers. If someone without a strong chemistry background had gotten it, they wouldn't have been able to make the fluid, yes, but if it wasn't for the bite, neither would Peter.
The webshooters aren't that special, just something an average person couldn't build.
Also I am already over this weird "The Andrew Garfield movies aren't fucking trash" retcon people seem to be trying to pull. He was great in No Way Home. So was Jamie Foxx. The one scene between them is better than both ASM movies combined.
If there's anything 30 years of comic books being in pop culture, Dr Doom tooting horns, and Slayven's threads have taught me, it's that the more we move away from that tier of dumpster fire writing, the better.
At least the radioactive semen stuff was in its own What If non-canon story. The JMS stuff was in the main series and his attempt to basically recreate Spider-Man. Dude thought he was Alan Moore'ing it, and I guess he kind of was in a way since Marvel let him do that buck wild stuff, but it never did quite fit.
|
Always has been. Imagine inheriting the powers of [insert creature here] and not getting the thing it's known for. Imagine Zebraman having to paint on his stripes. He'd be laughed out of superhero school.
In the 90s Spider-Man cartoon it's explicitly stated that the spider bite somehow gave him the knowledge of how spider web worked and allowed him to make it. If the genius aspect of Peter bothers you, you can always think about it that way, lol
Also the reason he can't sell it is because it's just really strong glue that lasts an hour
Also I am already over this weird "The Andrew Garfield movies aren't fucking trash" retcon people seem to be trying to pull. He was great in No Way Home. So was Jamie Foxx. The one scene between them is better than both ASM movies combined.
It really doesn't. It makes him a clever engineer and a really smart chemist, but not a super genius. The actual formula for web fluid came with the rest of his powers. If someone without a strong chemistry background had gotten it, they wouldn't have been able to make the fluid, yes, but if it wasn't for the bite, neither would Peter.
The webshooters aren't that special, just something an average person couldn't build.
Also I am already over this weird "The Andrew Garfield movies aren't fucking trash" retcon people seem to be trying to pull. He was great in No Way Home. So was Jamie Foxx. The one scene between them is better than both ASM movies combined.
If there's anything 30 years of comic books being in pop culture, Dr Doom tooting horns, and Slayven's threads have taught me, it's that the more we move away from that tier of dumpster fire writing, the better.
|
no
|
Comics
|
Did Spiderman originally have organic web shooters?
|
yes_statement
|
"spiderman" "originally" had "organic" "web" "shooters".. the "original" version of "spiderman" featured "organic" "web" "shooters".
|
https://scifi.stackexchange.com/questions/195819/does-spider-man-s-webbing-dissolve-in-raimi-s-spider-man-trilogy
|
marvel - Does Spider-Man's webbing dissolve in Raimi's Spider ...
|
Classically, Peter Parker designed his webbing so that it would dissolve after an hour or so, but before then would be stronger than steel. This was done so that cops arriving on the scene could arrest the evil-doer and put them away. It additionally made it impossible for someone to collect a sample and analyze it in a lab unless they were really quick.
Raimi’s Spider-Man, however, naturally produces his own webbing. I am not a biologist or a zoologist, only a humble chemist, but I’m pretty certain that spider webs in nature do not naturally dissolve away within a day or more. Given that, I would think that this might cause difficulty in arrests. Did his webs dissolve in the trilogy, or are there pieces of webbing hanging in the wind attached to buildings all over Manhattan?
Nice question, though apparently it has been asked before; the linked dupe is asking about the organic webs. The answers aren't exactly great for the films, though, and go off on a tangent with the comics as source material. This seems like a case for a bounty.
“I am not a biologist or a zoologist, only a humble chemist” — Well there’s your problem, clearly not enough research effort. You want to ask questions on Scifi.SE? Then you come correct! Go get those two extra degrees, then we can talk.
3 Answers
I've had a look around for an answer specific to Raimi's trilogy for a while and can't find anything. However, to coincide with Raimi's trilogy and its change to give Peter organic webs instead of web shooters, the comics made that same change. To cut a long story short, Peter is turned into a spider by the Queen, is reborn from the spider, and from then on his abilities were "replaced" and he uses organic webbing. He has been back to web shooters since the "One More Day" arc, though.
However, during his time with organic webs in the comics Marvel released Official Handbook of the Marvel Universe Spider-Man: Back in Black which had details of his new abilities. In the Spider-Man Update section and the abilities/accessories panel is the following quote which states his organic webs last for a week before decomposing.
Since the Queen's transformation, Parker can produce silk from glands within his forearms, limited by his body's health and nutrition. These organic webs have many similar properties to the artificial webbing though they require a week to decay rather than decomposing within two hours.
Considering we don't see lots of webbing hanging around in Raimi's trilogy and it is never mentioned as a plot point I'd imagine that the organic webbing either follows the normal 1-2 hours of the shooters or the updated week we see in the comics.
It is also worth noting that the Raimi trilogy was originally going to use web shooters, and the prototypes were even displayed at E3. As this was the case, it could be that they were going for the normal 1-2 hours and never changed it when they switched to organic webbing.
There are over 100 different versions of Spiderman across movies and comics. I don't see how comparing one comic version with the one movie version on the basis that, out of the 100+ versions, they both have organic web shooters, means that what happens in one universe applies to the other.
@Astralbee This was main comic continuity to the main movie universe at the time and the move to organic was because of the movies. Of course it doesn't have to relate and I say as such in my answer. However, as they are the only two universes with organic webs it makes sense there would be some relation.
I'll start this with pointing out a difference between the semantics of "web" and "cobweb". The difference implies that a "web" is maintained, and a "cobweb" is abandoned. Comparing the two, you can notice a difference in both strength and stickiness. I personally have experienced webs that were surprisingly resilient to being broken - similar to plucking a leaf from a tree almost. Whereas entire cobwebs can be brushed away by hand. Additionally, while a cobweb might "stick" to you when you walk through one, you can simply brush it off, whereas with a web, you do need to actually pull it off. The adhesiveness is caused by fluid coating the web in places, which would "dry up" over time and lose its adhesiveness.
Now, in the comic series, Spider-Man is mostly known for his mechanical web shooter, and his special formula for its rapid disintegration. However, after constantly having to deal with running out of ammo, Peter does actually develop organic shooters. Since this was actually developed after the initial creation of the mechanical web shooters, it likely follows a similar design, i.e. the quick disintegration style.
However, in the movies, Peter starts with this power. There is very little talk about this, other than the constant debate about whether or not it was a good idea, so there is little else other than assumption that can be made here. That said, it can be explained with several assumptions:
It acts exactly the same way as normal spider webs. After a time, it simply disintegrates on its own. Out in the streets of New York, the wind, rain, and general weather would likely take its toll on the web. And since Peter is not overly involved in creating large webs on a regular basis, the few strands of web he uses to swing around on are likely to get blown away and eventually degrade over time anyway.
It acts the same way as the manufactured web works. Since it's stated that Peter got all of his abilities from the spider that bit him - his physique, his enhanced reflexes, his fixed eyesight, and his web - it could be implied that the organic web does not behave like natural spider web, but instead works like the manufactured web, disintegrating quickly over the space of a couple of hours.
The Sam Raimi series does have a few inconsistencies with the original ideal of what Spider-Man's abilities are, as well as with how his enemies work. Doc Ock, for example, doesn't work in the comics the way he does in the movies: Otto Octavius is obsessed with proving himself superior to Spider-Man, not affected by a rogue AI in his nervous system whispering to him like the devil on his shoulder.
So it can be sort of "hand-waved" that the web works in such a way that it is not a menace to the city's janitorial agency.
I'm sure you know this much - that Stan Lee (and whoever else was involved, but let's not get into that) made the decision to give him web-shooters rather than have him inherit the web-spinning powers of an actual spider, because it would have been "disgusting" to have him produce webbing from glands on his abdomen. A good choice.
Raimi made the reverse decision, apparently citing the improbability that a teenager, no matter how smart, would have the knowledge and means to create such a gadget (of course, the reboot with Andrew Garfield got around this problem by having him steal the tech from Oscorp). The webs (thankfully) still came out of his wrists though.
So, my answer is that, as Raimi's Spider-Man gained the power of web-shooting from the spider, along with his other spider-like attributes, we have to assume it would behave like a spider's web would. According to the National Wildlife Federation:
Real spiders produce several types of webs—some that are not sticky but serve as a superstructure for webs, some that are sticky and capture prey, some used for wrapping up prey in neat little packages... Some smaller spiders produce gossamer web, used as a sort of sail that catches the wind and can carry a spider far and wide, which probably explains in part why spiders are found almost everywhere in the world.
So real spiders can create different types of webbing, some of which is stronger than others. It actually makes more sense that an organic web-shooting ability would allow for this, meaning Raimi's Spidey could spin webbing that would deliberately weaken and deteriorate naturally over time, unless of course the "genetically modified super-spider" (Raimi also realised that in this day and age we all know radioactivity doesn't grant things superpowers) was deliberately engineered to be different to a normal spider in this way.
@Valorum He inherited "spider-like" abilities, right? Spiders climb walls, spin webs, have a "spider sense" (the hairs that detect low-level vibrations) and have a strength disproportionate to their size. Spidey's powers may differ slightly but he didn't inherit anything that wasn't from a spider.
@Valorum I don't think spiderman uses "magic static" to cling to walls - if he has in any iteration of the character then that is bad writing. I'm sure that Raimi's Spiderman has hairs, or barbs that come out of his fingers. As for sensing the future - no, the "sense" that spiders have is just the ability to pick up vibrations. But just as you hear a car horn and that tells you a car is coming before you've seen it, the glimpses of the future that some iterations of Spiderman "see" are just a visualisation of his acute senses.
|
@Astralbee This was main comic continuity to the main movie universe at the time and the move to organic was because of the movies. Of course it doesn't have to relate and I say as such in my answer. However, as they are the only two universes with organic webs it makes sense there would be some relation.
I'll start this with pointing out a difference between the semantics of "web" and "cobweb". The difference implies that a "web" is maintained, and a "cobweb" is abandoned. Comparing the two, you can notice a difference in both strength and stickiness. I personally have experienced webs that were surprisingly resilient to being broken - similar to plucking a leaf from a tree almost. Whereas entire cobwebs can be brushed away by hand. Additionally, while a cobweb might "stick" to you when you walk through one, you can simply brush it off, whereas with a web, you do need to actually pull it off. The adhesiveness is caused by fluid coating the web in places, which would "dry up" over time and lose its adhesiveness.
Now, in the comic series, Spider-Man is mostly known for his mechanical web shooter, and his special formula for its rapid disintegration. However, after constantly having to deal with running out of ammo, Peter does actually develop organic shooters. Since this was actually developed after the initial creation of the mechanical web shooters, it likely follows a similar design, i.e. the quick disintegration style.
However, in the movies, Peter starts with this power. There is very little talk about this, other than the constant debate about whether or not it was a good idea, so there is little else other than assumption that can be made here. That said, it can be explained with several assumptions:
It acts exactly the same way as normal spider webs. After a time, it simply disintegrates on its own. Out in the streets of New York, the wind, rain, and general weather would likely take its toll on the web.
|
no
|
Comics
|
Did Spiderman originally have organic web shooters?
|
yes_statement
|
"spiderman" "originally" had "organic" "web" "shooters".. the "original" version of "spiderman" featured "organic" "web" "shooters".
|
https://www.cbr.com/spidermans-webbing/
|
Thwip/Tuck: 20 Things Webheads Never Knew About Spiderman's ...
|
Thwip/Tuck: 20 Things Webheads Never Knew About Spiderman’s Webbing
Can Spidey really spin a web any size? Find out the answer and more as we look at some impressive facts about Spider-Man's webbing.
He spins a web, any size, and catches thieves just like flies. You know who we're talking about, our friendly neighborhood Spider-Man! The wall-crawler has been shooting webs and fighting crime since the '60s and we just can't get enough of him. With his incredible strength, superior Spidey-Sense, and amazing intellect, he's got everything a superhero could ever want. His red and blue (and sometimes black) attire and mask have been a symbol of Marvel Comics since the earliest days of publication. Spidey has a lot of elements and tricks that get him to stand out from the rest of the comic book crew, but we're here today to talk about what puts the web in web-head.
Iron-Man has his arc-reactor, Thor has his hammer, Doctor Strange has his cloak, and Captain America has his shield. What's Spider-Man without his web? This iconic tool of the trade is Spidey's bread and butter; it can be a weapon, a trap, a safety net, and even a mode of transportation around the New York skyline. It's one of those iconic superheroic assets that never leaves the character no matter what interpretation. But how much is truly known about Spidey's weapon of choice? How strong is it? How long does it last? Do trash men find bits and threads of it lying about the city? We'll answer all those questions and more as we swing straight into twenty fantastic facts about Spider-Man's webbing.
20 A PARKER ORIGINAL
With all the re-imaginings and adaptations of the character, certain origins can be lost in the pages. Newer movies, graphic novels, and comic book runs are always giving new spins on the material, and Spidey is no exception. It's quite a surprise, but some fans have forgotten that the web-shooters were created and designed by Peter Parker himself.
Originally, Peter Parker designed the shooters and the fluid to give his alter-ego the web-swinging ability of an acrobatic arachnid. He's responsible for the mechanics, design, and concept, and all by himself to boot. Though later interpretations would see him swipe the technology from Oscorp, develop it at school, or acquire it elsewhere, the most well-known weapons of webbing are the Parker originals.
19 MENTAL THREADS
When it comes to superhero alter-egos, what comes to mind? Is it Bruce Wayne, Gotham's billionaire playboy? Could it be Clark Kent, mild-mannered reporter for the Daily Planet? Or maybe it's Dr. Bruce Banner, a scientist hiding a green-eyed secret. Something tells us Peter Parker, a student at Midtown High School, doesn't exactly sound like a dashing alter-ego for a swinging superhero.
That being said, Stan Lee wanted Peter to be a relatable character, but also show off more of an intellect than money or reputation. That's also why he had him design the web-shooters himself, to show the readers a superhero with a skilled mind and intellect. After all, aren't brains a bit better than brawn alone?
18 ALL IN THE WRIST
Whereas the comic books and later adaptations had Spider-Man's web-shooters be a device or some form of experimental weaponry, in the 2002 film Sam Raimi had a different idea in mind for his spidery spinnerets. Let's just say the Spider-Suit wasn't the only thing skin tight.
Unlike most interpretations, Sam Raimi's Spider-Man had organic web glands in his wrists, cutting out the web-shooters entirely. True, this version made Peter more spider-like in concept, but we can't help but feel there's a certain something missing from our friendly neighborhood Spider-Man. Still, there's the lessened chance of running out of webbing, right?
17 STAYING LOADED
Alright, so we know what makes the webbing, but what exactly is the webbing? The shooters are loaded with cartridges of a special pressurized fluid that creates the webbing when shot out of the mechanism. Think of something akin to an aerosol rifle, only much smaller. Spidey needs something he can conceal, after all.
Though the exact chemical compounds are not actually known, they've been getting Spider-Man around New York City for decades. Since the original formula, Spidey has improved and innovated his webbing, proving that it's always evolving, just like the hero who created it!
16 SURFING THE WEB
The image of Spider-Man swinging through New York City is as iconic as Batman looming over Gotham or Superman flying above Metropolis. However, how is it that his web is able to rocket him up the sides of humongous skyscrapers? Easy -- it's all in the cartridge!
Spider-Man's web is pressurized at around 300psi, and this amount of force allows the web to shoot in a strand up to 60 feet. This is perhaps the reason why Spidey is able to swing from rooftops and gain altitude to pose on the top of the Empire State Building. For such a tiny cartridge, his shooters really do take him places.
15 TAKE YOUR PICK
Another fascinating feature of the web-shooters, Spider-Man has the ability to change the consistency and shape of the web thanks to a special nozzle. His typical webs come in three different shapes. He can shoot a steady stream to swing from, a spray to make webs, and a short blast, more commonly known as the web-ball.
These three make up the base form of Spidey's attacks. We've all seen him swing from a rope, bind bad-guys on the run with a net, and anyone who has played Marvel Vs. Capcom is familiar with his pesky projectile attack. Either way, Spidey, even with the basics, is still well-armed and ready for a fight.
14 A WEB FOR ANY OCCASION
Not only can the webbing be shot out in three different patterns, but it can be molded and shaped to meet any need. It's stretchable and malleable. Whether it's used to bind up some unsightly criminals in a pair of instant handcuffs or jam the Green Goblin's Glider with a large web-bomb, the webbing can fit almost any need.
Not only is this skill useful in the heat of battle, it also has practical uses as well. Spider-Man has used his web for a child's swing, a hammock, a way to hold onto his pizza deliveries, and even as a pair of underwear -- no joke! The uses for his webbing are limited only by his imagination.
13 SEA-WORTHY
Spidey's webs are not easy to break, but that factor also has some impressive perks. One of the least known but most notable features of this strong webbing is the fact it is not water soluble. That's right, these webs are waterproof.
They float, they stay strong, and they retain their stickiness, but not only that -- used the right way, Spider-Man has actually walked on water. Granted, it's not a skill he gets to use often, but the fact remains true. It's easy to forget just how incredibly durable his webbing can be. It definitely makes the boat scene from Homecoming a bit more believable.
12 WITH GREAT POWER
Spider-Man's Webs have come a long way from being just stereotypically sticky. Along with an adhesive function, Peter has upgraded his cartridges to be much more than flypaper for bad guys. With the many innovations and technological advancements he's given his webs, Tony Stark would be proud.
The webs have been anything and everything from ice-powered, electromagnetic, and corrosively acidic. Spidey literally has a web for any disaster, any need, and any supervillain. Now that we've seen the Iron-Spider, we're hoping these upgrades meet the MCU soon.
11 PACKING HEAT
Though it's in an extended universe, we definitely want to include this more mercenary-flavored version of the wall-crawler. On Earth-8351, Spider-Man goes head to head with Wolverine in sort of a gun-for-hire partnership. Trust us, we do mean "gun-for-hire." What looks like a strange hybrid of Deadpool and the Red Hood walking around in Spidey's suit certainly throws us all for a loop in this reimagining.
Though this Spider-Man does not carry a gun, he has modified his web-shooters to be literal wrist-rockets and fire actual bullets! The sudden use of an automatic weapon even takes Wolverine by surprise, as it is certainly more lethal than adhesive webbing. This is definitely not your friendly neighborhood Spider-Man.
10 WILD WILD WEBS
What do you get if you mix Clint Eastwood, The Lone Ranger, and Spider-Man together? You get the Web-Slinger, of course. This spider-themed legend of the west is one of the stranger adaptations in the Spider-Verse, but certainly one of the more creative ones. A westernized version of our favorite web-head, this Spidey isn't swinging from rooftops, but packing a pair of pistols.
Though this version does carry a gun, he doesn't use bullets like our previous entry. Instead, he opts for a webbing-based projectile to pop from his pistol. Perhaps the stickiest gun in the west, this version of Spider-Man definitely has some cool tools of the trade.
9 SIMULATED SPIDER SILK
One would think that with as much webbing as Spider-Man uses, he'd certainly leave a mess for the public works of New York City to clean up. You would think that, but you'd be wrong. For a comic book character, Spider-Man gets realistic in the most unexpected of places.
Much like a real spider's web, Spidey's webbing is only temporary. Usually, it dissolves in a couple of hours and apparently often leaves a rather unpleasant odor. It makes for a clean getaway, and with no evidence to trace back to Peter Parker. It's smart, simple, and effective.
8 SEEN BETWEEN THE TOWERS
When the 2002 Spider-Man film was first teased, one of the iconic pieces was to have been a giant spiderweb between the World Trade Center Towers with a helicopter caught between them. This was part of an early teaser ad for the Sam Raimi film, but after the 9/11 attacks the teaser was pulled, and it now remains a piece of haunting history.
The teaser can still be found on YouTube today, but the image is positively bone-chilling. At one time, the sight of the web on the New York Skyline was one of the most awe-inspiring things a young Spider-Fan could ever hope to see. This might have been an iconic piece, but unfortunately, it only sparks thoughts of the tragic event.
7 ONCE, TWICE, THREE TIMES A SPIDEY
Spider-Man has certainly gone through the wringer when it comes to movies. From Tobey Maguire to Tom Holland, each film adaptation has had its own version of his web-swinging abilities, but only one has been the most accurate. It's not the organic version of the Sam Raimi film, nor the spy-tech version from Sony, but the homemade spider-tech from the MCU.
If there's one thing the MCU got right, it's the adaptation of Spider-Man. The version seen in Civil War and Homecoming really delivers on the original concept of the hero. You can tell from his hoodie and augmented mask that this was clearly something he made at home. It's pretty impressive considering it was the Disney-owned version that actually got the character right.
6 DROPPING A LINE
Admit it, we've all wondered what it would feel like to shoot a web and swing through the city like our favorite acrobatic arachnid. So how does Spidey's web maintain that hold and support his weight? Does his web-shooter make a rope or is there a thickness setting on the nozzle? The answer is actually quite ingenious.
Spider-Man's web-rope isn't actually a rope at all, but a series of small threads bound together to strengthen the web. Think of (or look up) a piece of steel cable in a theatre's stage rigging. The hundreds of threads wound tightly together create a line that easily supports a swinging Spider-Man. It's impressive how even the smallest details can affect a character's powers.
5 HULK HALTING
The Incredible Hulk is Marvel's heaviest hitter. The big-green-rage-machine is certainly a tough customer for any Marvel hero, but a certain few have stopped him in his tracks, Spider-Man included. Though he lacks the hard-hitting machine power of Tony Stark's behemoth Hulk-Buster armor, he has been able to, albeit briefly, hit pause on some of Banner's tantrums.
Spidey's webbing is indeed strong enough to tie up this monster, even if only for a while. Enough webbing can certainly weigh him down, and we're not even talking about his higher upgrades. It seems that even the Hulk has trouble getting out of sticky situations.
4 JUST THE RIGHT TOUCH
Anyone who's ever imitated Spider-Man or dressed up as him for Halloween or a comic convention knows how to pose the fingers for his web-shooters. It's middle and ring finger down, rest are splayed out, pretty simple right? As surprising as it may seem, there's actually a bit more thought to web-blasting than that.
The web trigger is actually operated by pressing the two middle fingers downward and pushing the palm towards them. This is the only way the shooters are able to fire; otherwise, the shooters would fire every time Spider-Man makes a fist. Like the web threads, it's the little details that make these gadgets fantastic and fascinating.
3 WEB FOR SALE
Here's one piece of info that even took us by surprise -- did you know Peter wanted to sell the web fluid? In the early development stages of his homemade web formula, he originally wanted to sell the mixture as a sort of adhesive glue. Unfortunately for the money-strapped high school student, the chemical's glue-like properties were only temporary, and who needs a temporary glue?
Thankfully, the formula was improved and made into a stronger compound for Spidey's daily swinging sessions. It may not be the most exciting of origin stories, but it certainly raises an eyebrow or two. Who knew Peter would have been desperate enough to sell such a game-changing substance?
2 GOTTA CALL THE DOC
Peter Parker isn't the only one to dip his hand in the web-making game. In the Superior Spider-Man series, Peter Parker passes away, but his body is taken over by a rather unexpected ally. The new Spider-Man was not Peter, but a (somewhat) reformed Doc Ock. Seeking penance for his wicked ways, the "good" doctor takes on a new life as Peter Parker, as well as improving his alter ego.
Doc Ock lends his scientific genius and expertise to improve his new persona, including giving the new Spider-Suit a set of mechanical spider legs and some new applications to his web fluid, like activating a solvent that will dissolve the stuff. The suit also came with a generally new and improved web formula, making it stronger, longer lasting, and better than before.
1 IT CAN SNAP
It wasn't the fall that ended Gwen Stacy, it was the catch. In perhaps the darkest, most tragic moment in Spider-Man's history, Gwen Stacy falls to her doom during a battle between Spider-Man and the Green Goblin in a clock tower. When the tower falls and Stacy goes down the shaft, Spider-Man, even with his agility, is too late to save her.
He tries to catch her with a shot of web, but the force of the fall is too great. Though he catches her before she hits the ground, there is an audible snap, and Gwen perishes as a result. Though many tie her demise to the hands of the Goblin, the true culprits were a poorly timed web-shot and an unforgiving force of gravity.
|
He's responsible for the mechanics, design, and concept, and all by himself to boot. Though later interpretations would see him swipe the technology from Oscorp, develop it at school, or acquire it elsewhere, the most well-known weapons of webbing are the Parker originals.
19 MENTAL THREADS
When it comes to superhero alter-egos, what comes to mind? Is it Bruce Wayne, Gotham's billionaire playboy? Could it be Clark Kent, mild-mannered reporter for the Daily Planet? Or maybe it's Dr. Bruce Banner, a scientist hiding a green-eyed secret. Something tells us Peter Parker, a student at Midtown High School, doesn't exactly sound like a dashing alter-ego for a swinging superhero.
That being said, Stan Lee wanted Peter to be a relatable character, but also show off more of an intellect than money or reputation. That's also why he had him design the web-shooters himself, to show the readers a superhero with a skilled mind and intellect. After all, aren't brains a bit better than brawn alone?
18 ALL IN THE WRIST
Whereas the comic books and later adaptations had Spider-Man's web-shooters be a device or some form of experimental weaponry, in the 2002 film Sam Raimi had a different idea in mind for his spidery spinnerets. Let's just say the Spider-Suit wasn't the only thing skin tight.
Unlike most interpretations, Sam Raimi's Spider-Man had organic web glands in his wrists, cutting out the web-shooters entirely. True, this version made Peter more spider-like in concept, but we can't help but feel there's a certain something missing from our friendly neighborhood Spider-Man. Still, there's the lessened chance of running out of webbing, right?
17 STAYING LOADED
Alright, so we know what makes the webbing, but what exactly is the webbing?
|
no
|
Animation
|
Did Walt Disney create Mickey Mouse?
|
yes_statement
|
"walt" disney "created" mickey mouse.. mickey mouse was "created" by "walt" disney.
|
https://www.waltdisney.org/blog/birth-mouse
|
The Birth of a Mouse | The Walt Disney Family Museum
|
The Birth of a Mouse
“He popped out of my mind onto a drawing pad 20 years ago on a train ride from Manhattan to Hollywood at a time when the business fortunes of my brother Roy and myself were at lowest ebb, and disaster seemed right around the corner,” Walt penned in a 1948 essay titled “What Mickey Means to Me.” The disaster Walt mentioned was the brazen theft of both his successful cartoon character Oswald the Lucky Rabbit, as well as most of the Disney artists, at the hands of Universal distributor Charles Mintz. As for who popped out of Walt’s mind? Why, that was Mickey Mouse!
Just before Walt left New York for the cross-country train ride back to Hollywood, he sent his brother Roy a telegram. Nowhere in it did he outline the possible career-ending blow he and his brother had just sustained. He simply indicated when he would arrive home, and took care to add, “Don’t worry everything OK,” to ease his brother’s nerves. Everything was not okay. Walt knew he had to come up with a new character, and fast. Walt’s daughter Diane Disney Miller recalled, “It was on that long train ride that dad conceived of a new cartoon subject, a mouse who was then refined and further developed by Ub Iwerks, and given his name by my mother.”
The first Mickey Mouse cartoon actually completed was Plane Crazy. Inspired by Charles Lindbergh’s heroic first solo flight across the Atlantic, its plot entailed Mickey and some animal friends attempting to assemble their own airplane. The cartoon premiered in Hollywood on May 15, 1928, in the form of a test screening. It failed to obtain distribution. The second Mickey Mouse cartoon, The Gallopin’ Gaucho, met with the same fate. One unpleasant distributor even told Walt, “They don’t know you and they don’t know your mouse.”
The third time was the charm for Mickey, however, when Steamboat Willie premiered on November 18, 1928, in New York's Colony Theatre. It was one of the very first cartoons to ever successfully utilize synchronized sound, and was so popular that it was talked about more than the feature film it was meant to just complement. Walt received $1,000 for a two-week run—the highest sum ever paid for a cartoon on Broadway. Walt Disney Studios, with its small but loyal staff, was saved, and a cartoon star was born.
But, when was he born?
Oddly enough, Mickey’s “official” birthday changed dates seemingly every year for decades following 1928. In 1933, Walt himself proclaimed, "Mickey Mouse will be five years old on Sunday. He was born on October 1, 1928. That was the date on which his first picture was started, so we have allowed him to claim this day as his birthday." That date wouldn’t last. Ranging from late September to December, Mickey’s birthday was often altered to conform to specific promotions. It wasn’t until 1978 that Dave Smith, the founder of the Disney Archives, determined that the premiere of Steamboat Willie was truly Mickey Mouse’s first public appearance, therefore his date of birth.
This of course makes November 18, 1928 Minnie Mouse’s birthday too, as she was there hurrying along the banks of the river trying to catch Pegleg Pete’s steamboat. Ever resourceful, Mickey found a way to get her aboard even after the boat had departed. The two realized an instant connection, and the rest, as they say, is history.
Happy birthday Minnie. And happy birthday, Mickey!
Keith Gluck is a WDFM volunteer, writer/editor for thedisneyproject.com, a Disney fan site. His Disney life started early, visiting Disneyland before turning one, and writing his very first book report on a Walt Disney biography for kids.
|
The Birth of a Mouse
“He popped out of my mind onto a drawing pad 20 years ago on a train ride from Manhattan to Hollywood at a time when the business fortunes of my brother Roy and myself were at lowest ebb, and disaster seemed right around the corner,” Walt penned in a 1948 essay titled “What Mickey Means to Me.” The disaster Walt mentioned was the brazen theft of both his successful cartoon character Oswald the Lucky Rabbit, as well as most of the Disney artists, at the hands of Universal distributor Charles Mintz. As for who popped out of Walt’s mind? Why, that was Mickey Mouse!
Just before Walt left New York for the cross-country train ride back to Hollywood, he sent his brother Roy a telegram. Nowhere in it did he outline the possible career-ending blow he and his brother had just sustained. He simply indicated when he would arrive home, and took care to add, “Don’t worry everything OK,” to ease his brother’s nerves. Everything was not okay. Walt knew he had to come up with a new character, and fast. Walt’s daughter Diane Disney Miller recalled, “It was on that long train ride that dad conceived of a new cartoon subject, a mouse who was then refined and further developed by Ub Iwerks, and given his name by my mother.”
The first Mickey Mouse cartoon actually completed was Plane Crazy. Inspired by Charles Lindbergh’s heroic first solo flight across the Atlantic, its plot entailed Mickey and some animal friends attempting to assemble their own airplane. The cartoon premiered in Hollywood on May 15, 1928, in the form of a test screening. It failed to obtain distribution. The second Mickey Mouse cartoon, The Gallopin’ Gaucho, met with the same fate. One unpleasant distributor even told Walt, “They don’t know you and they don’t know your mouse.”
The third time was the charm for Mickey, however, when Steamboat Willie premiered on November 18, 1928, in New York’s Colony Theatre.
|
yes
|
Animation
|
Did Walt Disney create Mickey Mouse?
|
yes_statement
|
"walt" disney "created" mickey mouse.. mickey mouse was "created" by "walt" disney.
|
https://en.wikipedia.org/wiki/Mickey_Mouse
|
Mickey Mouse - Wikipedia
|
Mickey Mouse is an American cartoon character co-created in 1928 by Walt Disney and Ub Iwerks. The longtime icon and mascot of The Walt Disney Company, Mickey is an anthropomorphic mouse who typically wears red shorts, large yellow shoes, and white gloves. Inspired by such silent film personalities as Charlie Chaplin and Douglas Fairbanks, Mickey is traditionally characterized as a sympathetic underdog who gets by on pluck and ingenuity in the face of challenges bigger than himself.[2] The character’s depiction as a small mouse is personified through his diminutive stature and falsetto voice, the latter of which was originally provided by Disney. Mickey is one of the world's most recognizable and universally acclaimed fictional characters of all time.
Origin
Mickey Mouse was created as a replacement for Oswald the Lucky Rabbit, an earlier cartoon character that was created by the Disney studio but owned by Universal Pictures.[3] Charles Mintz served as a middleman producer between Disney and Universal through his company, Winkler Pictures, for the series of cartoons starring Oswald. Ongoing conflicts between Disney and Mintz and the revelation that several animators from the Disney studio would eventually leave to work for Mintz's company ultimately resulted in Disney cutting ties with Oswald. Among the few people who stayed at the Disney studio were animator Ub Iwerks, apprentice artist Les Clark, and Wilfred Jackson. On his train ride home from New York, Walt brainstormed ideas for a new cartoon character.
Mickey Mouse was conceived in secret while Disney produced the final Oswald cartoons he contractually owed Mintz. Disney asked Ub Iwerks to start drawing up new character ideas. Iwerks tried sketches of various animals, such as dogs and cats, but none of these appealed to Disney. A female cow and male horse were also rejected. (They would later turn up as Clarabelle Cow and Horace Horsecollar.) A male frog was also rejected, which later showed up in Iwerks' own Flip the Frog series.[4] Walt Disney got the inspiration for Mickey Mouse from a tame mouse at his desk at Laugh-O-Gram Studio in Kansas City, Missouri.[5] In 1925, Hugh Harman drew some sketches of mice around a photograph of Walt Disney. These inspired Ub Iwerks to create a new mouse character for Disney.[4]
Name
"Mortimer Mouse" had been Disney's original name for the character before his wife, Lillian, convinced him to change it.[6][7] Actor Mickey Rooney claimed that during his time performing as the title character of the Mickey McGuire film series (1927–1934), he met Walt Disney at the Warner Bros. studio, inspiring Disney to name the character after him.[8] Disney historian Jim Korkis argues that Rooney's story is fictional, as Disney Studios was located on Hyperion Avenue at the time of Mickey Mouse's development, with Disney conducting no business at Warner Bros.[9][10] Over the years, the name 'Mortimer Mouse' was eventually given to several different characters in the Mickey Mouse universe: Minnie Mouse's uncle, who appears in several comics stories, one of Mickey's antagonists who competes for Minnie's affections in various cartoons and comics, and one of Mickey's nephews, named Morty.
Debut (1928)
Mickey was first seen in a test screening of the cartoon short Plane Crazy, on May 15, 1928, but it failed to impress the audience and Walt could not find a distributor for the short.[11] Walt went on to produce a second Mickey short, The Gallopin' Gaucho, which was also not released for lack of a distributor.
Steamboat Willie was first released on November 18, 1928, in New York. It was co-directed by Walt Disney and Ub Iwerks. Iwerks again served as the head animator, assisted by Johnny Cannon, Les Clark, Wilfred Jackson and Dick Lundy. This short was intended as a parody of Buster Keaton's Steamboat Bill, Jr., first released on May 12 of the same year. Although it was the third Mickey cartoon produced, it was the first to find a distributor, and thus is considered by The Disney Company as Mickey's debut. Willie featured changes to Mickey's appearance (in particular, simplifying his eyes to large dots) that established his look for later cartoons and in numerous Walt Disney films.[citation needed]
The cartoon was not the first cartoon to feature a soundtrack connected to the action. Fleischer Studios, headed by brothers Dave and Max Fleischer, had already released a number of sound cartoons using the DeForest system in the mid-1920s. However, these cartoons did not keep the sound synchronized throughout the film. For Willie, Disney had the sound recorded with a click track that kept the musicians on the beat. This precise timing is apparent during the "Turkey in the Straw" sequence when Mickey's actions exactly match the accompanying instruments. Animation historians have long debated who had served as the composer for the film's original music. This role has been variously attributed to Wilfred Jackson, Carl Stalling and Bert Lewis, but identification remains uncertain. Walt Disney himself was voice actor for both Mickey and Minnie and would remain the source of Mickey's voice through 1946 for theatrical cartoons. Jimmy MacDonald took over the role in 1946, but Walt provided Mickey's voice again from 1955 to 1959 for The Mickey Mouse Club television series on ABC.[citation needed]
Audiences at the time of Steamboat Willie's release were reportedly impressed by the use of sound for comedic purposes. Sound films or "talkies" were still considered innovative. The first feature-length movie with dialogue sequences, The Jazz Singer starring Al Jolson, was released on October 6, 1927. Within a year of its success, most United States movie theaters had installed sound film equipment. Walt Disney apparently intended to take advantage of this new trend and, arguably, managed to succeed. Most other cartoon studios were still producing silent products and so were unable to effectively act as competition to Disney. As a result, Mickey would soon become the most prominent animated character of the time. Walt Disney soon worked on adding sound to both Plane Crazy and The Gallopin' Gaucho (which had originally been silent releases) and their new release added to Mickey's success and popularity. A fourth Mickey short, The Barn Dance, was also put into production; however, Mickey does not actually speak until The Karnival Kid in 1929 (see below). After Steamboat Willie was released, Mickey became a close competitor to Felix the Cat, and his popularity would grow as he was continuously featured in sound cartoons. By 1929, Felix would lose popularity among theater audiences, and Pat Sullivan decided to produce all future Felix cartoons in sound as a result.[12] Unfortunately, audiences did not respond well to Felix's transition to sound and by 1930, Felix had faded from the screen.[13]
Black and white films (1929–1935)
In Mickey's early films he was often characterized not as a hero, but as an ineffective young suitor to Minnie Mouse. The Barn Dance (March 14, 1929) is the first time in which Mickey is turned down by Minnie in favor of Pete. The Opry House (March 28, 1929) was the first time in which Mickey wore his white gloves. Mickey wears them in almost all of his subsequent appearances and many other characters followed suit. The three lines on the back of Mickey's gloves represent darts in the gloves' fabric extending from between the digits of the hand, typical of glove design of the era.
When the Cat's Away (April 18, 1929), essentially a remake of the Alice Comedy, "Alice Rattled by Rats", was an unusual appearance for Mickey. Although Mickey and Minnie still maintained their anthropomorphic characteristics, they were depicted as the size of regular mice and living with a community of many other mice as pests in a home. Mickey and Minnie would later appear the size of regular humans in their own setting. In appearances with real humans, Mickey has been shown to be about two to three feet high.[14] The next Mickey short was also unusual. The Barnyard Battle (April 25, 1929) was the only film to depict Mickey as a soldier and also the first to place him in combat. The Karnival Kid (1929) was the first time Mickey spoke. Before this he had only whistled, laughed, and grunted. His first words were "Hot dogs! Hot dogs!" said while trying to sell hot dogs at a carnival. Mickey's Follies (1929) introduced the song "Minnie's Yoo-Hoo" which would become the theme song for Mickey Mouse films for the next several years. The same song sequence was also later reused with different background animation as its own special short shown only at the commencement of 1930s theater-based Mickey Mouse Clubs.[15][16] Mickey's dog Pluto first appeared as Mickey's pet in The Moose Hunt (1931) after previously appearing as Minnie's dog "Rover" in The Picnic (1930).
The Cactus Kid (April 11, 1930) was the last film to be animated by Ub Iwerks at Disney. Shortly before the release of the film, Iwerks left to start his own studio, bankrolled by Disney's then-distributor Pat Powers. Powers and Disney had a falling out over money due Disney from the distribution deal. It was in response to losing the right to distribute Disney's cartoons that Powers made the deal with Iwerks, who had long harbored a desire to head his own studio. The departure is considered a turning point in Mickey's career, as well as that of Walt Disney. Walt lost the man who served as his closest colleague and confidant since 1919. Mickey lost the man responsible for his original design and for the direction or animation of several of the shorts released till this point. Advertising for the early Mickey Mouse cartoons credited them as "A Walt Disney Comic, drawn by Ub Iwerks". Later Disney Company reissues of the early cartoons tend to credit Walt Disney alone.
Disney and his remaining staff continued the production of the Mickey series, and he was able to eventually find a number of animators to replace Iwerks. As the Great Depression progressed and Felix the Cat faded from the movie screen, Mickey's popularity would rise, and by 1932 The Mickey Mouse Club would have one million members.[17] At the 5th Academy Awards in 1932, Mickey received his first Academy Award nomination, received for Mickey's Orphans (1931). Walt Disney also received an honorary Academy Award for the creation of Mickey Mouse. Despite being eclipsed by the Silly Symphony short the Three Little Pigs in 1933, Mickey still maintained great popularity among theater audiences too, until 1935, when polls showed that Popeye was more popular than Mickey.[18][19][20] By 1934, Mickey merchandise had earned $600,000 a year.[21] In 1935, Disney began to phase out the Mickey Mouse Clubs, due to administration problems.[22]
About this time, story artists at Disney were finding it increasingly difficult to write material for Mickey. As he had developed into a role model for children, they were limited in the types of gags they could present. This led to Mickey taking more of a secondary role in some of his next films, allowing for more emphasis on other characters. In Orphan's Benefit (1934), Mickey first appeared with Donald Duck who had been introduced earlier that year in the Silly Symphony series. The tempestuous duck would provide Disney with seemingly endless story ideas and would remain a recurring character in Mickey's cartoons.
Color films (1935–1953)
Mickey first appeared animated in color in Parade of the Award Nominees in 1932; however, the film strip was created for the 5th Academy Awards ceremony and was not released to the public. Mickey's official first color film came in 1935 with The Band Concert. The Technicolor film process was used in the film production. Here Mickey conducted the William Tell Overture, but the band is swept up by a tornado. It is said that conductor Arturo Toscanini so loved this short that, upon first seeing it, he asked the projectionist to run it again. In 1994, The Band Concert was voted the third-greatest cartoon of all time in a poll of animation professionals. By colorizing and partially redesigning Mickey, Walt would put Mickey back on top once again, and Mickey would reach a level of popularity he had never reached before, as he now had even more appeal for audiences.[23] Also in 1935, Walt would receive a special award from the League of Nations for creating Mickey.
Mickey was redesigned by animator Fred Moore which was first seen in The Pointer (1939). Instead of having solid black eyes, Mickey was given white eyes with pupils, a Caucasian skin colored face, and a pear-shaped body. In the 1940s, he changed once more in The Little Whirlwind, where he used his trademark pants for the last time in decades, lost his tail, got more realistic ears that changed with perspective and a different body anatomy. But this change would only last for a short period of time before returning to the one in "The Pointer", with the exception of his pants. In his final theatrical cartoons in the 1950s, he was given eyebrows, which were removed in the more recent cartoons.
In 1940, Mickey appeared in his first feature-length film, Fantasia. His screen role as The Sorcerer's Apprentice, set to the symphonic poem of the same name by Paul Dukas, is perhaps the most famous segment of the film and one of Mickey's most iconic roles. The apprentice (Mickey), not willing to do his chores, puts on the sorcerer's magic hat after the sorcerer goes to bed and casts a spell on a broom, which causes the broom to come to life and perform the most tiring chore—filling up a deep well using two buckets of water. When the well eventually overflows, Mickey finds himself unable to control the broom, leading to a near-flood. After the segment ends, Mickey is seen in silhouette shaking hands with Leopold Stokowski, who conducts all the music heard in Fantasia. Mickey has often been pictured in the red robe and blue sorcerer's hat in merchandising. It was also featured into the climax of Fantasmic!, an attraction at the Disney theme parks.
After 1940, Mickey's popularity would decline until his 1955 re-emergence as a daily children's television personality.[24] Despite this, the character continued to appear regularly in animated shorts until 1943 (winning his only competitive Academy Award—with canine companion Pluto—for a short subject, Lend a Paw) and again from 1946 to 1952. In these later cartoons, Mickey was often just a supporting character in his own shorts, where Pluto would be the main character.
The last regular installment of the Mickey Mouse film series came in 1953 with The Simple Things in which Mickey and Pluto go fishing and are pestered by a flock of seagulls.
Similar to his animated inclusion into a live-action film in Roger Rabbit, Mickey made a featured cameo appearance in the 1990 television special The Muppets at Walt Disney World where he met Kermit the Frog. The two are established in the story as having been old friends, although they have not made any other appearance together outside of this.
Mickey Mouse, as he appears in the Paul Rudish years, and the modern era
In 2013, Disney Channel started airing new 3-minute Mickey Mouse shorts, with animator Paul Rudish at the helm, incorporating elements of Mickey's late twenties-early thirties look with a contemporary twist.[26] The creative team behind the 2017 DuckTales reboot had hoped to have Mickey Mouse in the series, but this idea was rejected by Disney executives.[27] However, this did not stop them from including a watermelon shaped like Mickey Mouse that Donald Duck made and used like a ventriloquist dummy (to the point where he had perfectly replicated his voice (supplied by Chris Diamantopoulos)) while he was stranded on a deserted island during the season two finale.[28] On November 10, 2020, the series was revived as The Wonderful World of Mickey Mouse and premiered on Disney+.[29]
In August 2018, ABC television announced a two-hour prime time special, Mickey's 90th Spectacular, in honor of Mickey's 90th birthday. The program featured never-before-seen short videos and several other celebrities who wanted to share their memories about Mickey Mouse and performed some of the Disney songs to impress Mickey. The show took place at the Shrine Auditorium in Los Angeles and was produced and directed by Don Mischer on November 4, 2018.[30][31] On November 18, 2018, a 90th anniversary event for the character was celebrated around the world.[32] In December 2019, both Mickey and Minnie served as special co-hosts of Wheel of Fortune for two weeks while Vanna White served as the main host during Pat Sajak's absence.[33]
Mickey is the subject of the 2022 documentary film Mickey: The Story of a Mouse, directed by Jeff Malmberg. Debuting at the South by Southwest film festival prior to its premiere on the Disney+ streaming service, the documentary examines the history and cultural impact of Mickey Mouse. The feature is accompanied by an original, hand-drawn animated short film starring Mickey titled Mickey in a Minute.[34]
Comics
Mickey and Horace Horsecollar from the Mickey Mouse daily strip; created by Floyd Gottfredson and published December 1932
Mickey first appeared in comics after he had appeared in 15 commercially successful animated shorts and was easily recognized by the public. Walt Disney was approached by King Features Syndicate with the offer to license Mickey and his supporting characters for use in a comic strip. Disney accepted and Mickey Mouse made its first appearance on January 13, 1930.[35] The comical plot was credited to Disney himself, art to Ub Iwerks and inking to Win Smith. The first week or so of the strip featured a loose adaptation of "Plane Crazy". Minnie soon became the first addition to the cast. The strips first released between January 13, 1930, and March 31, 1930, have occasionally been reprinted in comic book form under the collective title "Lost on a Desert Island". Animation historian Jim Korkis notes "After the eighteenth strip, Iwerks left and his inker, Win Smith, continued drawing the gag-a-day format."[36]
In early 1930, after Iwerks' departure, Disney was at first content to continue scripting the Mickey Mouse comic strip, assigning the art to Win Smith. However, Disney's focus had always been in animation and Smith was soon assigned with the scripting as well. Smith was apparently discontent at the prospect of having to script, draw, and ink a series by himself as evidenced by his sudden resignation.
Disney then searched for a replacement among the remaining staff of the Studio. He selected Floyd Gottfredson, a recently hired employee. At the time Gottfredson was reportedly eager to work in animation and somewhat reluctant to accept his new assignment. Disney had to assure him the assignment was only temporary and that he would eventually return to animation. Gottfredson accepted and ended up holding this "temporary" assignment from May 5, 1930, to November 15, 1975.
Walt Disney's last script for the strip appeared May 17, 1930.[36] Gottfredson's first task was to finish the storyline Disney had started on April 1, 1930. The storyline was completed on September 20, 1930, and later reprinted in comic book form as Mickey Mouse in Death Valley. This early adventure expanded the cast of the strip which to this point only included Mickey and Minnie. Among the characters who had their first comic strip appearances in this story were Clarabelle Cow, Horace Horsecollar, and Black Pete as well as the debuts of corrupt lawyer Sylvester Shyster and Minnie's uncle Mortimer Mouse. The Death Valley narrative was followed by Mr. Slicker and the Egg Robbers, first printed between September 22 and December 26, 1930, which introduced Marcus Mouse and his wife as Minnie's parents.
Starting with these two early comic strip stories, Mickey's versions in animation and comics are considered to have diverged from each other. While Disney and his cartoon shorts would continue to focus on comedy, the comic strip effectively combined comedy and adventure. This adventurous version of Mickey would continue to appear in comic strips and later comic books throughout the 20th and into the 21st century.
In Europe, Mickey Mouse became the main attraction of a number of comics magazines, the most famous being Topolino in Italy from 1932 onward, Le Journal de Mickey in France from 1934 onward, Don Miki in Spain and the Greek Miky Maous.
In 1958, Mickey Mouse was introduced to the Arab world through another comic book called "Sameer". He became very popular in Egypt and got a comic book with his name. Mickey's comics in Egypt are licensed by Disney and had been published since 1959 by "Dar Al-Hilal", and they were successful; however, Dar Al-Hilal stopped the publication in 2003 because of problems with Disney. The comics were re-released by "Nahdat Masr" in 2004 and the first issues were sold out in less than 8 hours.[37]
Portrayal
Taking inspiration from silent film personalities such as Charlie Chaplin's Tramp, Mickey is traditionally characterized as a sympathetic underdog who gets by on pluck and ingenuity in the face of challenges much bigger than himself.[38] Originally characterized as a cheeky lovable rogue, Mickey was rebranded over time as a nice guy, usually seen as an honest and bodacious hero. In 2009, Disney began to rebrand the character by putting less emphasis on his friendly, well-meaning persona and reintroducing the more adventurous and stubborn sides of his personality, beginning with the video game Epic Mickey.[39]
Throughout the earlier years, Mickey's design bore heavy resemblance to Oswald, save for the ears, nose, and tail.[40][41][42] Ub Iwerks designed Mickey's body out of circles in order to make the character simple to animate. Disney employees John Hench and Marc Davis believed that this design was part of Mickey's success as it made him more dynamic and appealing to audiences.
Mickey's circular design is most noticeable in his ears. In animation in the 1940s, Mickey's ears were animated in a more realistic perspective. Later, they were drawn to always appear circular no matter which way Mickey was facing. This made Mickey easily recognizable to audiences and made his ears an unofficial personal trademark. The circular rule later created a dilemma for toy creators who had to recreate a three-dimensional Mickey.
In 1938, animator Fred Moore redesigned Mickey's body away from its circular design to a pear-shaped design. Colleague Ward Kimball praised Moore for being the first animator to break from Mickey's "rubber hose, round circle" design. Although Moore himself was nervous at first about changing Mickey, Walt Disney liked the new design and told Moore "that's the way I want Mickey to be drawn from now on."
Each of Mickey's hands has only three fingers and a thumb. Disney said that this was both an artistic and financial decision, explaining, "Artistically five digits are too many for a mouse. His hand would look like a bunch of bananas. Financially, not having an extra finger in each of 45,000 drawings that make up a six and one-half minute short has saved the Studio millions." In the film The Opry House (1929), Mickey was first given white gloves as a way of contrasting his naturally black hands against his black body. The use of white gloves would prove to be an influential design for cartoon characters, particularly with later Disney characters, but also with non-Disney characters such as Bugs Bunny, Woody Woodpecker, Mighty Mouse, Mario, and Sonic The Hedgehog.
Mickey's eyes, as drawn in Plane Crazy and The Gallopin' Gaucho, were large and white with black outlines. In Steamboat Willie, the bottom portion of the black outlines was removed, although the upper edges still contrasted with his head. Mickey's eyes were later re-imagined as only consisting of the small black dots which were originally his pupils, while what were the upper edges of his eyes became a hairline. This is evident only when Mickey blinks. Fred Moore later redesigned the eyes to be small white eyes with pupils and gave his face a Caucasian skin tone instead of plain white. This new Mickey first appeared in 1938 on the cover of a party program, and in animation the following year with the release of The Pointer.[43] Mickey is sometimes given eyebrows as seen in The Simple Things (1953) and in the comic strip, although he does not have eyebrows in his subsequent appearances.[citation needed]
Originally characters had black hands, but Frank Thomas said this was changed for visibility reasons.[44] According to Disney's Disney Animation: The Illusion of Life, written by former Disney animators Frank Thomas and Ollie Johnston, "The characters were in black and white with no shades of grey to soften the contrast or delineate a form. Mickey's body was black, his arms and his hands- all black. There was no way to stage an action except in silhouette. How else could there be any clarity? A hand in front of a chest would simply disappear."[45]
Multiple sources state that Mickey's characteristics, particularly the black body combined with the large white eyes, white mouth, and the white gloves, evolved from blackface caricatures used in minstrel shows.[46][47][48][49][50]
Voice actors
A large part of Mickey's screen persona is his famously shy, falsetto voice. From 1928 onward, Mickey was voiced by Walt Disney himself, a job in which Disney appeared to take great personal pride. Composer Carl W. Stalling was the first person to provide lines for Mickey in the 1929 shorts The Karnival Kid and Wild Waves,[51][52] and J. Donald Wilson and Joe Twerp provided the voice in some 1938 broadcasts of The Mickey Mouse Theater of the Air,[53] although Disney remained Mickey's official voice during this period. However, by 1946, Disney had become too busy running the studio to do Mickey's voice on a regular basis. It is also speculated that his cigarette habit had damaged his voice over the years.[54] After recording the Mickey and the Beanstalk section of Fun and Fancy Free, Mickey's voice was handed over to veteran Disney musician and actor Jimmy MacDonald. Walt would reprise Mickey's voice occasionally until his death in 1966, such as in the introductions to the original 1955–1959 run of The Mickey Mouse Club TV series, the "Fourth Anniversary Show" episode of the Walt Disney's Disneyland TV series that aired on September 11, 1957, and the Disneyland USA at Radio City Music Hall show from 1962.[55]
MacDonald voiced Mickey in most of the remaining theatrical shorts and for various television and publicity projects up until his retirement in 1976.[56] However, other actors would occasionally play the role during this era. Clarence Nash, the voice of Donald Duck, provided the voice in three of Mickey's theatrical shorts, The Dognapper, R'coon Dawg, and Pluto's Party.[57] Stan Freberg voiced Mickey in the Freberg-produced record Mickey Mouse's Birthday Party.
Alan Young voiced Mickey in the Disneyland record album An Adaptation of Dickens' Christmas Carol, Performed by The Walt Disney Players in 1974.[58][59]
The 1983 short film Mickey's Christmas Carol marked the theatrical debut of Wayne Allwine as Mickey Mouse, who was the official voice of Mickey from 1977 until his death in 2009,[60] although MacDonald returned to voice Mickey for an appearance at the 50th Academy Awards in 1978.[61] Allwine once recounted something MacDonald had told him about voicing Mickey: "The main piece of advice that Jim gave me about Mickey helped me keep things in perspective. He said, 'Just remember kid, you're only filling in for the boss.' And that's the way he treated doing Mickey for years and years. From Walt, and now from Jimmy."[62] In 1991, Allwine married Russi Taylor, the voice of Minnie Mouse from 1986 until her death in 2019.
Les Perkins did the voice of Mickey in two TV specials, "Down and Out with Donald Duck" and "DTV Valentine", in the mid-1980s. Peter Renaday voiced Mickey in the 1980s Disney albums Yankee Doodle Mickey and Mickey Mouse Splashdance.[63][64] He also provided his voice for The Talking Mickey Mouse toy in 1986.[65][66] Quinton Flynn briefly filled in for Allwine as the voice of Mickey in a few episodes of the first season of Mickey Mouse Works whenever Allwine was unavailable to record.[67]
Bret Iwan, a former Hallmark greeting card artist, is the current official voice of Mickey. Iwan was originally cast as an understudy for Allwine due to the latter's declining health, but Allwine died before the two had a chance to meet, and Iwan became the new official voice of the character. Iwan's early recordings in 2009 included work for the Disney Cruise Line, Mickey toys, the Disney theme parks and the Disney on Ice: Celebrations! ice show.[68] He directly replaced Allwine as Mickey for the Kingdom Hearts video game series and the TV series Mickey Mouse Clubhouse. His first video game voice-over of Mickey Mouse can be heard in Kingdom Hearts: Birth by Sleep. Iwan also became the first voice actor to portray Mickey during Disney's rebranding of the character, providing the vocal effects of Mickey in Epic Mickey as well as his voice in Epic Mickey 2: The Power of Two and the remake of Castle of Illusion.
Merchandising
Since his early years, Mickey Mouse has been licensed by Disney to appear on many different kinds of merchandise. Mickey was produced as plush toys and figurines, and Mickey's image has graced almost everything from T-shirts to lunchboxes. Largely responsible for early Disney merchandising was Kay Kamen, Disney's head of merchandise and licensing from 1932 until his death in 1949, who was called a "stickler for quality". Kamen was recognized by The Walt Disney Company as having a significant part in Mickey's rise to stardom and was named a Disney Legend in 1998.[72] At the time of his 80th-anniversary celebration in 2008, Time declared Mickey Mouse one of the world's most recognized characters, even when compared against Santa Claus.[73] Disney officials have stated that 98% of children aged 3–11 around the world are at least aware of the character.[73]
Disney parks
As the official Walt Disney mascot, Mickey has played a central role in the Disney parks since the opening of Disneyland in 1955. As with other characters, Mickey is often portrayed by a non-speaking costumed actor. In this form, he has participated in ceremonies and countless parades, and poses for photographs with guests. As of the presidency of Barack Obama (who jokingly referred to him as "a world leader who has bigger ears than me")[74] Mickey has met every U.S. president since Harry Truman, with the exception of Lyndon B. Johnson.[42]
Mickey also features in several specific attractions at the Disney parks. Mickey's Toontown (Disneyland and Tokyo Disneyland) is a themed land which is a recreation of Mickey's neighborhood. Buildings are built in a cartoon style and guests can visit Mickey or Minnie's houses, Donald Duck's boat, or Goofy's garage. This is a common place to meet the characters.[75]
In addition to Mickey's overt presence in the parks, numerous images of him are also subtly included in sometimes unexpected places. This phenomenon is known as "Hidden Mickeys", involving hidden images in Disney films, theme parks, and merchandise.[77]
Watches and clocks
Mickey was famously featured on wristwatches and alarm clocks, typically utilizing his hands as the actual hands on the face of the clock. The first Mickey Mouse watches were manufactured in 1933 by the Ingersoll Watch Company. The seconds were indicated by a turning disk below Mickey. The first Mickey watch was sold at the Century of Progress in Chicago in 1933 for $3.75 (equivalent to $85 in 2022). Mickey Mouse watches have been sold by other companies and designers throughout the years, including Timex, Elgin, Helbros, Bradley, Lorus, and Gérald Genta.[78] The fictional character Robert Langdon from Dan Brown's novels was said to wear a Mickey Mouse watch as a reminder "to stay young at heart."[79]
Other products
In 1989, Milton Bradley released the electronic talking game titled Mickey Says, with three modes featuring Mickey Mouse as its host. Mickey also appeared in other toys and games, including The Talking Mickey Mouse, released by Worlds of Wonder.
Fisher-Price has produced a line of talking animatronic Mickey dolls including "Dance Star Mickey" (2010)[80] and "Rock Star Mickey" (2011).[81]
In total, approximately 40% of Disney's revenues for consumer products are derived from Mickey Mouse merchandise, with revenues peaking in 1997.[73]
Social impact
Use in politics
In the United States, protest votes are often made in order to indicate dissatisfaction with the slate of candidates presented on a particular ballot or to highlight the inadequacies of a particular voting procedure. Since most states' electoral systems do not provide for blank balloting or a choice of "None of the Above", most protest votes take the form of a clearly non-serious candidate's name entered as a write-in vote. Mickey Mouse is often selected for this purpose.[82][83] As an election supervisor in Georgia observed, "If Mickey Mouse doesn't get votes in our election, it's a bad election."[84] The earliest known mention of Mickey Mouse as a write-in candidate dates back to the 1932 New York City mayoral elections.[85]
Pejorative use of Mickey's name
"Mickey Mouse" is a slang expression meaning small-time, amateurish or trivial. In the United Kingdom and Ireland, it also means poor quality or counterfeit.[88] In Poland the phrase "mały Miki", which translates to "small Mickey", means something very simple and trivial - usually used in the comparison between two things.[89] However, in parts of Australia it can mean excellent or very good (rhyming slang for "grouse").[90] Examples of the negative usages include the following:
In The Godfather Part II, Fredo's justification of betraying Michael is that his orders in the family usually were "Send Fredo off to do this, send Fredo off to do that! Let Fredo take care of some Mickey Mouse nightclub somewhere!" as opposed to more meaningful tasks.
In an early episode of the 1978–82 sitcom Mork & Mindy, Mork stated that Pluto was "a Mickey Mouse planet", referring to the future dwarf planet having the same name as Mickey's pet dog Pluto.
On November 19, 1983, just after an ice hockey game in which Wayne Gretzky's Edmonton Oilers beat the New Jersey Devils 13–4, Gretzky was quoted as saying to a reporter, "Well, it's time they got their act together, they're ruining the whole league. They had better stop running a Mickey Mouse organization and put somebody on the ice". Reacting to Gretzky's comment, Devils fans wore Mickey Mouse apparel when the Oilers returned to New Jersey on January 15, 1984, despite a 5–4 Devils loss.[91]
In the 1996 Warner Bros. film Space Jam, Bugs Bunny derogatorily comments on Daffy Duck's idea for the name of their basketball team, asking: "What kind of Mickey Mouse organization would name a team 'The Ducks?'" (This also referenced the Mighty Ducks of Anaheim, an NHL team that was then owned by Disney, as well as the Disney-made The Mighty Ducks movie franchise. This was referencing the Disney/Warner Brothers rivalry.)
In schools a "Mickey Mouse course", "Mickey Mouse major", or "Mickey Mouse degree" is a class, college major, or degree where very little effort is necessary in order to attain a good grade (especially an A) or one where the subject matter of such a class is not of any importance in the labor market.[92]
Parodies and criticism
Mickey Mouse's global fame has made him both a symbol of The Walt Disney Company and of the United States itself. For this reason, Mickey has been used frequently in anti-American satire, such as the infamous underground cartoon "Mickey Mouse in Vietnam" (1969) and the Palestinian children's propaganda series Tomorrow's Pioneers, where a Mickey Mouse-esque character named Farfour is used to promote Islamic extremism. There have been numerous parodies of Mickey Mouse, such as the two-page parody "Mickey Rodent" by Will Elder (published in Mad #19, 1955), in which the mouse walks around unshaven and jails Donald Duck out of jealousy over the duck's larger popularity.[95] The Simpsons, which has also parodied Mickey, would later become Disney property when its distributor Fox was acquired by Disney. In the Comedy Central series South Park, Mickey (voiced by Trey Parker) serves as one of the recurring antagonists, and is depicted as the sadistic, greedy, foul-mouthed boss of The Walt Disney Company, only interested in money. He also appears briefly with Donald Duck in the comic Squeak the Mouse by the Italian cartoonist Massimo Mattioli. Horst Rosenthal created a comic book, Mickey au Camp de Gurs (Mickey Mouse in the Gurs Internment Camp) while detained in the Gurs internment camp during the Second World War; he added "Publié Sans Autorisation de Walt Disney" ("Published without Walt Disney's Permission") to the front cover.[96]
In the fifth episode of the Japanese anime, Pop Team Epic, Popuko, one of the main characters, attempts an impression of Mickey, but does so poorly.
Legal issues
Like all major Disney characters, Mickey Mouse is not only copyrighted but also trademarked, which lasts in perpetuity as long as it continues to be used commercially by its owner. So, whether or not a particular Disney cartoon goes into the public domain, the characters themselves may not be used as trademarks without authorization.
Because of the Copyright Term Extension Act of the United States (sometimes called the 'Mickey Mouse Protection Act' because of extensive lobbying by the Disney corporation) and similar legislation within the European Union and other jurisdictions where copyright terms have been extended, works such as the early Mickey Mouse cartoons will remain under copyright until at least 2024. However, some copyright scholars argue that Disney's copyright on the earliest version of the character may be invalid due to ambiguity in the copyright notice for Steamboat Willie.[97]
The Walt Disney Company has become well known for protecting its trademark on the Mickey Mouse character—whose likeness is closely associated with the company—with particular zeal. In 1989, Disney threatened legal action against three daycare centers in the Orlando, Florida region (where Walt Disney World is a dominant employer) for having Mickey Mouse and other Disney characters painted on their walls. The characters were removed, and the newly opened rival Universal Studios Florida gave the centers its blessing to use Universal's own cartoon characters instead, to build community goodwill.[98]
Walt Disney Productions v. Air Pirates
In 1971, a group of underground cartoonists calling themselves the Air Pirates, after a group of villains from early Mickey Mouse films, produced a comic called Air Pirates Funnies. In the first issue, cartoonist Dan O'Neill depicted Mickey and Minnie Mouse engaging in explicit sexual behavior and consuming drugs. As O'Neill explained, "The air pirates were...some sort of bizarre concept to steal the air, pirate the air, steal the media....Since we were cartoonists, the logical thing was Disney."[99] Rather than change the appearance or name of the character, which O'Neill felt would dilute the parody, the mouse depicted in Air Pirates Funnies looks like and is named "Mickey Mouse". Disney sued for copyright infringement, and after a series of appeals, O'Neill eventually lost and was ordered to pay Disney $1.9 million. The outcome of the case remains controversial among free-speech advocates. New York Law School professor Edward Samuels said, "The Air Pirates set parody back twenty years."[100][better source needed]
Copyright status
There have been multiple attempts to argue that certain versions of Mickey Mouse are in fact in the public domain. In the 1980s, archivist George S. Brown attempted to recreate and sell cels from the 1933 short "The Mad Doctor", on the theory that they were in the public domain because Disney had failed to renew the copyright as required by current law.[101] However, Disney successfully sued Brown to prevent such sale, arguing that the lapse in copyright for "The Mad Doctor" did not put Mickey Mouse in the public domain because of the copyright in the earlier films.[101] Brown attempted to appeal, noting imperfections in the earlier copyright claims, but the court dismissed his argument as untimely.[101]
In 1999, Lauren Vanpelt, a law student at Arizona State University, wrote a paper making a similar argument.[101][102] Vanpelt points out that copyright law at the time required that a copyright notice specify the year of the copyright and the copyright owner's name. The title cards to early Mickey Mouse films "Steamboat Willie", "Plane Crazy", and "Gallopin' Gaucho" do not clearly identify the copyright owner, and also misidentify the copyright year. However, Vanpelt notes that copyright cards in other early films may have been done correctly, which could make Mickey Mouse "protected as a component part of the larger copyrighted films".[102]
A 2003 article by Douglas A. Hedenkamp in the Virginia Sports and Entertainment Law Journal analyzed Vanpelt's arguments, and concluded that she is likely correct.[101][103] Hedenkamp provided additional arguments, and identified some errors in Vanpelt's paper, but still found that due to imperfections in the copyright notice on the title cards, Walt Disney forfeited his copyright in Mickey Mouse. He concluded: "The forfeiture occurred at the moment of publication, and the law of that time was clear: publication without proper notice irrevocably forfeited copyright protection."[103]
Disney threatened to sue Hedenkamp for slander of title, but did not follow through.[101] The claims in Vanpelt and Hedenkamp's articles have not been tested in court.[citation needed]
Censorship
In 1930, the German Board of Film Censors prohibited any presentations of the 1929 Mickey Mouse cartoon The Barnyard Battle. The animated short, which features the mouse as a kepi-wearing soldier fighting cat enemies in German-style helmets, was viewed by censors as a negative portrayal of Germany.[104] It was claimed by the board that the film would "reawaken the latest anti-German feeling existing abroad since the War".[105] The Barnyard Battle incident did not incite wider anti-Mickey sentiment in Germany in 1930; however, after Adolf Hitler came to power several years later, the Nazi regime unambiguously propagandized against Disney. A mid-1930s German newspaper article read:
Mickey Mouse is the most miserable ideal ever revealed. Healthy emotions tell every independent young man and every honorable youth that the dirty and filth-covered vermin, the greatest bacteria carrier in the animal kingdom, cannot be the ideal type of animal. Away with Jewish brutalization of the people! Down with Mickey Mouse! Wear the Swastika Cross![106][107][108]
American cartoonist and writer Art Spiegelman would later use this quote on the opening page of the second volume of his graphic novel Maus.
In 1935 Romanian authorities also banned Mickey Mouse films from cinemas, purportedly fearing that children would be "scared to see a ten-foot mouse in the movie theatre".[109] In 1938, based on the Ministry of Popular Culture's recommendation that a reform was necessary "to raise children in the firm and imperialist spirit of the Fascist revolution", the Italian Government banned foreign children's literature[110] except Mickey; Disney characters were exempted from the decree for the "acknowledged artistic merit" of Disney's work.[111] In fact, Mussolini's children were fond of Mickey Mouse and managed to delay the ban as long as possible.[112] In 1942, after Italy declared war on the United States, the Fascist regime immediately forced Italian publishers to stop printing any Disney stories. Mickey's stories were replaced by the adventures of Tuffolino, a new human character that looked like Mickey, created by Federico Pedrocchi (script) and Pier Lorenzo De Vita (art). After the downfall of Italy's fascist government in 1945, the ban was removed.
On November 18, 1978, in honor of his 50th anniversary, Mickey became the first cartoon character to have a star on the Hollywood Walk of Fame. The star is located on 6925 Hollywood Blvd.[116]
Melbourne (Australia) runs the annual Moomba festival street procession and appointed Mickey Mouse as its King of Moomba in 1977.[117]: 17–22 Although Mickey was immensely popular with children, the appointment was controversial: some Melburnians wanted a 'home-grown' choice, e.g. Blinky Bill; when it was revealed that Patricia O'Carroll (from Disneyland's Disney on Parade show) was performing the mouse, Australian newspapers reported "Mickey Mouse is really a girl!"[117]: 19–20
^Frank Thomas, Ollie Johnston (2002). Walt Disney Treasures: Wave Two- Mickey Mouse in Black & White (DVD), Disc 1, Bonus Features: Frank and Ollie... and Mickey featurette (2002) (DVD). The Walt Disney Company. "There was an interesting bit of development there. They drew [Mickey Mouse] with black hands on the black arm against the black body and black feet. And if he said something in here (gestures in front of body), you couldn't see it and won't realize. Fairly early they had tried it on him, putting the white gloves on him here, and the white shoes, but it had to clear up." ~ Frank Thomas
^Thomas and Johnston, Frank and Ollie (1981). "The Principles of Animation". Disney Animation: The Illusion of Life (1995 ed.). Disney Publishing Worldwide. p. 56. ISBN 0-7868-6070-7. The characters were in black and white with no shades of grey to soften the contrast or delineate a form. Mickey's body was black, his arms and his hands- all black. There was no way to stage an action except in silhouette. How else could there be any clarity? A hand in front of a chest would simply disappear.
^"Film music". BBC. Retrieved October 21, 2010. When the music is precisely synchronised with events on screen this is known as Mickey-Mousing, eg someone slipping on a banana skin could use a descending scale followed by a cymbal crash. Mickey-Mousing is often found in comedy films.
|
Mickey Mouse is an American cartoon character co-created in 1928 by Walt Disney and Ub Iwerks. The longtime icon and mascot of The Walt Disney Company, Mickey is an anthropomorphic mouse who typically wears red shorts, large yellow shoes, and white gloves. Inspired by such silent film personalities as Charlie Chaplin and Douglas Fairbanks, Mickey is traditionally characterized as a sympathetic underdog who gets by on pluck and ingenuity in the face of challenges bigger than himself.[2] The character’s depiction as a small mouse is personified through his diminutive stature and falsetto voice, the latter of which was originally provided by Disney. Mickey is one of the world's most recognizable and universally acclaimed fictional characters of all time.
Origin
Mickey Mouse was created as a replacement for Oswald the Lucky Rabbit, an earlier cartoon character that was created by the Disney studio but owned by Universal Pictures.[3] Charles Mintz served as a middleman producer between Disney and Universal through his company, Winkler Pictures, for the series of cartoons starring Oswald. Ongoing conflicts between Disney and Mintz and the revelation that several animators from the Disney studio would eventually leave to work for Mintz's company ultimately resulted in Disney cutting ties with Oswald. Among the few people who stayed at the Disney studio were animator Ub Iwerks, apprentice artist Les Clark, and Wilfred Jackson. On his train ride home from New York, Walt brainstormed ideas for a new cartoon character.
Mickey Mouse was conceived in secret while Disney produced the final Oswald cartoons he contractually owed Mintz. Disney asked Ub Iwerks to start drawing up new character ideas. Iwerks tried sketches of various animals, such as dogs and cats, but none of these appealed to Disney. A female cow and male horse were also rejected. (They would later turn up as Clarabelle Cow and Horace Horsecollar.)
|
yes
|
Animation
|
Did Walt Disney create Mickey Mouse?
|
yes_statement
|
"walt" disney "created" mickey mouse.. mickey mouse was "created" by "walt" disney.
|
https://www.npr.org/2021/07/07/1013645653/remembering-ub-iwerks-the-father-of-mickey-mouse
|
Remembering Ub Iwerks, The Father Of Mickey Mouse : NPR
|
Remembering Ub Iwerks, The Father Of Mickey Mouse
Walt Disney's close friend Ub Iwerks brought Mickey Mouse to life. Fifty years after his death, Iwerks' legacy is coming into focus.
NOEL KING, HOST:
Fifty years ago today, an animator named Ub Iwerks died. He was never a household name, but he is responsible for some of Disney's greatest special effects, and he designed Mickey Mouse. Mackenzie Martin of member station KCUR tells his story on the podcast A People's History Of Kansas City.
(SOUNDBITE OF ARCHIVED KCUR PODCAST)
MACKENZIE MARTIN, BYLINE: When you think about Mickey Mouse, one name comes to mind, Walt Disney. But here's the thing - Walt Disney didn't create Mickey Mouse alone. It was actually his best friend, Ub Iwerks, who designed the iconic cartoon in 1928.
JEFF RYAN: Mickey is basically the child of two dads.
MARTIN: Jeff Ryan is the author of "A Mouse Divided: How Ub Iwerks Became Forgotten, And Walt Disney Became Uncle Walt."
RYAN: He was the person who was doing most of the behind-the-scenes work, and when Walt was taking credit, Ub was the one who was denied credit.
MARTIN: It's not like Walt Disney wasn't integral to the success of Mickey Mouse. He certainly was. In addition to defining Mickey's personality, he literally voiced the character for years.
(SOUNDBITE OF ARCHIVED RECORDING)
WALT DISNEY: (As Mickey Mouse) He'll hear you.
MARTIN: But that doesn't erase the fact that for decades, the collaboration between Iwerks and Disney was mostly kept a secret.
RYAN: I think a lot of that has to do with the way that Disney over the years has controlled the Mickey Mouse narrative. They want people to think that Walt was responsible for more than he was actually responsible for.
MARTIN: The two first met as teens in 1919 at a commercial arts studio in Kansas City, Mo.
(SOUNDBITE OF MUSIC)
MARTIN: Though at the time, Ryan says Disney was going by the name Walter Dis (ph). It was actually Iwerks who was like, just go by Walt Disney. Together, the two friends taught themselves animation and embarked on a series of rather ill-conceived and failed business concepts.
(SOUNDBITE OF MUSIC)
MARTIN: Their first venture was as commercial artists. It lasted a month. Then in 1922, Disney and Iwerks opened their first animation studio.
BUTCH RIGBY: They were 21 years old, and they recruited these 18-year-olds with an ad in the paper that said, if you'd like to draw cartoons, come to the Laugh-O-Gram Studio.
MARTIN: Butch Rigby is the chairman of the Kansas City nonprofit that's currently restoring the old Laugh-O-Gram Studio.
RIGBY: Ub Iwerks is equally as important here. He was a partner in that company, and I think this building is the story of Ub Iwerks as much as Walt Disney.
MARTIN: When the Laugh-O-Gram Studio eventually went bankrupt, Disney took a train out to Hollywood. But not very much time had passed before he was begging Iwerks to come out too. He couldn't make his cartoons' success without him.
(SOUNDBITE OF MUSIC)
MARTIN: And that was where, in 1928, Ub Iwerks single-handedly animated "Plane Crazy," the first Mickey Mouse cartoon. After a record 700 drawings a day, Iwerks did in two weeks something that would have taken other animators months.
RIGBY: You know, Ub was quiet but a genius, and I mean literally a genius. And Walt recognized that.
MARTIN: In addition to being an extremely efficient and talented animator, Iwerks was able to solve literally any technical problem that was thrown his way. Disney, on the other hand, was an incredible storyteller. His characters were charming and lovable, and he knew how to get the best out of other people.
RYAN: And when you put Walt and Ub together, they were able to do just about anything.
MARTIN: In his 30-year career at Disney, Ub Iwerks went on to develop some of Disney's greatest special effects. We can thank him for iconic scenes in "Mary Poppins" and "Sleeping Beauty," in addition to Alfred Hitchcock's "The Birds." But he only started getting proper credit for his contributions to the world of animation after his death, when his granddaughter, Leslie Iwerks, made a documentary about him after realizing that what she read in animation history books didn't match up with the stories she had heard from her family growing up.
LESLIE IWERKS: I just wanted to clear that history, and I really wanted to also tell the story of Ub's contributions to Mickey Mouse.
MARTIN: In the end, the story of Mickey Mouse is a good reminder that everything is a team effort. Behind every powerful mouse, there might be a Walt, but behind every Walt, there's probably at least one Ub. For NPR News, I'm Mackenzie Martin.
|
Remembering Ub Iwerks, The Father Of Mickey Mouse
Walt Disney's close friend Ub Iwerks brought Mickey Mouse to life. Fifty years after his death, Iwerks' legacy is coming into focus.
NOEL KING, HOST:
Fifty years ago today, an animator named Ub Iwerks died. He was never a household name, but he is responsible for some of Disney's greatest special effects, and he designed Mickey Mouse. Mackenzie Martin of member station KCUR tells his story on the podcast A People's History Of Kansas City.
(SOUNDBITE OF ARCHIVED KCUR PODCAST)
MACKENZIE MARTIN, BYLINE: When you think about Mickey Mouse, one name comes to mind, Walt Disney. But here's the thing - Walt Disney didn't create Mickey Mouse alone. It was actually his best friend, Ub Iwerks, who designed the iconic cartoon in 1928.
JEFF RYAN: Mickey is basically the child of two dads.
MARTIN: Jeff Ryan is the author of "A Mouse Divided: How Ub Iwerks Became Forgotten, And Walt Disney Became Uncle Walt. "
RYAN: He was the person who was doing most of the behind-the-scenes work, and when Walt was taking credit, Ub was the one who was denied credit.
MARTIN: It's not like Walt Disney wasn't integral to the success of Mickey Mouse. He certainly was. In addition to defining Mickey's personality, he literally voiced the character for years.
(SOUNDBITE OF ARCHIVED RECORDING)
WALT DISNEY: (As Mickey Mouse) He'll hear you.
MARTIN: But that doesn't erase the fact that for decades, the collaboration between Iwerks and Disney was mostly kept a secret.
RYAN: I think a lot of that has to do with the way that Disney over the years has controlled the Mickey Mouse narrative. They want people to think that Walt was responsible for more than he was actually responsible for.
MARTIN:
|
no
|
Animation
|
Did Walt Disney create Mickey Mouse?
|
yes_statement
|
"walt" disney "created" mickey mouse.. mickey mouse was "created" by "walt" disney.
|
https://www.britannica.com/topic/Mickey-Mouse
|
Mickey Mouse | Cartoon, Creation, Disney, & Facts | Britannica
|
Who is Mickey Mouse?
Mickey Mouse is the most popular character of Walt Disney’s animated cartoons and arguably the most popular cartoon star in the world. Mickey is often presented as a cheerful and mischievous anthropomorphic rodent.
What was Mickey Mouse originally called?
Walt Disney named his first iteration of the character Mortimer Mouse. However, at the urging of Lillian Disney, his wife, the character was renamed Mickey Mouse; reportedly, Lillian disliked the name Mortimer for the mouse and suggested Mickey.
How was Mickey Mouse created?
Walt Disney began his first series of fully animated films in 1927, featuring the character Oswald the Lucky Rabbit. When his distributor appropriated the rights to the character, Disney altered Oswald’s appearance and created a new character that ultimately became Mickey Mouse. Mickey made his first appearance in 1928.
Who has voiced Mickey Mouse?
Mickey Mouse was voiced by just three voice actors between 1929 and 2009. Walt Disney lent his voice to Mickey beginning in 1929. Jimmy MacDonald took over in 1946. Wayne Allwine voiced Mickey beginning in 1977 and continued for the next 32 years, until his death in 2009. Bret Iwan then became the official voice for Mickey. Beginning in 2013, Chris Diamantopoulos provided the voice for the eponymous Mickey Mouse television show.
Why is Mickey Mouse so popular?
Mickey Mouse’s enduring popularity can likely be credited to the way he ubiquitously represents wholesome values of kindness and innocence. It can also be credited to the Walt Disney Company’s unprecedented success in marketing his likeness through a variety of merchandise, which made Mickey Mouse one of the most lucrative cartoon characters in history.
Mickey Mouse, the most popular character of Walt Disney’s animated cartoons and arguably the most popular cartoon star in the world.
Walt Disney began his first series of fully animated films in 1927, featuring the character Oswald the Lucky Rabbit. When his distributor appropriated the rights to the character, Disney altered Oswald’s appearance and created a new character that he named Mortimer Mouse; at the urging of his wife, Disney rechristened him Mickey Mouse. Two silent Mickey Mouse cartoons—Plane Crazy (1928) and Gallopin’ Gaucho (1928)—were produced before Disney employed the novelty of sound for the third Mickey Mouse production, Steamboat Willie (1928), though Mickey did not utter his first words (“Hot dogs!”) until The Karnival Kid (1929). Steamboat Willie was an immediate sensation and led to the studio’s dominance in the animated market for many years.
During the early years, Mickey was drawn by noted animator Ub Iwerks, and Disney himself provided Mickey’s voice until 1947. Mickey was often joined by his girlfriend, Minnie Mouse, as well as an animated gang of friends that included Donald Duck, Goofy, and Pluto. Mickey was a cheerful and mischievous anthropomorphic rodent who starred in more than 100 cartoon shorts and became a worldwide cult figure. The Mickey Mouse Club was one of the most popular television shows for children in the United States in the 1950s, and the signature black cap with mouse ears worn by the show’s stars has become one of the most widely distributed items in merchandising history. In 1932 Disney was given a special award by the Academy of Motion Picture Arts and Sciences for the creation of Mickey Mouse.
|
Who is Mickey Mouse?
Mickey Mouse is the most popular character of Walt Disney’s animated cartoons and arguably the most popular cartoon star in the world. Mickey is often presented as a cheerful and mischievous anthropomorphic rodent.
What was Mickey Mouse originally called?
Walt Disney named his first iteration of the character Mortimer Mouse. However, at the urging of Lillian Disney, his wife, the character was renamed Mickey Mouse; reportedly, Lillian disliked the name Mortimer for the mouse and suggested Mickey.
How was Mickey Mouse created?
Walt Disney began his first series of fully animated films in 1927, featuring the character Oswald the Lucky Rabbit. When his distributor appropriated the rights to the character, Disney altered Oswald’s appearance and created a new character that ultimately became Mickey Mouse. Mickey made his first appearance in 1928.
Who has voiced Mickey Mouse?
Mickey Mouse was voiced by just three voice actors between 1929 and 2009. Walt Disney lent his voice to Mickey beginning in 1929. Jimmy MacDonald took over in 1946. Wayne Allwine voiced Mickey beginning in 1977 and continued for the next 32 years, until his death in 2009. Bret Iwan then became the official voice for Mickey. Beginning in 2013, Chris Diamantopoulos provided the voice for the eponymous Mickey Mouse television show.
Why is Mickey Mouse so popular?
Mickey Mouse’s enduring popularity can likely be credited to the way he ubiquitously represents wholesome values of kindness and innocence. It can also be credited to the Walt Disney Company’s unprecedented success in marketing his likeness through a variety of merchandise, which made Mickey Mouse one of the most lucrative cartoon characters in history.
Mickey Mouse, the most popular character of Walt Disney’s animated cartoons and arguably the most popular cartoon star in the world.
|
yes
|
Animation
|
Did Walt Disney create Mickey Mouse?
|
yes_statement
|
"walt" disney "created" mickey mouse.. mickey mouse was "created" by "walt" disney.
|
https://www.cnn.com/2017/11/18/entertainment/mickey-mouse-fun-facts-trivia-trnd/index.html
|
Mickey Mouse's history explained in 6 facts | CNN
|
6 Mickey Mouse facts you probably didn’t know
[Photo gallery: "Mickey Mouse and some of his most iconic moments" (image credits: Walt Disney Productions; Robert Winkler Productions; Walt Disney Pictures; Disney Television Animation; Gamma-Keystone, Hulton Archive, and Image Group LA via Getty Images). Captions: Mickey Mouse first debuted in "Steamboat Willie" on November 18, 1928; "The Opry House" (1929) was the first time Mickey wore white gloves; Mickey first spoke, exclaiming "Hot dogs!", in "The Karnival Kid" (1929); before Mickey, Walt Disney made Oswald the Lucky Rabbit, the main character in the 1927 film "The Ocean Hop"; "The Band Concert" (1935) was the first color Mickey short; Walt Disney showcases Mickey Mouse; "Fantasia" brought together classical music and animation; "The Simple Things" (1953) was the last regular installment of the Mickey Mouse film series; a person in a Mickey Mouse costume at the gate of the Magic Kingdom at Disneyland, Anaheim, California, circa 1955; Donald Duck, one of Mickey's most loyal friends, in "Mickey's Christmas Carol" (1983); "House of Mouse", an animated TV show that aired in 2001; while there was never a wedding in any film, Disney decided in the studio that Mickey and Minnie already were happily married; D23, the Ultimate Disney Fan Event.]
CNN
—
Most parents would never let rodents near their kids – unless, of course, it’s Mickey Mouse.
Here are six facts about the world’s most iconic critter.
1. He started off as a rabbit
Before Walt Disney created Mickey Mouse, he made Oswald the Lucky Rabbit. But in a dispute with his business partner at Universal, Disney lost the rights to Oswald. The loss of his first character inspired the birth of the Mouse. If you look at the two characters, you can see the resemblance. Red shorts, big ears and wide eyes sound familiar?
2. He’s married to Minnie
Yes, married. While there was never a wedding in any film, Disney decided in the studio that the two mice already were happily hitched. Like any loving couple would want, Mickey and Minnie shared their big screen debut together in “Steamboat Willie” in 1928. Every year on November 18, they get to celebrate their birthdays together. How romantic is that?
Disney Television Animation
3. He’s silent for 8 films and then exclaims, ‘Hot dogs!’
Mickey Mouse is clearly a huge fan of hot dogs. He chose to reveal that to the world in 1929 in his ninth film, “The Karnival Kid,” and even did a hot dog dance. Sure, he had laughed and squealed before, but he didn’t show us he could utter words until this film.
Walt Disney Productions
4. His magic turned kids into stars
Remember how Britney Spears and Justin Timberlake got their big career breaks? Of course, the magic of the Mouse had something to do with their success. The child sensations starred in the revival of the 1950s “Mickey Mouse Club.”
Zuma Press
5. He doesn’t wear white gloves for fashion
Mickey’s white gloves actually help distinguish his hands from the rest of his body. The first time we see him in the famous accessory is in the cartoon, “The Opry House,” in 1929.
Walt Disney Productions
6. He’s frequently a write-in candidate in elections
Unfortunately, votes for Mickey Mouse usually end up in the trash, along with those for Donald Duck. Voters are allowed to dream big though, right?
|
[Photo gallery excerpt, "Mickey Mouse and some of his most iconic moments": a person in a Mickey Mouse costume at the gate of the Magic Kingdom at Disneyland, Anaheim, California, circa 1955 (Hulton Archive/Archive Photos/Getty Images); Donald Duck, one of Mickey's most loyal friends, in "Mickey's Christmas Carol" (1983) (Walt Disney Productions); "House of Mouse", an animated TV show that aired in 2001 (Disney Television Animation); while there was never a wedding in any film, Disney decided in the studio that Mickey and Minnie already were happily married (Disney Television Animation); D23, the Ultimate Disney Fan Event, brings together all the worlds of Disney under one roof for three packed days of presentations, pavilions, experiences, concerts, sneak peeks, shopping, and more (Image Group LA/Disney via Getty Images).]
CNN
—
Most parents would never let rodents near their kids – unless, of course, it’s Mickey Mouse.
Here are six facts about the world’s most iconic critter.
1. He started off as a rabbit
Before Walt Disney created Mickey Mouse, he made Oswald the Lucky Rabbit. But in a dispute with his business partner at Universal, Disney lost the rights to Oswald. The loss of his first character inspired the birth of the Mouse. If you look at the two characters, you can see the resemblance. Red shorts, big ears and wide eyes sound familiar?
2. He’s married to Minnie
Yes, married. While there was never a wedding in any film, Disney decided in the studio that the two mice already were happily hitched. Like any loving couple would want, Mickey and Minnie shared their big screen debut together in “Steamboat Willie” in 1928. Every year on November 18, they get to celebrate their birthdays together. How romantic is that?
Disney Television Animation
3.
|
yes
|
Animation
|
Did Walt Disney create Mickey Mouse?
|
yes_statement
|
"walt" disney "created" mickey mouse.. mickey mouse was "created" by "walt" disney.
|
https://insidethemagic.net/2022/04/walt-disney-lied-creating-mickey-mouse-kc1/
|
Walt Disney Actually Lied About Creating Mickey Mouse - Inside the ...
|
Walt Disney Actually Lied About Creating Mickey Mouse
Anyone who is a Disney fan loves Walt Disney and Mickey Mouse. The duo went on to be household names of the Walt Disney Company. But what if we told you that Walt Disney wasn’t actually the one who created Mickey Mouse?
Animator Ub Iwerks was a Kansas City native known for some of the most iconic scenes of all time, including those in Disney’s Mary Poppins and Sleeping Beauty, as well as Alfred Hitchcock‘s The Birds. But what Iwerks doesn’t get credit for is creating the one and only Mickey Mouse.
That’s right, Kansas City animator Ub Iwerks is the one who designed Mickey Mouse in 1928 and single-handedly animated the first Mickey cartoon in Hollywood. That means all of those stories about how Mickey was inspired by a pet mouse that Walt Disney had in Kansas City at the Laugh-O-Gram Studios, or that Disney came up with the idea of Mickey Mouse while on a train from New York to California, are, in fact, false.
According to reports, Iwerks’s granddaughter Leslie Iwerks, said “[Ub Iwerks] said that it was not Walt creating the character on a train… So that was a very different story than the Disney company had put out or that Walt started telling after Mickey became successful.”
The real story of how Mickey Mouse was created is a simple one – the iconic mouse was born during an extremely tense, stressful moment. Walt Disney had just lost the rights to his first hit character, Oswald the Lucky Rabbit, and all of his animators had abandoned him. Everyone except the Oswald co-creator, Ub Iwerks, who is the real person behind Mickey Mouse’s creation.
But because Disney took the credit of creating Mickey Mouse, and did not give credit to Iwerks, it led to the friendship ending and Iwerks leaving Walt Disney Studios to start his own animation studio, citing “personal differences with Walt.”
Did you know that Walt Disney didn’t actually create Mickey Mouse single-handedly? Let us know in the comments below.
|
Walt Disney Actually Lied About Creating Mickey Mouse
Anyone who is a Disney fan loves Walt Disney and Mickey Mouse. The duo went on to be household names of the Walt Disney Company. But what if we told you that Walt Disney wasn’t actually the one who created Mickey Mouse?
Animator Ub Iwerks was a Kansas City native known for some of the most iconic scenes of all time, including those in Disney’s Mary Poppins and Sleeping Beauty, as well as Alfred Hitchcock‘s The Birds. But what Iwerks doesn’t get credit for is creating the one and only Mickey Mouse.
That’s right, Kansas City animator Ub Iwerks is the one who designed Mickey Mouse in 1928 and single-handedly animated the first Mickey cartoon in Hollywood. That means all of those stories about how Mickey was inspired by a pet mouse that Walt Disney had in Kansas City at the Laugh-O-Gram Studios, or that Disney came up with the idea of Mickey Mouse while on a train from New York to California, are, in fact, false.
According to reports, Iwerks’s granddaughter Leslie Iwerks, said “[Ub Iwerks] said that it was not Walt creating the character on a train… So that was a very different story than the Disney company had put out or that Walt started telling after Mickey became successful.”
The real story of how Mickey Mouse was created is a simple one – the iconic mouse was born during an extremely tense, stressful moment. Walt Disney had just lost the rights to his first hit character, Oswald the Lucky Rabbit, and all of his animators had abandoned him. Everyone except the Oswald co-creator, Ub Iwerks, who is the real person behind Mickey Mouse’s creation.
But because Disney took the credit of creating Mickey Mouse, and did not give credit to Iwerks, it led to the friendship ending and Iwerks leaving Walt Disney Studios to start his own animation studio, citing “personal differences with Walt.”
Did you know that Walt Disney didn’t actually create Mickey Mouse single-handedly?
|
no
|
Animation
|
Did Walt Disney create Mickey Mouse?
|
yes_statement
|
"walt" disney "created" mickey mouse.. mickey mouse was "created" by "walt" disney.
|
https://nypost.com/2018/06/30/walt-disney-stole-the-idea-for-mickey-mouse-off-his-friend/
|
Walt Disney stole the idea for Mickey Mouse off his friend
|
Walt Disney gets sole credit for history's most famous mouse, but there's more to the story.
Getty Images
According to Walt Disney, the idea for Mickey Mouse suddenly popped into his head on a 1928 cross-country train ride “when the business fortunes of my brother Roy and myself were at [their] lowest ebb,” as he wrote in 1948.
Nice story. It’s become part of American lore. But it’s not true.
In reality, Mickey Mouse was created by an animator named Ub Iwerks — sketched in March 1928 on an ordinary piece of two-hole punch paper in less than an hour.
Iwerks has been largely forgotten by the general public, his place in creating the Disney brand downplayed. Walt made sure of that, says a new book, “A Mouse Divided” by Jeff Ryan (Post Hill Press), out Tuesday.
“After [Walt and Ub’s] acrimonious breakup, Walt started telling a story that he had made up Mickey solo, leaving Iwerks out of the equation,” Ryan tells The Post. “As he kept adding to it, people began to realize it wasn’t true. Walt knew what audiences wanted wasn’t the unglamorous truth but a legend, a myth.”
Walt and Ub, whose full name is Ubbe (pronounced “oob”) Iwwerks, started out as best friends. The two men met in Kansas City in 1919 while working at an art studio. They soon launched their own animation venture, producing cartoon shorts that were screened before feature films.
Their biggest success was Oswald the Lucky Rabbit, but the men lost control of the character after a disagreement with a distributor.
A replacement was needed.
Walt and Ub sat down and began brainstorming. Ub drew a horse, a cow, a frog and a dog, but none worked. Finally, Walt suggested a mouse.
Ub went to work and soon filled a piece of paper, divided into six panels, with various versions. One looked more rat-like, with a long, thin snout. Others were dressed in a shirt and necktie. Another was a female, with dramatic eyelashes and a skirt. The final choice, circled in Ub’s blue pencil, shows a crude version of Mickey with the familiar silhouette and two-button pants.
Iwerks soon got to work on Mickey’s first animated short, a May 1928 tribute to Charles Lindbergh called “Plane Crazy.” The animator drew every frame himself, cranking out an unheard-of 700 illustrations a day.
The film was shown at a single Hollywood theater and failed to secure a distributor. Mickey’s follow-up, “The Gallopin’ Gaucho,” also didn’t get picked up for theaters.
It wasn’t until November 1928’s seven-minute “Steamboat Willie,” the first Mickey cartoon with synchronized sound, that the character took off. After it was first screened for Walt on a bed sheet hung on the wall inside the Disney studio, the studio head declared, “This is it! We’ve got it.”
“Steamboat Willie” secured a two-week engagement at New York’s Colony Theater and became an immediate hit. It was so popular that it played before and after the feature film. Celebrity Productions picked up the national distribution rights.
Several more shorts followed. Audiences continued to adore the lovable little rodent. Mickey Mouse, “in a few months’ time, has become a star and worth a star’s billing,” film critic C.A. Lejeune wrote in 1929.
Mickey even blew up overseas, becoming so popular that a 1931 Nazi publication felt the need to condemn him as “filthy, dirt-caked vermin.”
Ub Iwerks (Alamy Stock Photo)
But Ub and Walt’s relationship had begun to fray. Iwerks chafed under the bullying Disney, who treated him less like a partner and more like an employee.
According to Iwerks’ wife, the two men were out to lunch one day in 1930 when a young Mickey fan approached them. Walt asked Ub to whip up a quick sketch for the lad and promised to sign it.
“Draw your own goddamn Mickey,” Ub shot back.
So when Iwerks was offered his own animation studio in 1930, he bolted. For his 20 percent stake in the Disney studio, Iwerks got just $3,000.
Iwerks Studio began producing cartoons featuring new creations, including Flip the Frog. After Walt got wind of Iwerks’ new character, he commissioned a “Silly Symphonies” short with his own cartoon frogs. That production beat Iwerks’ cartoon to theaters by three weeks.
And when Iwerks tried playing Walt’s devious games, he couldn’t compete.
Clarence Nash was a traveling entertainer and impressionist, who Disney invited to do voice work after hearing Nash’s popular bit about a duck reciting “Mary Had a Little Lamb.”
Iwerks also lured Nash to voice a cartoon duck, but technical problems scuttled the recording. In the meantime, Nash phoned Disney and told him what Iwerks was planning. Walt ordered the actor “not to do a damned thing for [Iwerks].”
And that’s how Disney got Donald Duck.
Iwerks was a brilliant animator, but without Walt’s storytelling, his cartoons ultimately fell flat.
His studio went belly up in 1940 and Iwerks put “aside his manque pride” and wrote Walt a letter, seemingly seeking reconciliation. Iwerks soon found himself working for Disney again. It’s unclear if Walt simply took pity on him or if the awkwardness between the men had faded — although “it’s very probable,” said animator Grim Natwick, the two never went back to being friends.
Iwerks worked in various technical capacities at Disney for years, this time as a rank-and-file employee, not a partner.
As Iwerks faded into obscurity, his creation continued to explode — first in mountains of merchandise, then with television shows, theatrical films, a daily comic strip, a theme park and a star on the Hollywood Walk of Fame. Mickey Mouse was the first animated character to receive that honor.
Iwerks died in 1971, his legacy diminished by the not-so-wonderful world of Disney.
|
Walt Disney gets sole credit for history's most famous mouse, but there's more to the story.
Getty Images
According to Walt Disney, the idea for Mickey Mouse suddenly popped into his head on a 1928 cross-country train ride “when the business fortunes of my brother Roy and myself were at [their] lowest ebb,” as he wrote in 1948.
Nice story. It’s become part of American lore. But it’s not true.
In reality, Mickey Mouse was created by an animator named Ub Iwerks — sketched in March 1928 on an ordinary piece of two-hole punch paper in less than an hour.
Iwerks has been largely forgotten by the general public, his place in creating the Disney brand downplayed. Walt made sure of that, says a new book, “A Mouse Divided” by Jeff Ryan (Post Hill Press), out Tuesday.
“After [Walt and Ub’s] acrimonious breakup, Walt started telling a story that he had made up Mickey solo, leaving Iwerks out of the equation,” Ryan tells The Post. “As he kept adding to it, people began to realize it wasn’t true. Walt knew what audiences wanted wasn’t the unglamorous truth but a legend, a myth.”
Walt and Ub, whose full name is Ubbe (pronounced “oob”) Iwwerks, started out as best friends. The two men met in Kansas City in 1919 while working at an art studio. They soon launched their own animation venture, producing cartoon shorts that were screened before feature films.
Their biggest success was Oswald the Lucky Rabbit, but the men lost control of the character after a disagreement with a distributor.
A replacement was needed.
Walt and Ub sat down and began brainstorming. Ub drew a horse, a cow, a frog and a dog, but none worked. Finally, Walt suggested a mouse.
Ub went to work and soon filled a piece of paper, divided into six panels, with various versions. One looked more rat-like, with a long, thin snout.
|
no
|
Animation
|
Did Walt Disney create Mickey Mouse?
|
yes_statement
|
"walt" disney "created" mickey mouse.. mickey mouse was "created" by "walt" disney.
|
https://americanhistory.si.edu/blog/mickey-mouse-turns-90
|
Mickey Mouse turns 90 | National Museum of American History
|
Mickey Mouse turns 90
It is hard to believe, but Mickey Mouse is celebrating his 90th birthday this year. For an old mouse, he still looks pretty spry! One of the world’s most universally recognized and enduring personalities, Mickey Mouse sailed into our lives on November 18, 1928, in the animated black-and-white film short Steamboat Willie that premiered at the Colony Theatre in New York City. This was a turning point in the history of animation, as Walt Disney introduced the new technique of “synchronized sound”—movements on the screen corresponded with the music and sound effects.
These are replica cels of scenes from “Steamboat Willie.” A cel, short for celluloid, is a transparent sheet on which objects are drawn and painted. Cels are used in the production of an animated film or cartoon. Gift of the Walt Disney Company through Roy E. Disney, Vice Chairman and Michael D. Eisner, Chairman
But did you know Mickey was not Walt Disney’s first cartoon character, nor was Steamboat Willie the first film made starring Mickey Mouse? In 1923, Walt and his brother Roy founded a small animation studio in Hollywood. Disney landed a deal with Universal Pictures through a distributor, creating a series of funny animal cartoons. One of his creations, Oswald the Lucky Rabbit, became an overnight sensation. The success of Oswald encouraged Disney to ask for a raise, but instead the distributor claimed Oswald as its own. Disney was out of a job.
Disappointed but not deterred, Disney, along with his friend and fellow animator Ub Iwerks, co-created a new cartoon character: Mickey Mouse. There are multiple stories about how Disney and Iwerks chose the name “Mickey” for their new character. One story is that the men originally chose the name Mortimer, but that Disney’s wife convinced him to change the name to Mickey. An arguably more believable story is that the men based the mouse on a wooden toy, patented in 1926 by Rene D. Grove for the Performo-Toy Co., Inc., that had the name “Micky” written in a red circle across its chest. Lesson learned from his experience with Oswald, Disney promptly registered his character with the U.S. Patent Office.
In May 1928, Disney produced his first silent cartoon short, Plane Crazy, starring his new anthropomorphic character, Mickey Mouse. The cartoon was not well received by the studio, so it was put aside. Six months later, Mickey Mouse finally made his public debut in the black-and-white film short Steamboat Willie.
One of six original story sheets created by Disney and Ub Iwerks for their first cartoon, “Plane Crazy,” featuring Mickey Mouse. This 9x12 sketch is drawn using graphite and red and blue colored pencil. Courtesy of Steve Geppi of Geppi's Entertainment Museum, Baltimore, Maryland.
The premiere of Steamboat Willie marked a breakthrough moment in animation history not just for the character, but for the introduction of sound. The film lasted a mere seven minutes and the plot was simple. Mickey is a deckhand on a steamship who causes trouble and chaos for the Captain. Minnie Mouse makes her debut when Mickey plucks her off the riverbank with a crane and drops her on the boat. Using makeshift instruments found aboard, such as garbage cans, pots and pans, barrels, and washboards, Mickey serenades his sweetheart, Minnie.
Four production drawings from "Steamboat Willie." These drawings were the prototypes from which the cels were created. Gift of the Walt Disney Company through Roy E. Disney, Vice Chairman and Michael D. Eisner, Chairman
The cartoon’s major innovation was synchronized sound—something we now take for granted. For the first time the soundtrack corresponded to the actions on the screen with the characters acting in cue with the voices and music. The music for the cartoon was provided by a 17-piece orchestra, including a harmonica player and three sound-effects men. While we cannot be certain, most of the animation was probably done by Iwerks, under the close supervision of Disney, who voiced all the characters. Steamboat Willie was a sensation after its premiere in New York City, and Mickey began to achieve worldwide recognition. Today, his likeness is one of the most widely used images for products and advertisements.
Over the years, Mickey Mouse has gone through several transformations to his physical appearance and personality. In his early years, the impish and mischievous Mickey looked more rat-like, with a long pointy nose, black eyes, a smallish body with spindly legs and a long tail. Parents wrote in expressing dismay at Mickey’s antics in the cartoons and complained that Mickey was no role model for children. Fred Moore, a Disney animator, stepped in to refine Mickey’s physical image and his character. The change was gradual but significant; Mickey’s eyes were enlarged and pupils were added to make him more expressive and lifelike. His ears became rounder and more pronounced, his nose was shortened, and his physique took on a short, stocky build—more youthful and childlike. More importantly, Mickey dropped his insolent attitude and became a happy, funny, polite, and kindhearted mouse—a much more acceptable role model for his biggest fans: children. The rest is history. Today, Mickey Mouse is a universal and much-loved figure that is the heart and soul of the Disney organization.
|
In 1923, Walt and his brother Roy founded a small animation studio in Hollywood. Disney landed a deal with Universal Pictures through a distributor, creating a series of funny animal cartoons. One of his creations, Oswald the Lucky Rabbit, became an overnight sensation. The success of Oswald encouraged Disney to ask for a raise, but instead the distributor claimed Oswald as its own. Disney was out of a job.
Disappointed but not deterred, Disney, along with his friend and fellow animator Ub Iwerks, co-created a new cartoon character: Mickey Mouse. There are multiple stories about how Disney and Iwerks chose the name “Mickey” for their new character. One story is that the men originally chose the name Mortimer, but that Disney’s wife convinced him to change the name to Mickey. An arguably more believable story is that the men based the mouse on a wooden toy, patented in 1926 by Rene D. Grove for the Performo-Toy Co., Inc., that had the name “Micky” written in a red circle across its chest. Lesson learned from his experience with Oswald, Disney promptly registered his character with the U.S. Patent Office.
In May 1928, Disney produced his first silent cartoon short, Plane Crazy, starring his new anthropomorphic character, Mickey Mouse. The cartoon was not well received by the studio, so it was put aside. Six months later, Mickey Mouse finally made his public debut in the black-and-white film short Steamboat Willie.
One of six original story sheets created by Disney and Ub Iwerks for their first cartoon, “Plane Crazy,” featuring Mickey Mouse. This 9x12 sketch is drawn using graphite and red and blue colored pencil. Courtesy of Steve Geppi of Geppi's Entertainment Museum, Baltimore, Maryland.
The premiere of Steamboat Willie marked a breakthrough moment in animation history not just for the character, but for the introduction of sound.
|
yes
|
Festivals
|
Did Woodstock festival promote peace and love?
|
yes_statement
|
"woodstock" "festival" "promoted" "peace" and "love".. "peace" and "love" were "promoted" at "woodstock" "festival".
|
https://about.usps.com/newsroom/national-releases/2019/0808-woodstock-rocks-on-forever.htm
|
USPS commemorates iconic music festival with Woodstock stamps ...
|
Woodstock Rocks On Forever
NEW YORK – The U.S. Postal Service celebrates the 50th anniversary of the Woodstock Festival with a colorful Woodstock Forever stamp designed to represent the peace and music of the festival. The stamp was dedicated today at a First Day of Issue event held at the Metropolitan Museum of Art in New York City.
“Woodstock was the most famous rock festival in history,” said Kevin McAdams, vice president, Delivery and Retail Operations, U.S. Postal Service. “The Postal Service commemorates the 50th anniversary of Woodstock by issuing a festive Forever stamp as we continue to remember significant events of the ’60s.”
“It’s an honor and an inspiration to be commemorated by the Postal Service. The USPS Woodstock Forever Stamp is an official acknowledgment of something we have felt for 50 years: Woodstock is ‘Forever,’” said Rosenman.
Lang also shared his thanks to the Postal Service “for helping to deliver Peace, Love and Music.”
Background
In August 1969, approximately 500,000 people gathered for the Woodstock Festival in the small farming community of Bethel, NY. Woodstock was the most famous rock festival in history and a dramatic expression of the youth counterculture of the 1960s. Promoted as “Three Days of Peace and Music,” the Woodstock festival came to symbolize a generation.
Music business promoters and entrepreneurs – Lang, Rosenman, Artie Kornfeld, and John Roberts – met and discussed the idea for a unique music festival in January 1969. To bring it about, they formed a company, Woodstock Ventures. The four producers promoted the festival as a weekend gathering of the younger generation away from the hassles of everyday life. Previous large concerts had typically attracted, on the high side, an audience of tens of thousands. But for their one-of-a-kind Woodstock festival, the promoters hoped to draw 50,000 people and, as a precaution, drew up plans to accommodate up to 100,000. For the audience, it was going to be three days away from civilization, which meant the promoters would have to provide campsites, food, toilets, medical care, security and other necessities for living together for the entire weekend.
The festival featured more than 30 performers, an unprecedented assembly of musical talent. Some, such as Joan Baez; The Band; The Grateful Dead; The Who; and Blood, Sweat & Tears, were already well known. Others, including folk singer and acoustic guitarist Richie Havens, guitarist Carlos Santana, and British singer Joe Cocker essentially made their national debuts. Jimi Hendrix performed an electrifying rendition of “The Star-Spangled Banner” that became legendary.
An Academy Award-winning documentary film by Michael Wadleigh and a popular song about the festival written by Joni Mitchell extended the communal spirit of the “Woodstock experience” to an audience of millions.
Much more than just a massive outdoor concert, the Woodstock festival was a defining event that promoted peace and love through music.
News of the stamp is being shared on social media using the hashtags #WoodstockStamps and #PeaceForeverStamps. Followers of the Postal Service’s Facebook page can view the ceremony live at facebook.com/usps.
The Woodstock (50th Anniversary) stamp is being issued as a Forever stamp, meaning it will always be equal in value to the current First-Class Mail 1-ounce price.
Stamp Art
The stamp art features an image of a dove along with the words “3 Days of Peace and Music,” evoking the original promotional poster for the festival. In the iconic 1969 poster, designed by graphic artist Arnold Skolnick, the dove was perched on the neck of a guitar. In the stamp art, the words are stacked in the background in brilliant colors along with the year 1969, USA, and Forever. The white dove stands in the foreground. Art director Antonio Alcalá designed the stamp.
Postal Products
Customers may purchase stamps and other philatelic products through The Postal Store at usps.com/shop, by calling 800-STAMP24 (800-782-6724), by mail through USA Philatelic, or at Post Office locations nationwide. Forever stamps will always be equal in value to the current First-Class Mail 1–ounce price. A video of the ceremony will be available on facebook.com/usps.
Information on ordering first-day-of-issue postmarks and covers is at usps.com/shop under “Collectors.”
The Postal Service receives no tax dollars for operating expenses and relies on the sale of postage, products and services to fund its operations.
|
500,000 people gathered for the Woodstock Festival in the small farming community of Bethel, NY. Woodstock was the most famous rock festival in history and a dramatic expression of the youth counterculture of the 1960s. Promoted as “Three Days of Peace and Music,” the Woodstock festival came to symbolize a generation.
Music business promoters and entrepreneurs – Lang, Rosenman, Artie Kornfeld, and John Roberts – met and discussed the idea for a unique music festival in January 1969. To bring it about, they formed a company, Woodstock Ventures. The four producers promoted the festival as a weekend gathering of the younger generation away from the hassles of everyday life. Previous large concerts had typically attracted, on the high side, an audience of tens of thousands. But for their one-of-a-kind Woodstock festival, the promoters hoped to draw 50,000 people and, as a precaution, drew up plans to accommodate up to 100,000. For the audience, it was going to be three days away from civilization, which meant the promoters would have to provide campsites, food, toilets, medical care, security and other necessities for living together for the entire weekend.
The festival featured more than 30 performers, an unprecedented assembly of musical talent. Some, such as Joan Baez; The Band; The Grateful Dead; The Who; and Blood, Sweat & Tears, were already well known. Others, including folk singer and acoustic guitarist Richie Havens, guitarist Carlos Santana, and British singer Joe Cocker essentially made their national debuts. Jimi Hendrix performed an electrifying rendition of “The Star-Spangled Banner” that became legendary.
An Academy Award-winning documentary film by Michael Wadleigh and a popular song about the festival written by Joni Mitchell extended the communal spirit of the “Woodstock experience” to an audience of millions.
Much more than just a massive outdoor concert, the Woodstock festival was a defining event that promoted peace and love through music.
|
yes
|
Festivals
|
Did Woodstock festival promote peace and love?
|
yes_statement
|
"woodstock" "festival" "promoted" "peace" and "love".. "peace" and "love" were "promoted" at "woodstock" "festival".
|
https://en.wikipedia.org/wiki/Woodstock_%2794
|
Woodstock '94 - Wikipedia
|
Woodstock '94 was an American music festival held in 1994 to commemorate the 25th anniversary of the original Woodstock festival of 1969.[1][2] It was promoted as "2 More Days of Peace and Music". The poster used to promote the first concert was revised to feature two doves perched on the neck of an electric guitar, instead of the original one dove on an acoustic guitar.
The 1994 concert was scheduled for the weekend of August 13–14,[3] with a third day (Friday, August 12) added later. Tickets to the festival cost $135 each.[4] The weather was hot and dry on Friday but by early Saturday afternoon storms rolled in. The rains turned much of the field into mud.[1][2]
The event took place on Winston Farm, just west of Saugerties, New York, about 100 miles (160 km) north of New York City and 70 miles (110 km) northeast of the original 1969 festival site near Bethel, which had 12,000 on hand to celebrate the silver anniversary.[5]
Though only 164,000 tickets were sold,[6] the crowd at Woodstock '94 was estimated at 350,000.[7] The crowd was larger than the organizers had planned for, and by the second night many of the event's policies had become logistically unenforceable. The main problems concerned security, tracking when attendees arrived at, left, or returned to the site, and the official food and beverage vendor policy, which initially barred attendees from bringing in their own food, drinks, and above all alcohol. With the site mostly enclosed by simple chain-link fences, many attendees had little difficulty entering with beer and other banned items. The security and gate staff could not adequately monitor the growing number of people entering and exiting while also maintaining safety, security, and a peaceful atmosphere.
Three deaths at the festival were confirmed. An unidentified 45-year-old male died on Saturday of suspected diabetes complications. On Sunday, 20-year-old Edward Chatfield died of a ruptured spleen. Organizers also confirmed 5,000 were treated at medical tents and 800 were taken to hospitals.[8]
Jackyl took the stage early on Friday. Lead singer Jesse James Dupree took the stage with a bottle of whiskey and poured alcohol onto the crowd. He then started smoking marijuana and on a close up he shotgunned the joint into the camera, with copious amounts of smoke filling the screens and the stage, at which point the crowd cheered. Dupree then lit a stool on fire onstage and cut it up with a chainsaw. He also pulled out a rifle and started firing into the air but cut his hand, which started bleeding. As Dupree wiped his forehead, a streak of blood was left across his head.[13]
Aphex Twin's performance was cut short when promoters "disconnected" him mid-show for signing a fake name on a contract, which would forfeit PolyGram's rights to his performance.[14]
Blind Melon frontman Shannon Hoon took the stage in his girlfriend's dress and appeared to be tripping on acid during the band's performance and post-show interview with MTV.[16][17]
Nine Inch Nails had the largest crowd density at the event, overshadowing many of the other performers. Just before going on stage, they wrestled each other in the mud and went on to perform completely wet and covered in mud.[18] In the post-performance interview, Nine Inch Nails frontman Trent Reznor claimed he thought his band's performance was "terrible" due to technical difficulties on stage.[19] Reznor admitted that while he disliked playing at such a large show, it was done for the money: "To be quite frank, it's basically to offset the cost of the tour we're doing right now."[20] Their performance of "Happiness in Slavery" at the festival won the Grammy Award for Best Metal Performance in 1996.[21]
Aerosmith's Joey Kramer, Joe Perry, and Steven Tyler were all attendees at the original Woodstock festival in 1969.[22][23] Aerosmith performed around 3 to 4 a.m., right after an extensive fireworks display from Metallica. Tyler said on the liner notes for the album during their set: "It rained like a cow pissing on a flat rock".
During Primus' performance of the song "My Name Is Mud", the audience responded by pelting the band with mud, which singer and bassist Les Claypool ended by telling the crowd that "when you throw things on stage, it's a sign of small and insignificant genitalia". Twenty years after the show, Claypool claimed to still have some of the mud stuck in the bass cabinets he used at the event.[24] Another memorable moment from Primus' set at Woodstock '94 was when Jerry Cantrell, the guitarist and vocalist of Alice in Chains, joined Primus onstage during their performance of the song "Harold of the Rocks".
Woodstock '94 has also been referred to as Mudstock or Mudstock '94, partly due to the rainy weather that resulted in mud pits and the aforementioned performances of Nine Inch Nails and Primus. This culminated with Green Day's performance, during which guitarist and lead vocalist Billie Joe Armstrong started a mudfight with the crowd during their song "Paper Lanterns". In the documentary VH1 Behind the Music: Green Day, bassist Mike Dirnt was mistaken for one of the fans jumping on stage and was spear-tackled by a security guard, knocking out one of his teeth. It was this incident that caused Dirnt to need emergency orthodontia. A gag order was put in place regarding this incident. Due to the now-infamous mudfight and Dirnt's injury, Woodstock quickly propelled Green Day's then recently released album Dookie into success.
After being injured in a traffic accident in 1966 and his subsequent disappearance from the popular music scene, Bob Dylan declined to go to the original Woodstock Festival of 1969, even though he lived in the area at the time. He set off for the Isle of Wight Festival the day the Woodstock festival started and performed at Woodside Bay on August 31, 1969. Dylan, however, did accept an invitation to perform at Woodstock '94 and was introduced with the phrase: "We waited twenty-five years to hear this. Ladies and gentlemen, Mr. Bob Dylan".[25] Although he was an hour and a half late to his performance,[26] his set was considered one of the greater moments of the festival by various critics and represented the beginning of another new phase in his lengthy career.[citation needed] Uncharacteristically for the time, Dylan played lead guitar in a more rock-oriented electric set.
Guns N' Roses were asked to appear at the festival but the band declined due to internal problems, as well as feeling the concert was too "commercial." However, lead guitarist Slash made an appearance with Paul Rodgers.
The Woodstock '94 festival was shot using the early analog HD 1125-line Hi-Vision system in a 16:9 aspect ratio. The footage would be used for later home packages and a planned theatrical documentary about the event. The HD footage was mixed live into standard definition 4:3 NTSC for cable TV broadcast.[36]
The Woodstock '94 festival was broadcast live on MTV via pay-per-view in the U.S. and Canada. In the UK, audio from the event was broadcast on BBC Radio 1.
Highlights from the concert were later released as a double album set on November 4, 1994 on CD and cassette. The film about the event, directed by Bruce Gowers, was also released direct-to-video the same year on VHS and Laserdisc. Currently, there is no DVD, Blu-ray, or digital media release.
Since the release of the official album, various recordings of songs performed have been released officially; however, complete performances of entire sets have only been released unofficially as bootlegs. In 2019, a limited edition vinyl-only release of Green Day's performance was released for Record Store Day, making this one of the first official releases of an entire Woodstock '94 set.
^Cultice, Joseph (March 5, 2019). "MUD, PISS, CATHARSIS: INSIDE NINE INCH NAILS' ICONIC PERFORMANCE AT WOODSTOCK '94". revolver.com. Retrieved June 26, 2021. I remember saying to the guys, it would be great if you guys were like the mud men, like all the crazy kids out there covered in mud. So time passes, and then the stage manager was like, How do we get these guys muddy? They started getting these [ice buckets] and filling them up with mud from around the dressing room trailers. Sitting right across from us while they were doing that were like Henry Rollins and the guys from Alice in Chains. And [Nine Inch Nails] were all like, "Those guys are gonna totally know what we are doing." So in between all that someone found a mud pit at the edge of the stage. So we all got in a sixteen-passenger van and went down to the stage and the band jumped in the mud. It was this big cathartic thing, and then they went onstage.
^Harrington, Richard (August 26, 1994). "AEROSMITH DREAMS ON". washingtonpost.com. Retrieved June 26, 2021. Aerosmith was still a year away -- Tyler was still in Top 40 New York City club bands like the Chain Reaction and William Proud -- and it's at Woodstock he met Joey Kramer, who would become the band's drummer.
^Fisher, Marc (August 14, 1994). "CHAOS RAINS AT WOODSTOCK". washingtonpost.com. Retrieved June 26, 2021. State police spokesman Lt. James O'Donnell said it will take at least 20 to 25 hours to clear the site after Peter Gabriel's last song sometime after midnight Monday morning.
^Goggin, David; Stone, Chris (November 1, 2012). "Epilogue: Woodstock '94—A Major Pro Audio Case History". Audio Recording for Profit. Burlington, Massachusetts: Focal Press. ISBN9780240803869. Friday morning, everybody got up shaking because the downbeat of the festival was at 11 AM. It was like going into battle—very detailed preparations had been made, and now we had to do it. All working personnel were delivered early to the site because the first act was due to start that morning on the North Stage. Twenty acts followed, including unsigned local bands and a "rave" that began at midnight and continued until 6:30 AM the next morning.
^Spevak, Jeff (August 12, 1994). "Revolving stage set to rock 'n' roll". Democrat and Chronicle. Rochester, New York – via newspapers.com. Mostly unsigned, unknown bands from the Saugerties area open at 11 a.m., bearing names such as Lunch Meat and Futu Futu.
^Leopold, Jason (August 11, 1994). "Local band to be part of Woodstock '94". The Daily Item. Port Chester, New York. p. 3 – via Newspapers.com. The ignored: Six unsigned bands - one of them from Westchester. They call themselves Straight Wired, and they're a group of unknowns who only last week played for 100 people in a Hoboken bar.
^Associated Press (August 7, 1994). "Johnny Cash Says He's Out". The New York Times (Press release). The New York Times. Retrieved July 12, 2021. Johnny Cash, the country and western singer, says he has decided to withdraw from Woodstock '94 next weekend because of disagreements with the festival's promoters.
^"Philips Media celebrates music and multimedia at Woodstock '94" (Press release). PR Newswire. July 20, 1994. Retrieved June 26, 2021 – via trconnection.com. The Philips Multimedia Village will feature a variety of interactive experiences, including a multiple-screen, multimedia show that highlights Philips Media's hot new software titles; a 90-station CD-i play tent where visitors can experience those titles hands-on, guided by cyberpunk arcade "gamers"; and the "Todd Pod," where multimedia musician Todd Rundgren will perform five live shows daily.
|
Woodstock '94 was an American music festival held in 1994 to commemorate the 25th anniversary of the original Woodstock festival of 1969.[1][2] It was promoted as "2 More Days of Peace and Music". The poster used to promote the first concert was revised to feature two doves perched on the neck of an electric guitar, instead of the original one dove on an acoustic guitar.
The 1994 concert was scheduled for the weekend of August 13–14,[3] with a third day (Friday, August 12) added later. Tickets to the festival cost $135 each.[4] The weather was hot and dry on Friday but by early Saturday afternoon storms rolled in. The rains turned much of the field into mud.[1][2]
The event took place on Winston Farm, just west of Saugerties, New York, about 100 miles (160 km) north of New York City and 70 miles (110 km) northeast of the original 1969 festival site near Bethel, which had 12,000 on hand to celebrate the silver anniversary.[5]
Though only 164,000 tickets were sold,[6] the crowd at Woodstock '94 was estimated at 350,000.[7] The crowd was larger than the organizers had planned for, and by the second night many of the event's policies had become logistically unenforceable. The main problems concerned security, tracking when attendees arrived at, left, or returned to the site, and the official food and beverage vendor policy, which initially barred attendees from bringing in their own food, drinks, and above all alcohol. With the site mostly enclosed by simple chain-link fences, many attendees had little difficulty entering with beer and other banned items. The security and gate staff could not adequately monitor the growing number of people entering and exiting while also maintaining safety, security, and a peaceful atmosphere.
Three deaths at the festival were confirmed.
|
yes
|
Festivals
|
Did Woodstock festival promote peace and love?
|
yes_statement
|
"woodstock" "festival" "promoted" "peace" and "love".. "peace" and "love" were "promoted" at "woodstock" "festival".
|
https://www.thecollector.com/hippie-counterculture-movement-1960s-1970s/
|
The Counterculture Hippie Movement of the 1960s and 1970s
|
The Counterculture Hippie Movement of the 1960s and 1970s
A new identity was born at the start of the counterculture movement in the late 1960s. This youth movement criticized consumerism, promoted peace, and yearned for individualism. The 1960s and ‘70s revolutionized pop culture and encouraged social reform. This 20-year period was a turning point in history that influenced future decades, and still has an impact on the present day.
The counterculture movement involved youths who rejected mainstream American culture and societal norms. The “American dream” was no longer a goal for this new generation. Prior to the 1950s, the ideal woman was a housewife who cared for the children, cooked, and cleaned the home. Men were expected to find a steady job and be the provider for the family. Counterculture began to boil up in the late 1940s and seeped into the 1950s with the beat movement. This movement involved literary “hipsters” who rejected social norms, often referred to as beatniks.
The beat movement was the foundation of the counterculture movement that emerged in the late 1960s. Beat poetry began in New York City in the 1940s and made its way to San Francisco a decade later. Beatniks focused on topics that clashed with mainstream culture and ideas. These perspectives carried into a slightly younger group in their teens to mid-20s.
In the latter half of the 1960s, San Francisco became a hotspot for tens of thousands of youths who shared the common desire for peace and freedom. Haight-Ashbury was the most notable San Francisco neighborhood that drew in almost 100,000 youths during the summer of 1967, who soon became the heart and soul of the counterculture movement. This summer of youth migration became known as the Summer of Love, which marked the prominence of a movement that would impact decades to come.
1950s Consumerism Fuels Anti-Materialistic Perspectives
1950s family enjoying their new television in the post-war Consumer Era by Doug White, 1956, via New York Historical Society Museum and Library
Consumerism was at an all time high in the 1950s. World War II encouraged production of goods, provided an abundance of jobs, and motivated those on the home front to support their nation by spending. The economy finally felt relief for the first time since the booming age of the Roaring Twenties, before the Great Depression collapsed it all. People were focused on building families, working a steady job, and buying homes. Appliances, cars, and TVs were at the top of consumers’ list to modernize their homes. Additionally, consumer credit became a popular way for people to afford more things.
The counterculture movement rejected most things that were praised by the government. This included consumerism. The hippie-style clothing worn was often hand-me-downs bought at flea markets, yard sales, or second-hand shops. This was a purposeful effort to avoid buying from major brand-name stores and contributing to mainstream consumerist habits. Most of the counterculture movement youths were children of the middle and upper-middle class. They opposed everything that the previous decades were all about: wartime support, materialism, and work.
The Hippie Identity of the Counterculture Movement
Youth International Party gathering with leading Yippie activist Dana Beal (second from right) on stage in front of the White House, via World of Cannabis Museum
Not everyone involved in the counterculture movement was involved in the hippie movement. The two merged together because of matching perspectives. The hippie identity wasn’t actually accepted by hippies themselves at the time. Many preferred to be called a “freak” or “love child.” The term “hippie” was coined by local media outlets in San Francisco.
Hippie stuck as a derogatory identifier of rebellious youths participating in counterculture. It later manifested in a much lighter sense. It is generally no longer viewed as an insult to the modern-day hippie. Individuals who referred to people as hippies in the ‘60s and ‘70s were called “straights.” This phrase referred to anyone who didn’t support the counterculture movement. It described people who followed the traditional and “square” ways of life.
There were a few different types of hippies, including visionaries, freaks and heads, and plastic hippies. Although all youths who identified as love children were against much of the social and political norms of the times, many weren’t activists or protestors. Some groups fit the general description of a hippie but were more politically active and involved in protests. Examples of these groups included the “Diggers” and “Yippies.” Both groups emerged in the latter half of the ‘60s. Yippies stemmed from the Youth International Party. Diggers and Yippies were viewed as radical leftists who were anti-war socialism supporters with anarchist-like points of view.
Man rolling marijuana joints at the Hog Farm Commune in New Mexico by Lisa Law, 1968, via National Museum of American History, Washington DC
Visionary hippies closely resembled the intellectual beatniks of the previous decades. They were the original hippies with anti-conventional values that rejected the ways of the generation before them. The freaks and heads were the hippies who sought freedom through spiritual connections using hallucinogenic drugs, such as lysergic acid diethylamide (LSD). Plastic hippies took on the classic hippie fashion, dabbled in drug use, and enjoyed the atmosphere the hippie movement brought. They didn’t fully resonate with the actual roots of the movement and essentially just scratched the surface of what it meant to be a love child at the time.
Hippies were the baby boomer generation. There was a 14.5% population increase between 1940 and 1950. As a result, tens of millions of individuals came of age in the 1960s and ‘70s. This created a vast, rebellious generation that became the main focus for two decades. As with many youths coming of age, taking on rebelling perspectives and defying the common order wasn’t unheard of. However, the number of youths spread across the nation allowed the counterculture movement to expand exponentially.
Anti-War & the Rejection of Mainstream Society
Anti-Vietnam War march from downtown San Francisco to Golden Gate Park by Lisa Law, 1967, via National Museum of American History, Washington DC
The “American dream” was in full motion for many in the late 1940s and ‘50s. People felt a sense of patriotism. Many supported the first few years of US involvement in the Vietnam War to stop the spread of communism. This was especially apparent for those who lived through the first and second waves of Communist paranoia, known as the Red Scare. Counterculture activists were disappointed in the US government’s involvement in the Vietnam War.
The anti-war movement was a big part of counterculture. Just as Americans were experiencing relief from the Great Depression and the peace of post-WWII, the US entered the Vietnam War. More than two million American men were drafted. Some counterculturists took the opportunity to show their contempt for the war by burning their draft cards. Hippies who were especially against the war were known as “flower children” and advocated for peace and love. The peace sign, created by British artist Gerald Holtom, became an anti-war symbol and iconic representation of the counterculture hippie movement. It was originally designed as a logo for Nuclear Disarmament in 1958.
There were also other movements taking place within the counterculture movement. The Civil Rights Movement waged on from the mid-1950s to the late ‘60s. The Women’s Rights Movement emerged alongside counterculture. People were tired of oppression and discrimination. Youths were yearning for individuality, and many refused to carry on the bad habits of the generations that preceded them.
Counterculture Revolutionizes Pop Culture
Woman at a Love-In gathering at Elysian Park in Los Angeles, California by Lisa Law, 1968, via National Museum of American History, Washington DC
Perhaps one of the counterculture movement’s most significant impacts was its influence on pop culture. Fashion, music, and media were all affected. The iconic styles that emerged from the counterculture movement were bright, flamboyant, and less conventional. Comfort and individuality won out over conservative wear. Twiggy, Cher, and Janis Joplin are just a few women who influenced the fashion scene of the late ‘60s and early ‘70s. Bold colors, patterns, and the free-spirited bohemian aesthetic were in full swing. Part of men’s fashion was heavily influenced by the rock ‘n roll scene that bloomed in the late 1950s. Long hair, bell-bottoms, and vibrant patterns were common among male youths.
Rockabilly, which stemmed from jazz, blues, and gospel from previous decades, had a heavy influence on the counterculture movement. Different subgenres of rock emerged, such as psychedelic, folk, soft, and pop rock. Psychedelic rock fit the counterculture hippie movement scene, making sex, drugs, and rock ‘n roll a common identifier of the ‘60s and ‘70s. These subgenres would influence the punk rock and hair metal scene of the 1980s.
Janis Joplin (center) with Big Brother and The Holding Company band mates by Lisa Law, 1967, via National Museum of American History, Washington DC
Some of the most iconic and influential singers and musicians popped up in the ‘60s and ‘70s. One of the most defining events of the movement was the 1969 Woodstock Music and Art Fair that took place in a muddy farm field in Bethel, New York. Hundreds of thousands of people attended, well over the estimated number. It was an unorganized mess but so successful it became the epitome of the counterculture hippie movement. People traveled far and wide to attend and indulge in music and drugs. More than 150 musicians attended with 32 musical acts. Some of the most notable performers of the time played at the event, such as Janis Joplin, Creedence Clearwater Revival, and Jimi Hendrix.
People gathered at the 1969 Woodstock Music and Art Fair in Bethel, New York, via University of Georgia
Media played a significant role in not only pinning love children and freaks as hippies with a negative connotation but also in romanticizing the movement. After the Vietnam War ended in the mid-1970s, the counterculture movement died down. However, the media continued to idolize the hippie scene. Even today, the hippie movement is often missed by those who desire to live in a more “free” society. However, not everything was as joyous as it seemed.
The large influx of people coming into the Haight-Ashbury neighborhood turned it into a poverty-stricken area that wasn’t well-kept. This led to a lot of crime and changed the scene from a safe haven for artists, intellectuals, and those alike to a dangerous and unsanitary place. The image of peace, love, and freedom from the movement stuck around thanks to the media, but the more bleak truths of the two decades were kept in the shadows.
Memories of the Counterculture Movement Live On
People dancing at the Woodstock Music and Art Fair, 1969, via Woodstock.com
The counterculture movement of the 1960s and ‘70s was arguably one of the most influential time periods in modern American history. A more individualized identity was sought after by coming-of-age outsiders that took over the nation due to the baby boom. Nonconformists emerged and publicly rejected traditional social norms. The anti-war perspective encouraged an idealistic peace and love movement that made the decades somewhat euphoric.
Pop culture was forever changed, with fashion and music taking on revolutionary forms. The bohemian aesthetic is still appreciated and reappears in fashion in waves. The media romanticized the movement so much that it would forever be remembered as a time when people felt the most free, which holds some truth to a certain extent. The defiance of mainstream culture helped push other movements forward, such as the Civil Rights and Women’s Rights Movements. It was truly one of the most captivating and transformative moments in social and cultural history.
By Amy Hayes, BA History w/ English minor. Amy is a contributing writer with a passion for historical research and the written word. She holds a BA in history from Old Dominion University with a concentration in English. Amy grew up in the historic state of Virginia and quickly became fascinated by the intricate details of how people, places, and things came to be. She specializes in topics on American history, Ancient and Medieval England, law, and the environment.
|
Janis Joplin (center) with Big Brother and The Holding Company band mates by Lisa Law, 1967, via National Museum of American History, Washington DC
Some of the most iconic and influential singers and musicians popped up in the ‘60s and ‘70s. One of the most defining events of the movement was the 1969 Woodstock Music and Art Fair that took place in a muddy farm field in Bethel, New York. Hundreds of thousands of people attended, well over the estimated number. It was an unorganized mess but so successful it became the epitome of the counterculture hippie movement. People traveled far and wide to attend and indulge in music and drugs. More than 150 musicians attended with 32 musical acts. Some of the most notable performers of the time played at the event, such as Janis Joplin, Creedence Clearwater Revival, and Jimi Hendrix.
People gathered at the 1969 Woodstock Music and Art Fair in Bethel, New York, via University of Georgia
Media played a significant role in not only pinning love children and freaks as hippies with a negative connotation but also in romanticizing the movement. After the Vietnam War ended in the mid-1970s, the counterculture movement died down. However, the media continued to idolize the hippie scene. Even today, the hippie movement is often missed by those who desire to live in a more “free” society. However, not everything was as joyous as it seemed.
The large influx of people coming into the Haight-Ashbury neighborhood turned it into a poverty-stricken area that wasn’t well-kept. This led to a lot of crime and changed the scene from a safe haven for artists, intellectuals, and those alike to a dangerous and unsanitary place. The image of peace, love, and freedom from the movement stuck around thanks to the media, but the more bleak truths of the two decades were kept in the shadows.
Memories of the Counterculture Movement Live On
People dancing at the Woodstock Music and Art Fair,
|
yes
|
Festivals
|
Did Woodstock festival promote peace and love?
|
yes_statement
|
"woodstock" "festival" "promoted" "peace" and "love".. "peace" and "love" were "promoted" at "woodstock" "festival".
|
https://www.rollingstone.com/music/music-news/woodstock-michael-lang-dead-1281681/
|
Woodstock Impresario Michael Lang Dead at 77 – Rolling Stone
|
Woodstock Impresario Michael Lang Dead at 77
Michael Lang, the concert impresario who helped conceive the landmark, generation-defining 1969 music festival Woodstock, died Saturday night at Sloan Kettering hospital in New York. He was 77.
Michael Pagnotta, a rep for Lang and longtime family friend, confirmed the promoter’s death to Rolling Stone, adding that the cause was a rare form of non-Hodgkin’s lymphoma.
Alongside businessmen John Roberts and Joel Rosenman and music industry promoter Artie Kornfeld, Lang, who had previously promoted the 1968 Pop and Underground Festival in Miami, co-created the Woodstock Music and Art Fair the following year. Famously billed as “Three Days of Peace and Music,” the upstate New York festival drew up to 400,000 people to Max Yasgur’s farm in Bethel, NY from August 15-18, 1969 and featured dozens of rock’s biggest names, including Santana, Creedence Clearwater Revival, the Who, Jimi Hendrix and Crosby, Stills, Nash and Young.
“There’s a moment when Michael Lang changed the world,” the Lovin’ Spoonful frontman John Sebastian tells Rolling Stone. “At Woodstock I was standing next to him when one of his minions way in the distance came running toward the stage and we thought, “This can’t be good.’ He gets to Michael and says, ‘The fence is down. Folks are coming over the top.’ And Mike takes this long look over the whole scenario and almost to himself he says, ‘Well, I guess we now have a free festival.’ It was the original, ‘What could possibly go wrong,’ but he could pivot and see the light.”
Carlos Santana said in a statement to Rolling Stone Sunday, “Michael Lang was a divine architect of unity & harmony. He gave birth to Woodstock, the festival that manifested 3 glorious days of peace & freedom. He will no doubt be orchestrating another celestial event in Heaven. Thank you Maestro. You and Bill Graham are now united in the light of our divinity and are supreme love.”
Lang was only 24 when he helped conceive the festival, which would go on to become a massively influential counterculture touchstone, thanks in part to a documentary on the event released the following year. Over the years, Lang’s name became synonymous with the Woodstock brand, as the promoter helped helm subsequent iterations of the festival in 1994 and 1999. (When Pollstar asked Lang in 2019 what it’s like to be the “Woodstock poster child for eternity,” he replied, “Life is full of experiences, and not everything works out. But you keep trying or nothing works out … That’s always been my attitude.”) A 50th anniversary concert in 2019 was mired in controversy and legal issues and was canceled before it could go on.
Lang, a native New Yorker, moved to Coconut Grove, FL in the late 1960s and opened a head shop. “The climate is perfect, people are into a stimulating variety of artistic things and there was no place for them to get together,” Lang said in author Ellen Sanders’ 1973 book Trips: Rock Life in the Sixties. He applied that same ethos to music festivals, starting with the Pop and Underground Festival in May 1968. The festival, attended by 25,000 people, featured sets by Jimi Hendrix, John Lee Hooker, Chuck Berry and the Mothers of Invention, among others.
After moving back to New York, Lang met Kornfeld, then a vice president of Capitol Records, and started Woodstock Ventures with Roberts and Rosenman. After a series of planned locations fell through, the quartet was famously able to organize the festival at the 600-acre farm of Max Yasgur, a dairy farmer in Bethel, NY immortalized in Crosby, Stills, Nash and Young’s 1970 cover of Joni Mitchell’s “Woodstock.” In 2004, the event earned a spot on Rolling Stone‘s “50 Moments That Changed the History of Rock and Roll.”
“Woodstock came at a really dark moment in America,” Lang told Rolling Stone in 2009. “An unpopular war, a government that was unresponsive, lots of human rights issues — things were starting to edge toward violence for people to make their points. And along came Woodstock, which was this moment of hope.”
“We thought we were all individual, scattered hippies,” David Crosby told Rolling Stone in 2004. “When we got there, we said, ‘Wait a minute, this is a lot bigger than we thought.’ We flew in there by helicopter and saw the New York State Thruway at a dead stop for 20 miles and a gigantic crowd of at least half a million people. You couldn’t really wrap your mind around how many people were there. It had never happened before, and it was sort of like having aliens land.”
“Everybody was crazy,” Joan Baez, who played the festival, told Rolling Stone in 2009. “I guess the collective memories that people have, I have in a sense. It’s the mud and the cops roasting hot dogs and people wandering around in the nude. And the fact that, looking back, it was in fact a huge deal. It was like a perfect storm and I realized that Woodstock was like the eye of the hurricane because it was different. It was this weekend of love and intimacy and attempts at beauty and at caring and at being political.”
The original Woodstock was, in Kornfeld’s words, “a financial disaster.” However, in Michael Wadleigh’s 1970 documentary that captured the fest, Kornfeld and Lang are shown smiling when discussing the festival with a news reporter. “Look at what you got there. You couldn’t buy that for anything,” Lang said.
Graham Nash said in a statement to Rolling Stone Sunday, “This man played an important role in the development of the ‘Musical Festival.’ His part in bringing the Woodstock nation to the forefront of American music is well known and he will be missed by his family and friends.”
After Woodstock, Lang was recruited at the last minute to help assist with what became the infamous Altamont Free Concert in California in December 1969, where audience member Meredith Hunter was stabbed to death during the Rolling Stones’ set. “My opinion of Altamont is that it was a missed opportunity and the result of a lack of planning,” Lang said in a Reddit Q&A in 2014. “It was thrown together at the last minute, it had to move at the last minute, and really wasn’t thought throughout. There really wasn’t any security, and the Hells Angels were pressed into a role they weren’t suited for. And so what could have been a great day of music degenerated into a horror show.”
Lang later became the manager for Rickie Lee Jones and Joe Cocker — the latter delivering one of Woodstock’s legendary performances — and founded Just Sunshine Records, which released albums by Karen Dalton and Betty Davis, among others.
Twenty-five years after the iconic festival, Lang returned to Woodstock — albeit in Saugerties, New York, and not Yasgur’s farm — to produce Woodstock ’94, billed as “2 More Days of Peace and Music.” The 1994 lineup was a “bridge” between the original fest and more contemporary music, Lang said, with original Woodstock acts like Cocker, Santana, Crosby, Stills & Nash, members of the Grateful Dead and the Band, Country Joe McDonald and more joining Red Hot Chili Peppers, Green Day, Aerosmith, Metallica, Nine Inch Nails and Bob Dylan.
While not as historic as its predecessor, the success of the 1994 fest — coupled with its MTV broadcast and the rise of music festivals in general — led Lang and his fellow producers to host another fest five years later. That was Woodstock ’99, which ultimately hewed closer to the chaos of Altamont than the peace and love of the previous two installments.
“My takeaways from Woodstock ’99 are a bit complicated,” Lang said in the Reddit Q&A. “A lot of people had an amazing time. There was lots of amazing music. It was unfortunately an incredibly hot weekend, and being on that air force base where the heat was reflected from that tarmac was really problematic. Without the rain in all that heat was a problem. And frankly, as I said earlier, a lot of the music was kind of angry, and the audience was young and of the same headspace, so I’m a little bit conflicted about Woodstock ’99.”
In the mid-2010s, once the fallout from the maligned Woodstock ’99 had largely dissipated, Lang began envisioning a 50th anniversary festival, with Watkins Glen, New York — site of festivals like 1973’s Summer Jam and Phish’s Magnaball in 2015 — the intended site. The fest’s cross-generational lineup was announced in March 2019, bringing together Jay-Z, Miley Cyrus and the Killers with Woodstock vets like Santana, Dead & Company, David Crosby and John Fogerty, whose Creedence Clearwater Revival was the first band booked for the original Woodstock.
“I was looking forward to seeing how it would get reworked 50 years later,” John Fogerty, who had played Woodstock with Creedence Clearwater Revival, told Rolling Stone. “What the young people would think about it and what the younger artists would think. It’s not every day you get to go back to a 50-year reunion.”
“I thought he bore that burden remarkably well,” Sebastian says of Lang carrying the Woodstock name. “We would do these various Woodstock events, telling stories, and he had that smile – not tension, but a kind of sadness that’s part of knowing about life. I would see that now and then.”
Following the cancelation of Woodstock 50, Lang was asked whether he was worried that the failed fest had tarnished the legacy of the brand. “It’s not something I consider,” Lang told Rolling Stone in 2019. “What we did in 1969 was in 1969 and that’s what has endured and will continue to endure. We’re not going away.”
|
Ellen Sanders’ 1973 book Trips: Rock Life in the Sixties. He applied that same ethos to music festivals, starting with the Pop and Underground Festival in May 1968. The festival, attended by 25,000 people, featured sets by Jimi Hendrix, John Lee Hooker, Chuck Berry and the Mothers of Invention, among others.
After moving back to New York, Lang met Kornfeld, then a vice president of Capitol Records, and started Woodstock Ventures with Roberts and Rosenman. After a series of planned locations fell through, the quartet was famously able to organize the festival at the 600-acre farm of Max Yasgur, a dairy farmer in Bethel, NY immortalized in Crosby, Stills, Nash and Young’s 1970 cover of Joni Mitchell’s “Woodstock.” In 2004, the event earned a spot on Rolling Stone‘s “50 Moments That Changed the History of Rock and Roll.”
“Woodstock came at a really dark moment in America,” Lang told Rolling Stone in 2009. “An unpopular war, a government that was unresponsive, lots of human rights issues — things were starting to edge toward violence for people to make their points. And along came Woodstock, which was this moment of hope.”
“We thought we were all individual, scattered hippies,” David Crosby told Rolling Stone in 2004. “When we got there, we said, ‘Wait a minute, this is a lot bigger than we thought.’ We flew in there by helicopter and saw the New York State Thruway at a dead stop for 20 miles and a gigantic crowd of at least half a million people. You couldn’t really wrap your mind around how many people were there. It had never happened before, and it was sort of like having aliens land.”
“Everybody was crazy,” Joan Baez, who played the festival, told Rolling Stone in 2009. “I guess the collective memories that people have, I have in a sense.
|
yes
|
Festivals
|
Did Woodstock festival promote peace and love?
|
yes_statement
|
"woodstock" "festival" "promoted" "peace" and "love".. "peace" and "love" were "promoted" at "woodstock" "festival".
|
https://www.brandstorytelling.tv/single-post/three-things-i-learned-about-brand-storytelling-from-netflix-s-trainwreck-woodstock-99
|
Three Things I Learned About Brand Storytelling from Netflix's ...
|
Three Things I Learned About Brand Storytelling from Netflix’s “Trainwreck: Woodstock ’99”
Netflix recently released a limited series about the epic failure that was Woodstock 1999, aptly titled Trainwreck. For those of you who don’t remember, Woodstock ’99 was a Millennial reimagining of the legendary music festival from 1969. Watching the series inspired me to take a deep dive into Woodstock mythos, and think up a few ways the ’99 festival can serve as a cautionary tale for brands.
The 1969 festival rocketed Woodstock into the global consciousness as one of the foremost culture brands of its era. And sadly, mismanagement and greed on the part of Michael Lang (initial co-founder) and John Scher (supposedly New Jersey’s most successful concert promoter), the two organizers of the ‘99 event, took a blowtorch to that legacy. But before we get to the trainwreck, let’s talk about the OG Woodstock.
The Woodstock Music and Art Fair was promoted as three days of peace and love meant to build a bridge between rock culture and the movement to end the war in Vietnam. It was equal parts cultural flashpoint and spiritual celebration that kicked off at a time of great upheaval in the United States. In the shadows of the Manson Murders, COINTELPRO, and the civil rights movement, young people were pushing against the status quo in droves. And so, on August 15th 1969, roughly 500,000 people made the pilgrimage to Max Yasgur’s idyllic 600 acre dairy farm in Bethel, NY to bask in the magic of Joan Baez, Jimi Hendrix, Santana, Ravi Shankar, Richie Havens, Janis Joplin and a dream line up of Rock n’ Roll royalty. It was the most famous free concert of all time. There were no acts of violence. No sexual assaults. Woodstock was an almost utopian playground, where masses of people could express themselves freely, without fear of judgment.
And while the event itself didn’t make a profit, a 1970 feature documentary by Michael Wadleigh grossed over $50 million at the box office, helping pay off all outstanding debts from the festival. The film served as the foundation of the Woodstock brand and became the engine behind its commercial legacy. Soon afterward, the organizers established Woodstock Ventures, which would create products, license content, and put on events, in the same vein as the Woodstock festival. Rhino even sold a 38 disc box set of all the original 1969 performances for a bargain price of $799.00, which is the perfect segue to the shit-storm that was Woodstock ’99.
If Woodstock ‘69 was peace and love, the reboot was all about violence and anger. A byproduct of MTV’s commercialism and bro culture run amok. Long gone were the world-renowned, anti-war artists singing about consciousness and unity. No, this version was dominated by hyper-aggressive nu-metal bands whipping the 400,000-person crowd into a rabid frenzy. But more on that later.
Honor your audience and bring value to their lives:
Woodstock ’69 galvanized young people, and for many, served as a peak, life-defining weekend. And I bet their emotional bond with the brand only deepened over the years. For them, Woodstock would always serve as a North Star of what was possible when people came together in harmony to dream big dreams. Why couldn’t the organizers bring this audience back for another three-day festival in ‘99? Maybe even feature a few of the bands from ‘69 or the iconic performers who missed the original: Paul Simon, Led Zeppelin, Pink Floyd, Fleetwood Mac--the list goes on. Looking at the lineup from ‘99, you’ll see a world of difference: long gone were the anti-war songs. Now, Limp Bizkit, Korn, Kid Rock, The Offspring, Megadeth, and Buckcherry reigned supreme.
In opting for a hard rock/metal focus, the organizers shifted the tectonic plates of the festival, and in the process, alienated their core audience. Sure, there’s an argument to be made that Woodstock ‘99 was an opportunity to bring in a new, younger demo for the brand. I get it, but don’t try to expand your audience while undermining the people who have supported your company from day one. Woodstock ‘99 needed its elders (badly); they’re important culture keepers, and if the organizers had included them, maybe all those Millennials could have gotten a master class in how to properly show up at the festival.
When crafting sincere brand storytelling, it’s important that we start with crystal clarity around our core audience. Who are they? What are their needs/wants/values? What does your brand experience conjure up for them emotionally? And when you figure this out, be unbending in how you reinforce this emotional connection, and bring value to their lives. These folks are your company's wellspring.
Don’t just talk about your values, live them:
Perhaps the most striking difference between Woodstock ‘69 and ‘99 was the location. Griffiss Air Force Base in Rome, NY, is a sprawling 3,500-acre slab of flat land--essentially tarmac, concrete, and unkempt grass--with none of the tranquil vibes that made Yasgur’s farm so special. It was an ode to the might of the military-industrial complex, a far cry from the anti-war, countercultural force that unified the original festival-goers under a banner of peace.
When those hippies walked barefoot across lush grass and past majestic trees on Yasgur’s farm in 1969, you better believe they tapped into a sense of deep peace. The location was instrumental in reinforcing the brand’s core values. On the flip side, in ‘99 attendees were bombarded with a 102-degree Fahrenheit heat wave, littered garbage, bland concrete and asphalt as far as the eye could see, and a 1.5-mile hike between the stages, which made things unbearable. If the organizers really wanted them to feel peace, then they should have picked a venue that brought that feeling to life.
Another foundational brand value was love. Probably the most game-changing part of ‘69 was the almost familial level of support felt by the attendees. Sure, there were some half million people there, but there was also a radical sense of intimacy and harmony that permeated everything. People showed each other compassion, shared food (and LSD), and even though many of them were nude, there was no air of objectification or sexualization. The community showed up to care for one another, and you could feel it everywhere. Woodstock was the ultimate safe space.
This couldn’t have been further from ’99. Sexual assault and the objectification of women were pervasive. Throughout the festival, women emcees and musicians were berated with shouts of “show us your tits” from the crowd. And while nudity was once again a cornerstone of the festival, the culture around it was aggressive and predatory in nature. This extended all the way to the Woodstock website, where the event was being live-streamed and photos of nude female festival-goers were posted without their consent. The comments section was terrible. Sadly and predictably, multiple women were raped and assaulted at ‘99, and the number of assaults only skyrocketed during Limp Bizkit’s set, after Fred Durst incited the crowd into a melee with “Break Some Shit.” Truly horrific.
All in all, some 44 people were arrested, but only one was charged with sexual assault (not surprising for a crime that almost always goes unpunished). For an organization that prides itself on “peace and love,” it’s critical that every touchpoint of the Woodstock experience, from the event itself to the website, to advertising, concessions, and the musical line-up, be rooted in these values. Without this, there can be no sincerity or social impact or real, long-lasting emotional connection with the audience. BS your customers and you run the risk of losing them for good. Or even worse, they may unleash their wrath. This leads to my third takeaway.
Never put profit before customer experience:
Throughout the lifecycle of ‘99, Lang and Scher made terrible decision after terrible decision, always putting profit first. When they chose an air force base as their venue, it was because the existing infrastructure helped save on construction and facilities costs. When they partnered with vendors that charged $4.00 for a bottle of water and $12.00 for a slice of pizza (in 1999), they forced attendees into an angry and desperate state. When they cut costs by hiring outsourced providers, it resulted in subpar services at every level: layers of garbage covered the festival grounds, porta-potties were overflowing with waste, and even the drinking water was contaminated with feces. Security was underfunded and incapable of handling the waves of injured attendees that flowed in nonstop throughout the festival.
But the pièce de résistance of dumb ideas came from Lang on the final night of the festival. He ordered festival staff to hand out 100,000 candles during the final performance in hopes that they’d create the largest candlelight vigil in history. What took place afterward was everything you can imagine 400,000 angry, dehydrated young people would do after three days of living in a Lord of the Flies-like environment, amidst garbage, poo water, and hyper-aggressive music: the candles were used to start bonfires throughout the crowd, which then led to looting and an all-out riot. Cars were flipped over and lit on fire. The era of peace, love, and music crashed and burned on the sweltering tarmac of an Air Force base in Rome, NY.
Making money is an important part of any business. But when Lang and Scher prioritized profit over their customers’ health, wellness, safety, and overall experience, they were telling them, in so many words, that they didn’t value them as customers. And I believe this made it easy for some of the attendees to strike back at the corporate entity that was Woodstock and really “Break Some Shit.” Empathy and humanity are everything. Especially for a brand founded on altruistic and socially impactful values like Woodstock Ventures. And especially during times of economic devastation or recession.
I believe the brands that value their audience as human beings and take the time to invest positively in their lives are the companies that thrive in the short term while also keeping customers for a lifetime--something Lang and Scher managed to bungle in every way possible. I’d like to close with something hopeful: what is, in my opinion, the most impactful performance from ‘69--the incomparable Richie Havens performing “Freedom,” which he created on stage during his opening set at Woodstock. The song served as a rallying cry for the festival and ended up defining his career.
About Farhoud Meybodi
Farhoud Meybodi is an award-winning writer, director, and executive producer focused on storytelling projects that inspire mass culture change. Over the past decade, he has collaborated on a variety of television and digital projects that have been seen over two billion times, raised millions of dollars for terminal illness research, and even helped overturn an unjust Presidential Executive Order. At his core, Farhoud believes in the power of mainstream storytelling to entertain and help heal the political-social divide of the present day.
|
Art Fair was promoted as three days of peace and love meant to build a bridge between rock culture and the movement to end the war in Vietnam. It was equal parts cultural flashpoint and spiritual celebration that kicked off at a time of great upheaval in the United States. In the shadows of the Manson Murders, COINTELPRO, and the civil rights movement, young people were pushing against the status quo in droves. And so, on August 15th 1969, roughly 500,000 people made the pilgrimage to Max Yasgur’s idyllic 600 acre dairy farm in Bethel, NY to bask in the magic of Joan Baez, Jimi Hendrix, Santana, Ravi Shankar, Richie Havens, Janis Joplin and a dream line up of Rock n’ Roll royalty. It was the most famous free concert of all time. There were no acts of violence. No sexual assaults. Woodstock was an almost utopian playground, where masses of people could express themselves freely, without fear of judgment.
And while the event itself didn’t make a profit, a 1970 feature documentary by Michael Wadleigh grossed over $50 million at the box office, helping pay off all outstanding debts from the festival. The film served as the foundation of the Woodstock brand and became the engine behind its commercial legacy. Soon afterward, the organizers established Woodstock Ventures, which would create products, license content, and put on events, in the same vein as the Woodstock festival. Rhino even sold a 38 disc box set of all the original 1969 performances for a bargain price of $799.00, which is the perfect segue to the shit-storm that was Woodstock ’99.
If Woodstock ‘69 was peace and love, the reboot was all about violence and anger. A byproduct of MTV’s commercialism and bro culture run amok. Long gone were the world-renowned, anti-war artists singing about consciousness and unity.
|
yes
|
Festivals
|
Did Woodstock festival promote peace and love?
|
yes_statement
|
"woodstock" "festival" "promoted" "peace" and "love".. "peace" and "love" were "promoted" at "woodstock" "festival".
|
https://digilab.libs.uga.edu/exhibits/exhibits/show/civil-rights-digital-history-p/counterculture
|
Counterculture Movement · Civil Rights Digital History Project · exhibits
|
Counterculture Movement
Introduction
The counterculture movement, which ran from the early 1960s through the 1970s, encompassed a group of people known as "hippies" who opposed the war in Vietnam, commercialism, and established societal norms. Those in the movement sought a happier and more peaceful life and often did so by experimenting with marijuana and LSD.
The counterculture movement's taste in music stemmed from the anti-establishment character of psychedelic rock. During the movement, attendance at psychedelic rock shows exploded, and the shows became more elaborate as the number of attendees increased. Hippie fashion was often on display at these shows.
One of the most memorable music festivals during this time was the Woodstock Music and Art Festival. This highly disorganized three-day-long concert was the epitome of counterculture--from the clothes attendees wore to the anti-war messages performed by the singers.
Much of hippie fashion came from their opposition to commercialism. Most of the clothing that hippies wore was not purchased from major stores, but instead from yard sales or flea markets. Their fashion choices distinguished them from the rest of society because they wore bright colors and things that others would not wear. Their fashion was often a statement of who they were and what they believed.
The counterculture movement was largely in support of the antiwar movement. Its members organized protests while brandishing signs promoting peace, love, and drugs. The burning of draft cards was also a symbol of the movement and became iconic of the anti-war cause.
Music
During the late 1960’s and early 1970’s, the genre of Psychedelic rock emerged as the popular type of music for participants of the Counterculture movement. The Psychedelic genre was famous for its musical replication of the experience of mind-altering drugs by invoking the three core effects of LSD: depersonalization, dechronization, and dynamization; all of which detach the user from reality. Some of the psychedelic rock musicians like Janis Joplin and Joan Baez were based in jazz, blues and folk, while others like the Beatles and Jimi Hendrix incorporated non-Western instruments with electric guitars and looping studio effects. While the era of psychedelic rock essentially ended in the late 1970’s with the birth of the disco era, the peak years were between 1966 and 1969 and included milestone events like the 1967 Summer of Love and 1969’s Woodstock Music Festival.
Although his mainstream career only spanned four years, Jimi Hendrix is considered to be one of the most influential and celebrated artists of the 20th century. His experimental use of the over-amplified electric guitar helped to popularize the previously undesired technique. This clip features what is arguably Hendrix’s most popular performance: “The Star-Spangled Banner” at the 1969 Woodstock Music Festival. Hendrix used this performance on a national stage to protest the national anthem and the violence carried out under the flag. The musical performance was a literal interpretation of the lyrics “bombs bursting in air” and “the rockets’ red glare.” The rumbles and wails of the amplified guitar produce subtly unpleasant pitches that convey Hendrix’s protest of the ugliness behind the American glory that the national anthem is meant to represent.
Janis Joplin was an influential singer during the Counterculture movement who was famous for her raw, powerful vocals and emotional songs. Joplin rose to fame in 1967 with an appearance at the Monterey Pop Festival as the lead singer for the San Francisco psychedelic rock band Big Brother and the Holding Company. After releasing two albums with the band, she left to pursue a solo career. Joplin made many appearances at music festivals, including the Woodstock Music Festival and the Festival Express train tour. Joplin died of an accidental drug overdose at the age of 27 after releasing three albums. Recorded shortly before her death, her 1971 hit “Me and Bobby McGee” quickly rose to the top of the U.S. singles chart.
Joan Baez was a psychedelic artist who represented the folk and blues side of the genre during the early 1970’s. Baez’s contemporary folk music often included messages of protest or social justice that reflected the views of the Counterculture movement. She quickly rose to fame in the 1960’s and released her first three albums, each of which stayed on the Billboard Top 100 chart. Her 1975 song “Diamonds & Rust” became a Top 40 hit for Baez on the U.S. pop singles chart.
After leading the “British Invasion” into the United States pop market in 1964, the Beatles quickly became the top-selling band in the United States and still hold the title of the top-selling band in history. The Beatles built their reputation through “Beatlemania” in the early 1960’s, but the band’s music grew in sophistication, led by songwriters Lennon and McCartney, and the band was eventually perceived as the embodiment of the ideals of the Counterculture movement. The Beatles are known to be pioneers of the use of non-Western instruments, looping effects, and lyrics to detach listeners from reality in their later psychedelic albums. Their 1967 album, “Sgt. Pepper’s Lonely Hearts Club Band,” was extremely popular during the Counterculture movement. Arguably one of the most popular songs on the album, “Lucy in the Sky with Diamonds,” is a classic example of the Beatles’ psychedelic music.
Woodstock Music Festival
Advertised as “three days of peace and music,” the Woodstock Music Festival was one of the largest gatherings of the counterculture movement. The festival took place August 15-18, 1969, concluding on Monday the 18th with Jimi Hendrix’s memorable performance of "The Star-Spangled Banner."
The four men behind Woodstock--Mike Lang, John Roberts, Joel Rosenman and Artie Kornfeld--created the concert for two reasons: to raise funds to create a music studio in the artist colony of Woodstock, NY, and to put on a never-before-done music festival. Lang and Kornfeld came up with the idea around Christmastime 1968 at a "pot party." After failing to secure a site for the festival, Lang worked out a deal with a farmer named Max Yasgur to hold it on his farmland in Bethel, NY.
Over 100,000 tickets were sold for the concert, but when Lang arrived at the festival site on Friday morning, 30,000 kids were already camping out in front of the stage. That moment set the tone for how the rest of the highly disorganized three-day festival would go. Eventually, over 400,000 people showed up to the unfinished stage. Roberts said in a New York Times article that everything was stolen, from the merchandise to the walkie-talkies--even the Jeeps the organizers drove went missing.
The three-day music festival included 33 performances by groups like Jefferson Airplane, The Who, the Grateful Dead, Sly and the Family Stone, Janis Joplin, Jimi Hendrix, and Creedence Clearwater Revival, just to name a few.
Despite the musicians discussing and performing songs about controversial issues--opposition to the Vietnam War--the music festival was peaceful. Only two people died: one from a drug overdose and the other after being accidentally run over by a tractor. Given the lack of organization, the festival could have been a disaster; instead it is remembered fondly, and many have tried to recreate it--although no one has been successful.
Janis Joplin during her performance that went from the late hours of the second day into the third day of the festival.
Building of the stage where the 33 acts performed. The stage was "incomplete" by the time of the festival since there was no roof.
Aerial view shows the massive crowd during the Woodstock Music Festival.
The crowd of Woodstock was relatively calm and peaceful throughout the festival with many of the audience members under the influence of drugs.
Jimi Hendrix during his performance which ended the weekend festival.
One of the early music festival posters before the location of Woodstock was moved.
Here is another image of the line up poster for Woodstock with one of the locations listed before the venue was moved to Bethel, NY.
This is one of the advertisements the four creators placed in newspapers to advertise for the festival.
Anti-War Movement
While anti-war protests were almost commonplace during the Vietnam War, the Counterculture Movement was one of the most iconic groups making their voices heard. Images of teens burning their draft cards and waving Viet Cong flags have burned their way into the collective memory of the United States. Their uses of guerilla theatre and community activism played invaluable roles in threatening the support of the Vietnam War. Central to the Counterculture Movement were the Hippies, who promoted peace over war and protested conscription. They held rallies and protests which were characterized by music, sex, drugs, vulgar language and nudity. Even other anti-war groups tried to delegitimize the Hippies because they did not fit the moral standards of the other groups. War sympathizers and anti-war protestors enacted a 'regime of visuality' that measured one's loyalty to the state by an evaluation of one's appearance. Essentially, visual deviance was seen as opposition to war.
Although the Counterculture Movement was loosely connected by common beliefs, there was no central organization to the movement. Rather, individual groups of people organized marches and demonstrations. Many were peaceful, although the social and moral reputations of those involved often led to conflict with local law enforcement. None was so clear as the Kent State shootings, where four students were killed by the Ohio National Guard for protesting the Vietnam War.
Selective Service cards were symbols of the hated war--and the draft accompanying it.
Although the burning of draft cards became synonymous with the Counterculture Movement, it was not widely practiced.
Seemingly vulgar signs merely showed that Hippies were unafraid of taboo topics and furthered their notion of free love.
Drugs were greatly used by members of the Counterculture Movement as they fought against war and for freedom of expression.
Music provided an outlet for the anti-war movement, with large protests often featuring sex, drugs, and music.
Flower Power is a term used to describe the Hippie Movement, and was fueled by their clothing and use of flowers at protests.
Student protests were seen as part of the Counterculture movement, though this came with deadly consequences.
While Americans were inundated with war propaganda, Hippies produced their own posters promoting peace.
Fashion/Hair
Fashion during the counterculture period is best represented through the photographs taken at the time at concerts, music festivals, and other events occurring alongside the Civil Rights Movement. Hippies believed that fashion was the ideal way to express who they were, which is why their fashion and hair statements remain iconic. Hippie fashion included many trends, such as bright colors, "raggedy" clothing, beads, fringe, afros, and sandals, and many other trends also defined the movement's look. Their fashion distinguished hippies from the rest of society. Most clothing was purchased from yard sales, thrift shops, and flea markets in order to fight commercialism.
“Remember When Jimi Hendrix Protested the National Anthem on a National Stage?” SPIN. September 12, 2016. Accessed March 30, 2017. http://www.spin.com/2016/09/remember-when-jimi-hendrix-protested-the-national-anthem-on-a-national-stage/
"The Other Vietnam Syndrome: The Cultural Politics of Corporeal Patriotism and Visual Resistance." CME: An International E-Journal for Critical Geographies. Accessed March 23, 2017. http://proxy-remote.galib.uga.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=116840207&site=eds-live
"The Anti-War Movement in the United States." Modern America Poetry. Accessed March 23, 2017. http://www.english.illinois.edu/maps/vietnam/antiwar.html
|
Only two people died: one from a drug overdose and the other after being accidentally run over by a tractor. Given the lack of organization, the festival could have been a disaster; instead it is remembered fondly, and many have tried to recreate it--although no one has been successful.
Janis Joplin during her performance that went from the late hours of the second day into the third day of the festival.
Building of the stage where the 33 acts performed. The stage was "incomplete" by the time of the festival since there was no roof.
Aerial view shows the massive crowd during the Woodstock Music Festival.
The crowd of Woodstock was relatively calm and peaceful throughout the festival with many of the audience members under the influence of drugs.
Jimi Hendrix during his performance which ended the weekend festival.
One of the early music festival posters before the location of Woodstock was moved.
Here is another image of the line up poster for Woodstock with one of the locations listed before the venue was moved to Bethel, NY.
This is one of the advertisements the four creators placed in newspapers to advertise for the festival.
Anti-War Movement
While anti-war protests were almost commonplace during the Vietnam War, the Counterculture Movement was one of the most iconic groups making their voices heard. Images of teens burning their draft cards and waving Viet Cong flags have burned their way into the collective memory of the United States. Their uses of guerilla theatre and community activism played invaluable roles in threatening the support of the Vietnam War. Central to the Counterculture Movement were the Hippies, who promoted peace over war and protested conscription. They held rallies and protests which were characterized by music, sex, drugs, vulgar language and nudity. Even other anti-war groups tried to delegitimize the Hippies because they did not fit the moral standards of the other groups. War sympathizers and anti-war protestors enacted a 'regime of visuality' that measured one's loyalty to the state by an evaluation of one's appearance.
|
yes
|
Festivals
|
Did Woodstock festival promote peace and love?
|
yes_statement
|
"woodstock" "festival" "promoted" "peace" and "love".. "peace" and "love" were "promoted" at "woodstock" "festival".
|
https://www.uticaod.com/story/news/2021/08/05/state-sen-joseph-griffo-former-rome-mayor-reacts-hbo-woodstock-99-documentary/5418242001/
|
State Sen. Joseph Griffo reacts to HBO Woodstock 99 documentary
|
ROME – State Sen. Joseph Griffo said the people behind the recent HBO documentary on Woodstock ‘99 had a story they wanted to tell.
The senator did not fault them for this, but said there were several inaccuracies in the documentary and it had a storytelling lens he disagreed with.
“It was a story told from a particular point of view,” said Griffo, R-Rome.
"Woodstock 99: Peace, Love and Rage" is available through HBO and first aired Friday, July 23. HBO said the documentary is the first film to debut as part of Music Box, a collection of documentaries exploring pivotal moments in the music world.
According to HBO, the documentary tells the story of Woodstock ‘99, a music festival that was promoted to echo the idealism of the original 1969 concert but would devolve into riots, looting and sexual assaults.
Griffo, Rome's mayor when Woodstock ‘99 took place in his city, watched the documentary at the Capitol Theatre in Rome, which showcased it the first night it aired on HBO.
One of Griffo’s takeaways was that those affiliated with the film used a 2021 lens to portray an event that took place in 1999. Griffo said the music industry journalists interviewed as part of the documentary were the ones particularly focusing through this lens.
Griffo said the festival's demographics were one of the areas where the music journalists applied that lens in looking back at the event.
The crowd at Woodstock ‘99 was predominantly white, with Griffo saying the tickets were available for whoever wanted to purchase them.
The documentary used footage of the late rapper DMX getting the crowd to chant back the N word during his performance as a way to showcase the racial divide. Commentators in the documentary said it must have been hard for the few Black attendees to witness this and possibly hear their friends use the word.
Transportation issues and how much money the festival brought to Rome were among the inaccuracies highlighted by Griffo.
The documentary highlighted traffic issues at the festival, including footage of cars backed up on the roads to Griffiss. The footage showed festival goers partying in the road and on top of their cars as they waited in traffic.
Griffo said he was interviewed for about six hours for the documentary and noted he had told producers that there was no issue with traffic.
“That was one of the things we’re most proud of,” Griffo said of traffic to and from the event.
On a placard shown as the movie ended, the documentary noted that Rome had made only $200,000 from Woodstock ‘99 after all needed fees had been factored in.
Griffo said the monetary amount was true, but added the documentary took the liberty of not including such things as sales tax spent in the community.
The documentary highlights the events of the three-day festival, which culminated in the riots, vandalism and violence on Sunday. The fires and looting on the festival’s final night — Sunday, July 25, 1999 — left a black mark on Woodstock ’99 for many people.
The fires started shortly after the Red Hot Chili Peppers came back on stage for an encore and played their cover of Jimi Hendrix’s “Fire.” The arson and looting began shortly thereafter.
More than 700 state police gathered and began crowd control, a response that rivaled that of the 1971 Attica Prison riot, in which more than 1,000 state police and National Guardsmen put down the bloodiest prison riot in history, according to a 2019 Observer-Dispatch article highlighting the 20th anniversary of Woodstock ‘99. Police reported at least 15 arrests and one death, and Mohawk Valley rape counselors received multiple sexual assault claims, archives show. Medical personnel also dealt with multiple instances of heat-related illnesses.
“Violence is unacceptable,” Griffo said. “There should be consequences for that.”
Griffo would go on to defend the decision to hold Woodstock ‘99 in Rome.
He said the event helped with short-term economics, exposure for the city and experience.
“I made the right decision at the right time,” Griffo said.
Ed Harris is the Oneida County reporter for the Observer-Dispatch. Email Ed Harris at EHarris1@gannett.com.
Opening night
Rome’s Capitol Theatre was able to showcase the HBO documentary Woodstock 99: Peace, Love and Rage on Friday, July 23, the night it premiered.
"I have been talking with HBO for the past 18 months,” Lewis said. “I will have much footage in the movie and traveled around the city of Rome with their camera person who was acquiring "today in Rome" footage.”
“During that time, I asked for permission to screen the movie when it opens.”
|
ROME – State Sen. Joseph Griffo said the people behind the recent HBO documentary on Woodstock ‘99 had a story they wanted to tell.
The senator did not fault them for this, but said there were several inaccuracies in the documentary and it had a storytelling lens he disagreed with.
“It was a story told from a particular point of view,” said Griffo, R-Rome.
"Woodstock 99: Peace, Love and Rage" is available through HBO and first aired Friday, July 23. HBO said the documentary is the first film to debut as part of Music Box, a collection of documentaries exploring pivotal moments in the music world.
According to HBO, the documentary tells the story of Woodstock ‘99, a music festival that was promoted to echo the idealism of the original 1969 concert but would devolve into riots, looting and sexual assaults.
Griffo, Rome's mayor when Woodstock ‘99 took place in his city, watched the documentary at the Capitol Theatre in Rome, which showcased it the first night it aired on HBO.
One of Griffo’s takeaways was that those affiliated with the film used a 2021 lens to portray an event that took place in 1999. Griffo said the music industry journalists interviewed as part of the documentary were the ones particularly focusing through this lens.
Griffo said the festival's demographics were one of the areas where the music journalists applied that lens in looking back at the event.
The crowd at Woodstock ‘99 was predominantly white, with Griffo saying the tickets were available for whoever wanted to purchase them.
The documentary used footage of the late rapper DMX getting the crowd to chant back the N word during his performance as a way to showcase the racial divide. Commentators in the documentary said it must have been hard for the few Black attendees to witness this and possibly hear their friends use the word.
Transportation issues and how much money the festival brought to Rome were among the inaccuracies highlighted by Griffo.
|
no
|
Festivals
|
Did Woodstock festival promote peace and love?
|
no_statement
|
"woodstock" "festival" did not "promote" "peace" and "love".. "peace" and "love" were not "promoted" at "woodstock" "festival".
|
https://www.americanyawp.com/text/28-the-unraveling/
|
28. The Unraveling | THE AMERICAN YAWP
|
I. Introduction
On December 6, 1969, an estimated three hundred thousand people converged on the Altamont Motor Speedway in Northern California for a massive free concert headlined by the Rolling Stones and featuring some of the era’s other great rock acts.1 Only four months earlier, Woodstock had shown the world the power of peace and love and American youth. Altamont was supposed to be “Woodstock West.”2
But Altamont was a disorganized disaster. Inadequate sanitation, a horrid sound system, and tainted drugs strained concertgoers. To save money, the Hells Angels biker gang was paid $500 in beer to be the show’s “security team.” The crowd grew progressively angrier throughout the day. Fights broke out. Tensions rose. The Angels, drunk and high, armed themselves with sawed-off pool cues and indiscriminately beat concertgoers who tried to come on the stage. The Grateful Dead refused to play. Finally, the Stones came on stage.3
The crowd’s anger was palpable. Fights continued near the stage. Mick Jagger stopped in the middle of playing “Sympathy for the Devil” to try to calm the crowd: “Everybody be cool now, c’mon,” he pleaded. Then, a few songs later, in the middle of “Under My Thumb,” eighteen-year-old Meredith Hunter approached the stage and was beaten back. Pissed off and high on methamphetamines, Hunter brandished a pistol, charged again, and was stabbed and killed by an Angel. His lifeless body was stomped into the ground. The Stones just kept playing.4
If the more famous Woodstock music festival captured the idyll of the sixties youth culture, Altamont revealed its dark side. There, drugs, music, and youth were associated not with peace and love but with anger, violence, and death. While many Americans in the 1970s continued to celebrate the political and cultural achievements of the previous decade, a more anxious, conservative mood grew across the nation. For some, the United States had not gone nearly far enough to promote greater social equality; for others, the nation had gone too far, unfairly trampling the rights of one group to promote the selfish needs of another. Onto these brewing dissatisfactions, the 1970s dumped the divisive remnants of a failed war, the country’s greatest political scandal, and an intractable economic crisis. It seemed as if the nation was ready to unravel.
II. The Strain of Vietnam
Vietnam War protestors at the March on the Pentagon. Lyndon B. Johnson Library via Wikimedia.
Perhaps no single issue contributed more to public disillusionment than the Vietnam War. As the war deteriorated, the Johnson administration escalated American involvement by deploying hundreds of thousands of troops to prevent the communist takeover of the south. Stalemates, body counts, hazy war aims, and the draft catalyzed an antiwar movement and triggered protests throughout the United States and Europe. With no end in sight, protesters burned draft cards, refused to pay income taxes, occupied government buildings, and delayed trains loaded with war materials. By 1967, antiwar demonstrations were drawing hundreds of thousands. In one protest, hundreds were arrested after surrounding the Pentagon.5
Vietnam was the first “living room war.”6 Television, print media, and open access to the battlefield provided unprecedented coverage of the conflict’s brutality. Americans confronted grisly images of casualties and atrocities. In 1965, CBS Evening News aired a segment in which U.S. Marines burned the South Vietnamese village of Cam Ne with little apparent regard for the lives of its occupants, who had been accused of aiding Vietcong guerrillas. President Johnson berated the head of CBS, yelling over the phone, “Your boys just shat on the American flag.”7
While the U.S. government imposed no formal censorship on the press during Vietnam, the White House and military nevertheless used press briefings and interviews to paint a deceptive image of the war. The United States was winning the war, officials claimed. They cited numbers of enemies killed, villages secured, and South Vietnamese troops trained. However, American journalists in Vietnam quickly realized the hollowness of such claims (the press referred to afternoon press briefings in Saigon as “the Five o’Clock Follies”).8 Editors frequently toned down their reporters’ pessimism, often citing conflicting information received from their own sources, who were typically government officials. But the evidence of a stalemate mounted.
Stories like CBS’s Cam Ne piece exposed a credibility gap, the yawning chasm between the claims of official sources and the increasingly evident reality on the ground in Vietnam.9 Nothing did more to expose this gap than the 1968 Tet Offensive. In January, communist forces attacked more than one hundred American and South Vietnamese sites throughout South Vietnam, including the American embassy in Saigon. While U.S. forces repulsed the attack and inflicted heavy casualties on the Vietcong, Tet demonstrated that despite the repeated claims of administration officials, the enemy could still strike at will anywhere in the country, even after years of war. Subsequent stories and images eroded public trust even further. In 1969, investigative reporter Seymour Hersh revealed that U.S. troops had raped and/or massacred hundreds of civilians in the village of My Lai.10 Three years later, Americans cringed at Nick Ut’s wrenching photograph of a naked Vietnamese child fleeing a South Vietnamese napalm attack. More and more American voices came out against the war.
Reeling from the war’s growing unpopularity, on March 31, 1968, President Johnson announced on national television that he would not seek reelection.11 Eugene McCarthy and Robert F. Kennedy unsuccessfully battled against Johnson’s vice president, Hubert Humphrey, for the Democratic Party nomination (Kennedy was assassinated in June). At the Democratic Party’s national convention in Chicago, local police brutally assaulted protesters on national television.
For many Americans, the violent clashes outside the convention hall reinforced their belief that civil society was unraveling. Republican challenger Richard Nixon played on these fears, running on a platform of “law and order” and a vague plan to end the war. Well aware of domestic pressure to wind down the war, Nixon sought, on the one hand, to appease antiwar sentiment by promising to phase out the draft, train South Vietnamese forces to assume more responsibility for the war effort, and gradually withdraw American troops. Nixon and his advisors called it “Vietnamization.”12 At the same time, Nixon appealed to the so-called silent majority of Americans who still supported the war (and opposed the antiwar movement) by calling for an “honorable” end to U.S. involvement—what he later called “peace with honor.”13 He narrowly edged out Humphrey in the fall’s election.
Public assurances of American withdrawal, however, masked a dramatic escalation of conflict. Looking to incentivize peace talks, Nixon pursued a “madman strategy” of attacking communist supply lines across Laos and Cambodia, hoping to convince the North Vietnamese that he would do anything to stop the war.14 Conducted without public knowledge or congressional approval, the bombings failed to spur the peace process, and talks stalled before the American-imposed November 1969 deadline. News of the attacks renewed antiwar demonstrations. Police and National Guard troops killed six students in separate protests at Jackson State University in Mississippi, and, more famously, Kent State University in Ohio in 1970.
Another three years passed—and another twenty thousand American troops died—before an agreement was reached.15 After Nixon threatened to withdraw all aid and guaranteed to enforce a treaty militarily, the North and South Vietnamese governments signed the Paris Peace Accords in January 1973, marking the official end of U.S. force commitment to the Vietnam War. Peace was tenuous, and when war resumed North Vietnamese troops quickly overwhelmed southern forces. By 1975, despite nearly a decade of direct American military engagement, Vietnam was united under a communist government.
The Vietnam War profoundly influenced domestic politics. Moreover, it poisoned many Americans’ perceptions of their government and its role in the world. And yet, while the antiwar demonstrations attracted considerable media attention and stand today as a hallmark of the sixties counterculture, many Americans nevertheless continued to regard the war as just. Wary of the rapid social changes that reshaped American society in the 1960s and worried that antiwar protests threatened an already tenuous civil order, a growing number of Americans turned to conservatism.
III. Racial, Social, and Cultural Anxieties
Los Angeles police violently arrest a man during the Watts riot on August 12, 1965. Wikimedia.
The civil rights movement looked dramatically different at the end of the 1960s than it had at the beginning. The movement had never been monolithic, but prominent, competing ideologies had fractured the movement in the 1970s. The rise of the Black Power movement challenged the integrationist dreams of many older activists as the assassinations of Martin Luther King Jr. and Malcolm X fueled disillusionment and many alienated activists recoiled from liberal reformers.
The political evolution of the civil rights movement was reflected in American culture. The lines of race, class, and gender ruptured American “mass” culture. The monolith of popular American culture, pilloried in the fifties and sixties as exclusively white, male-dominated, conservative, and stifling, finally shattered and Americans retreated into ever smaller, segmented subcultures. Marketers now targeted particular products to ever smaller pieces of the population, including previously neglected groups such as African Americans.16 Subcultures often revolved around certain musical styles, whether pop, disco, hard rock, punk rock, country, or hip-hop. Styles of dress and physical appearance likewise aligned with cultures of choice.
If the popular rock acts of the sixties appealed to a new counterculture, the seventies witnessed the resurgence of cultural forms that appealed to a white working class confronting the social and political upheavals of the 1960s. Country hits such as Merle Haggard’s “Okie from Muskogee” evoked simpler times and places where people “still wave Old Glory down at the courthouse” and they “don’t let our hair grow long and shaggy like the hippies out in San Francisco.” (Haggard would claim the song was satirical, but it nevertheless took hold.) A popular television sitcom, All in the Family, became an unexpected hit among “middle America.” The show’s main character, Archie Bunker, was designed to mock reactionary middle-aged white men, but audiences embraced him. “Isn’t anyone interested in upholding standards?” he lamented in an episode dealing with housing integration. “Our world is coming crumbling down. The coons are coming!”17
As Bunker knew, African Americans were becoming much more visible in American culture. While Black cultural forms had been prominent throughout American history, they assumed new popular forms in the 1970s. Disco offered a new, optimistic, racially integrated pop music. Musicians such as Aretha Franklin, Andraé Crouch, and “fifth Beatle” Billy Preston brought their background in church performance to their own recordings as well as to the work of white artists like the Rolling Stones, with whom they collaborated. By the end of the decade, African American musical artists had introduced American society to one of the most significant musical innovations in decades: the Sugarhill Gang’s 1979 record, Rapper’s Delight. A lengthy paean to Black machismo, it became the first rap single to reach the Top 40.18
Just as rap represented a hypermasculine Black cultural form, Hollywood popularized its white equivalent. Films such as 1971’s Dirty Harry captured a darker side of the national mood. Clint Eastwood’s titular character exacted violent justice on clear villains, working within the sort of brutally simplistic ethical standard that appealed to Americans anxious about a perceived breakdown in “law and order.” (“The film’s moral position is fascist,” said critic Roger Ebert, who nevertheless gave it three out of four stars.19)
Perhaps the strongest element fueling American anxiety over “law and order” was the increasingly visible violence associated with the civil rights movement. No longer confined to the antiblack terrorism that struck the southern civil rights movement in the 1950s and 1960s, publicly visible violence now broke out among Black Americans in urban riots and among whites protesting new civil rights programs. In the mid-1970s, for instance, protests over the use of busing to overcome residential segregation and truly integrate public schools in Boston washed the city in racial violence. Stanley Forman’s Pulitzer Prize–winning photo, The Soiling of Old Glory, famously captured a Black civil rights attorney, Ted Landsmark, being attacked by a mob of anti-busing protesters, one of whom wielded an American flag as a weapon.20
Urban riots, though, rather than anti-integration violence, tainted many white Americans’ perception of the civil rights movement and urban life in general. Civil unrest broke out across the country, but the riots in Watts/Los Angeles (1965), Newark (1967), and Detroit (1967) were the most shocking. In each, a physical altercation between white police officers and African Americans spiraled into days of chaos and destruction. Tens of thousands participated in urban riots. Many looted and destroyed white-owned business. There were dozens of deaths, tens of millions of dollars in property damage, and an exodus of white capital that only further isolated urban poverty.21
In 1967, President Johnson appointed the Kerner Commission to investigate the causes of America’s riots. Their report became an unexpected best seller.22 The commission cited Black frustration with the hopelessness of poverty as the underlying cause of urban unrest. As the head of the Black National Business League testified, “It is to be more than naïve—indeed, it is a little short of sheer madness—for anyone to expect the very poorest of the American poor to remain docile and content in their poverty when television constantly and eternally dangles the opulence of our affluent society before their hungry eyes.”23 A Newark rioter who looted several boxes of shirts and shoes put it more simply: “They tell us about that pie in the sky but that pie in the sky is too damn high.”24 But white conservatives blasted the conclusion that white racism and economic hopelessness were to blame for the violence. African Americans wantonly destroying private property, they said, was not a symptom of America’s intractable racial inequalities but the logical outcome of a liberal culture of permissiveness that tolerated—even encouraged—nihilistic civil disobedience. Many white moderates and liberals, meanwhile, saw the explosive violence as a sign that African Americans had rejected the nonviolence of the earlier civil rights movement.
The unrest of the late sixties did, in fact, reflect a real and growing disillusionment among African Americans with the fate of the civil rights crusade. In the still-moldering ashes of Jim Crow, African Americans in Watts and other communities across the country bore the burdens of lifetimes of legally sanctioned discrimination in housing, employment, and credit. Segregation survived the legal dismantling of Jim Crow. The perseverance into the present day of stark racial and economic segregation in nearly all American cities destroyed any simple distinction between southern de jure segregation and nonsouthern de facto segregation. Black neighborhoods became traps that too few could escape.
Political achievements such as the 1964 Civil Rights Act and the 1965 Voting Rights Act were indispensable legal preconditions for social and political equality, but for most, the movement’s long (and now often forgotten) goal of economic justice proved as elusive as ever. “I worked to get these people the right to eat cheeseburgers,” Martin Luther King Jr. supposedly said to Bayard Rustin as they toured the devastation in Watts some years earlier, “and now I’ve got to do something . . . to help them get the money to buy it.”25 What good was the right to enter a store without money for purchases?
IV. The Crisis of 1968
To Americans in 1968, the country seemed to be unraveling. Martin Luther King Jr. was killed on April 4, 1968. He had been in Memphis to support striking sanitation workers. (Prophetically, he had reflected on his own mortality in a rally the night before. Confident that the civil rights movement would succeed without him, he brushed away fears of death. “I’ve been to the mountaintop,” he said, “and I’ve seen the promised land.”). The greatest leader in the American civil rights movement was lost. Riots broke out in over a hundred American cities. Two months later, on June 6, Robert F. Kennedy was killed campaigning in California. He had represented the last hope of liberal idealists. Anger and disillusionment washed over the country.
As the Vietnam War descended ever deeper into a brutal stalemate and the Tet Offensive exposed the lies of the Johnson administration, students shut down college campuses and government facilities. Protests enveloped the nation.
Protesters converged on the Democratic National Convention in Chicago at the end of August 1968, when a bitterly fractured Democratic Party gathered to assemble a passable platform and nominate a broadly acceptable presidential candidate. Demonstrators planned massive protests in Chicago’s public spaces. Initial protests were peaceful, but the situation quickly soured as police issued stern threats and young people began to taunt and goad officials. Many of the assembled students had protest and sit-in experiences only in the relative safe havens of college campuses and were unprepared for Mayor Richard Daley’s aggressive and heavily armed police force and National Guard troops in full riot gear. Attendees recounted vicious beatings at the hands of police and Guardsmen, but many young people—convinced that much public sympathy could be won via images of brutality against unarmed protesters—continued stoking the violence. Clashes spilled from the parks into city streets, and eventually the smell of tear gas penetrated the upper floors of the opulent hotels hosting Democratic delegates. Chicago’s brutality overshadowed the convention and culminated in an internationally televised, violent standoff in front of the Hilton Hotel. “The whole world is watching,” the protesters chanted. The Chicago riots encapsulated the growing sense that chaos now governed American life.
For many sixties idealists, the violence of 1968 represented the death of a dream. Disorder and chaos overshadowed hope and progress. And for conservatives, it was confirmation of all of their fears and hesitations. Americans of 1968 turned their back on hope. They wanted peace. They wanted stability. They wanted “law and order.”
V. The Rise and Fall of Richard Nixon
Richard Nixon campaigns in Philadelphia during the 1968 presidential election. National Archives.
Beleaguered by an unpopular war, inflation, and domestic unrest, President Johnson opted against reelection in March 1968—an unprecedented move in modern American politics. The forthcoming presidential election was shaped by Vietnam and the aforementioned unrest as much as by the campaigns of Democratic nominee Vice President Hubert Humphrey, Republican Richard Nixon, and third-party challenger George Wallace, the infamous segregationist governor of Alabama. The Democratic Party was in disarray in the spring of 1968, when senators Eugene McCarthy and Robert Kennedy challenged Johnson’s nomination and the president responded with his shocking announcement. Nixon’s candidacy was aided further by riots that broke out across the country after the assassination of Martin Luther King Jr. and the shock and dismay experienced after the slaying of Robert Kennedy in June. The Republican nominee’s campaign was defined by shrewd maintenance of his public appearances and a pledge to restore peace and prosperity to what he called “the silent center; the millions of people in the middle of the political spectrum.” This campaign for the “silent majority” was carefully calibrated to attract suburban Americans by linking liberals with violence and protest and rioting. Many embraced Nixon’s message; a September 1968 poll found that 80 percent of Americans believed public order had “broken down.”
Meanwhile, Humphrey struggled to distance himself from Johnson and maintain working-class support in northern cities, where voters were drawn to Wallace’s appeals for law and order and a rejection of civil rights. The vice president had a final surge in northern cities with the aid of union support, but it was not enough to best Nixon’s campaign. The final tally was close: Nixon won 43.3 percent of the popular vote (31,783,783), narrowly besting Humphrey’s 42.7 percent (31,266,006). Wallace, meanwhile, carried five states in the Deep South, and his 13.5 percent (9,906,473) of the popular vote constituted an impressive showing for a third-party candidate. The Electoral College vote was more decisive for Nixon; he earned 302 electoral votes, while Humphrey and Wallace received only 191 and 45 votes, respectively. Although Republicans won a few seats, Democrats retained control of both the House and Senate and made Nixon the first president in 120 years to enter office with the opposition party controlling both houses.
Once installed in the White House, Richard Nixon focused his energies on American foreign policy, publicly announcing the Nixon Doctrine in 1969. On the one hand, Nixon asserted the supremacy of American democratic capitalism and conceded that the United States would continue supporting its allies financially. However, he denounced previous administrations’ willingness to commit American forces to Third World conflicts and warned other states to assume responsibility for their own defense. He was turning America away from the policy of active, anticommunist containment, and toward a new strategy of détente.26
Promoted by national security advisor and eventual secretary of state Henry Kissinger, détente sought to stabilize the international system by thawing relations with Cold War rivals and bilaterally freezing arms levels. Taking advantage of tensions between communist China and the Soviet Union, Nixon pursued closer relations with both in order to de-escalate tensions and strengthen the United States’ position relative to each. The strategy seemed to work. In 1972, Nixon became the first American president to visit communist China and the first since Franklin Roosevelt to visit the Soviet Union. Direct diplomacy and cultural exchange programs with both countries grew and culminated with the formal normalization of U.S.-Chinese relations and the signing of two U.S.-Soviet arms agreements: the antiballistic missile (ABM) treaty and the Strategic Arms Limitations Treaty (SALT I). By 1973, after almost thirty years of Cold War tension, peaceful coexistence suddenly seemed possible.
Soon, though, a fragile calm gave way again to Cold War instability. In November 1973, Nixon appeared on television to inform Americans that energy had become “a serious national problem” and that the United States was “heading toward the most acute shortages of energy since World War II.”27 The previous month Arab members of the Organization of the Petroleum Exporting Countries (OPEC), a cartel of the world’s leading oil producers, embargoed oil exports to the United States in retaliation for American intervention in the Middle East. The embargo launched the first U.S. energy crisis. By the end of 1973, the global price of oil had quadrupled.28 Drivers waited in line for hours to fill up their cars. Individual gas stations ran out of gas. American motorists worried that oil could run out at any moment. A Pennsylvania man died when his emergency stash of gasoline ignited in his trunk and backseat.29 OPEC rescinded its embargo in 1974, but the economic damage had been done. The crisis extended into the late 1970s.
Like the Vietnam War, the oil crisis showed that small countries could still hurt the United States. At a time of anxiety about the nation’s future, Vietnam and the energy crisis accelerated Americans’ disenchantment with the United States’ role in the world and the efficacy and quality of its leaders. Furthermore, government scandals in the 1970s and early 1980s sapped trust in America’s public institutions. In 1971, the Nixon administration tried unsuccessfully to sue the New York Times and the Washington Post to prevent the publication of the Pentagon Papers, a confidential and damning history of U.S. involvement in Vietnam commissioned by the Defense Department and later leaked. The papers showed how presidents from Truman to Johnson repeatedly deceived the public on the war’s scope and direction.30 Nixon faced a rising tide of congressional opposition to the war, and Congress asserted unprecedented oversight of American war spending. In 1973, it passed the War Powers Resolution, which dramatically reduced the president’s ability to wage war without congressional consent.
However, no scandal did more to unravel public trust than Watergate. On June 17, 1972, five men were arrested inside the offices of the Democratic National Committee (DNC) in the Watergate Complex in downtown Washington, D.C. After being tipped off by a security guard, police found the men attempting to install sophisticated bugging equipment. One of those arrested was a former CIA employee then working as a security aide for the Nixon administration’s Committee to Re-elect the President (lampooned as “CREEP”).
While there is no direct evidence that Nixon ordered the Watergate break-in, he had been recorded in conversation with his chief of staff requesting that the DNC chairman be illegally wiretapped to obtain the names of the committee’s financial supporters. The names could then be given to the Justice Department and the Internal Revenue Service (IRS) to conduct spurious investigations into their personal affairs. Nixon was also recorded ordering his chief of staff to break into the offices of the Brookings Institution and take files relating to the war in Vietnam, saying, “Goddammit, get in and get those files. Blow the safe and get it.”31
Whether or not the president ordered the Watergate break-in, the White House launched a massive cover-up. Administration officials ordered the CIA to halt the FBI investigation and paid hush money to the burglars and White House aides. Nixon distanced himself from the incident publicly and went on to win a landslide election victory in November 1972. But, thanks largely to two persistent journalists at the Washington Post, Bob Woodward and Carl Bernstein, information continued to surface that tied the burglaries ever closer to the CIA, the FBI, and the White House. The Senate held televised hearings. Citing executive privilege, Nixon refused to comply with orders to produce tapes from the White House’s secret recording system. In July 1974, the House Judiciary Committee approved articles of impeachment against the president. Nixon resigned before the full House could vote on impeachment. He became the first and only American president to resign from office.32
Vice President Gerald Ford was sworn in as his successor and a month later granted Nixon a full presidential pardon. Nixon disappeared from public life without ever publicly apologizing, accepting responsibility, or facing charges.
VI. Deindustrialization and the Rise of the Sunbelt
American workers had made substantial material gains throughout the 1940s and 1950s. During the so-called Great Compression, Americans of all classes benefited from postwar prosperity. Segregation and discrimination perpetuated racial and gender inequalities, but unemployment continually fell and a highly progressive tax system and powerful unions lowered general income inequality as working-class standards of living nearly doubled between 1947 and 1973.
But general prosperity masked deeper vulnerabilities. Perhaps no case better illustrates the decline of American industry and the creation of an intractable urban crisis than Detroit. Detroit boomed during World War II. When auto manufacturers like Ford and General Motors converted their assembly lines to build machines for the American war effort, observers dubbed the city the “arsenal of democracy.”
After the war, however, automobile firms began closing urban factories and moving to outlying suburbs. Several factors fueled the process. Some cities partly deindustrialized themselves. Municipal governments in San Francisco, St. Louis, and Philadelphia banished light industry to make room for high-rise apartments and office buildings. Mechanization also contributed to the decline of American labor. A manager at a newly automated Ford engine plant in postwar Cleveland captured the interconnections between these concerns when he glibly noted to United Automobile Workers (UAW) president Walter Reuther, “You are going to have trouble collecting union dues from all of these machines.”33 More importantly, however, manufacturing firms sought to reduce labor costs by automating, downsizing, and relocating to areas with “business friendly” policies like low tax rates, anti-union right-to-work laws, and low wages.
Detroit began to bleed industrial jobs. Between 1950 and 1958, Chrysler, which actually kept more jobs in Detroit than either Ford or General Motors, cut its Detroit production workforce in half. In the years between 1953 and 1960, East Detroit lost ten plants and over seventy-one thousand jobs.34 Because Detroit was a single-industry city, decisions made by the Big Three automakers reverberated across the city’s industrial landscape. When auto companies mechanized or moved their operations, ancillary suppliers like machine tool companies were cut out of the supply chain and likewise forced to cut their own workforce. Between 1947 and 1977, the number of manufacturing firms in the city dropped from over three thousand to fewer than two thousand. The labor force was gutted. Manufacturing jobs fell from 338,400 to 153,000 over the same three decades.35
Industrial restructuring decimated all workers, but deindustrialization fell heaviest on the city’s African Americans. Although many middle-class Black Detroiters managed to move out of the city’s ghettos, by 1960, 19.7 percent of Black autoworkers in Detroit were unemployed, compared to just 5.8 percent of whites.36 Overt discrimination in housing and employment had for decades confined African Americans to segregated neighborhoods where they were forced to pay exorbitant rents for slum housing. Subject to residential intimidation and cut off from traditional sources of credit, few could afford to follow industry as it left the city for the suburbs and other parts of the country, especially the South. Segregation and discrimination kept them stuck where there were fewer and fewer jobs. Over time, Detroit devolved into a mass of unemployment, crime, and crippled municipal resources. When riots rocked Detroit in 1967, 25 to 30 percent of Black residents between ages eighteen and twenty-four were unemployed.37
Deindustrialization in Detroit and elsewhere also went hand in hand with the long assault on unionization that began in the aftermath of World War II. Lacking the political support they had enjoyed during the New Deal years, labor organizations such as the CIO and the UAW shifted tactics and accepted labor-management accords in which cooperation, not agitation, was the strategic objective.
This accord held mixed results for workers. On the one hand, management encouraged employee loyalty through privatized welfare systems that offered workers health benefits and pensions. Grievance arbitration and collective bargaining also provided workers official channels through which to criticize policies and push for better conditions. At the same time, bureaucracy and corruption increasingly weighed down unions and alienated them from workers and the general public. Union management came to hold primary influence in what was ostensibly a “pluralistic” power relationship. Workers—though still willing to protest—by necessity pursued a more moderate agenda compared to the union workers of the 1930s and 1940s. Conservative politicians meanwhile seized on popular suspicions of Big Labor, stepping up their criticism of union leadership and positioning themselves as workers’ true ally.
While conservative critiques of union centralization did much to undermine the labor movement, labor’s decline also coincided with ideological changes within American liberalism. Labor and its political concerns undergirded Roosevelt’s New Deal coalition, but by the 1960s, many liberals had forsaken working-class politics. More and more saw poverty as stemming not from structural flaws in the national economy, but from the failure of individuals to take full advantage of the American system. Roosevelt’s New Deal might have attempted to rectify unemployment with government jobs, but Johnson’s Great Society and its imitators funded government-sponsored job training, even in places without available jobs. Union leaders in the 1950s and 1960s typically supported such programs and philosophies.
Internal racism also weakened the labor movement. While national CIO leaders encouraged Black unionization in the 1930s, white workers on the ground often opposed the integrated shop. In Detroit and elsewhere after World War II, white workers participated in “hate strikes” where they walked off the job rather than work with African Americans. White workers similarly opposed residential integration, fearing, among other things, that Black newcomers would lower property values.38
By the mid-1970s, widely shared postwar prosperity leveled off and began to retreat. Growing international competition, technological inefficiency, and declining productivity gains stunted working- and middle-class wages. As the country entered recession, wages decreased and the pay gap between workers and management expanded, reversing three decades of postwar contraction. At the same time, dramatic increases in mass incarceration coincided with the deregulation of prison labor to allow more private companies access to cheaper inmate labor, a process that, whatever its aggregate effect, hurt local communities whose jobs were moved behind prison walls. The tax code became less progressive and labor lost its foothold in the marketplace. Unions represented a third of the workforce in the 1950s, but only one in ten workers belonged to one as of 2015.39
Geography dictated much of labor’s fall, as American firms fled pro-labor states in the 1970s and 1980s. Some went overseas in the wake of new trade treaties to exploit low-wage foreign workers, but others turned to anti-union states in the South and West stretching from Virginia to Texas to Southern California. Factories shuttered in the North and Midwest, leading commentators by the 1980s to dub America’s former industrial heartland the Rust Belt. With this, they contrasted the prosperous and dynamic Sun Belt.
Urban decay confronted Americans of the 1960s and 1970s. As the economy sagged and deindustrialization hit much of the country, Americans increasingly associated major cities with poverty and crime. In this 1973 photo, two subway riders sit amid a graffitied subway car in New York City. National Archives (8464439).
Coined by journalist Kevin Phillips in 1969, the term Sun Belt refers to the swath of southern and western states that saw unprecedented economic, industrial, and demographic growth after World War II.40 During the New Deal, President Franklin D. Roosevelt declared the American South “the nation’s No. 1 economic problem” and injected massive federal subsidies, investments, and military spending into the region. During the Cold War, Sun Belt politicians lobbied hard for military installations and government contracts for their states.41
The South attracted business but struggled to share its profits. Middle-class whites grew prosperous, but often these were recent transplants, not native southerners. As the cotton economy shed farmers and laborers, poor white and Black southerners found themselves mostly excluded from the fruits of the Sun Belt. Public investments were scarce. White southern politicians channeled federal funding away from primary and secondary public education and toward high-tech industry and university-level research. The Sun Belt inverted Rust Belt realities: the South and West had growing numbers of high-skill, high-wage jobs but lacked the social and educational infrastructure needed to train native poor and middle-class workers for those jobs.
Regardless, more jobs meant more people, and by 1972, southern and western Sun Belt states had more electoral votes than the Northeast and Midwest. This gap continues to grow.42 Though the region’s economic and political ascendance was a product of massive federal spending, New Right politicians who constructed an identity centered on “small government” found their most loyal support in the Sun Belt. These business-friendly politicians successfully synthesized conservative Protestantism and free market ideology, creating a potent new political force. Housewives organized reading groups in their homes, and from those reading groups sprouted new organized political activities. Prosperous and mobile, old and new suburbanites gravitated toward an individualistic vision of free enterprise espoused by the Republican Party. Some, especially those most vocally anticommunist, joined groups like the Young Americans for Freedom and the John Birch Society. Less radical suburban voters, however, still gravitated toward the more moderate brand of conservatism promoted by Richard Nixon.
VII. The Politics of Love, Sex, and Gender
Demonstrators opposed to the Equal Rights Amendment protest in front of the White House in 1977. Library of Congress.
The sexual revolution continued into the 1970s. Many Americans—feminists, gay men, lesbians, and straight couples—challenged strict gender roles and rejected the rigidity of the nuclear family. Cohabitation without marriage spiked, straight couples married later (if at all), and divorce levels climbed. Sexuality, decoupled from marriage and procreation, became for many not only a source of personal fulfillment but a worthy political cause.
At the turn of the decade, sexuality was considered a private matter yet rigidly regulated by federal, state, and local law. Statutes typically defined legitimate sexual expression within the confines of patriarchal, procreative marriage. Interracial marriage, for instance, was illegal in many states until 1967 and remained largely taboo long after. Same-sex intercourse and cross-dressing were criminalized in most states, and gay men, lesbians, and transgender people were vulnerable to violent police enforcement as well as discrimination in housing and employment.
Two landmark legal rulings in 1973 established the battle lines for the “sex wars” of the 1970s. First, the Supreme Court’s 7–2 ruling in Roe v. Wade (1973) struck down a Texas law that prohibited abortion in all cases when a mother’s life was not in danger. The Court’s decision built on precedent from a 1965 ruling that, in striking down a Connecticut law prohibiting married couples from using birth control, recognized a constitutional “right to privacy.”43 In Roe, the Court reasoned that “this right of privacy . . . is broad enough to encompass a woman’s decision whether or not to terminate her pregnancy.”44 The Court held that states could not interfere with a woman’s right to an abortion during the first trimester of pregnancy and could only fully prohibit abortions during the third trimester.
Other Supreme Court rulings, however, found that sexual privacy could be sacrificed for the sake of “public” good. Miller v. California (1973), a case over the unsolicited mailing of sexually explicit advertisements for illustrated “adult” books, held that the First Amendment did not protect “obscene” material, defined by the Court as anything with sexual appeal that lacked “serious literary, artistic, political, or scientific value.”45 The ruling expanded states’ abilities to pass laws prohibiting materials like hard-core pornography. However, uneven enforcement allowed pornographic theaters and sex shops to proliferate despite whatever laws states had on the books. Americans debated whether these represented the pinnacle of sexual liberation or, as poet and lesbian feminist Rita Mae Brown suggested, “the ultimate conclusion of sexist logic.”46
Of more tangible concern for most women, though, was the right to equal employment access. Thanks partly to the work of Black feminists like Pauli Murray, Title VII of the 1964 Civil Rights Act banned employment discrimination based on sex, in addition to race, color, religion, and national origin. “If sex is not included,” she argued in a memorandum sent to members of Congress, “the civil rights bill would be including only half of the Negroes.”47 Like most laws, Title VII’s full impact came about slowly, as women across the nation cited it to litigate and pressure employers to offer them equal opportunities compared to those they offered to men. For one, employers in the late sixties and seventies still viewed certain occupations as inherently feminine or masculine. NOW organized airline workers against a major company’s sexist ad campaign that showed female flight attendants wearing buttons that read, “I’m Debbie, Fly Me” or “I’m Cheryl, Fly Me.” Actual female flight attendants were required to wear similar buttons.48 Other women sued to gain access to traditionally male jobs like factory work. Protests prompted the Equal Employment Opportunity Commission (EEOC) to issue a more robust set of protections between 1968 and 1971. Though advancement came haltingly and partially, women used these protections to move eventually into traditional male occupations, politics, and corporate management.
The battle for sexual freedom was not just about the right to get into places, though. It was also about the right to get out of them—specifically, unhappy households and marriages. Between 1959 and 1979, the American divorce rate more than doubled. By the early 1980s, nearly half of all American marriages ended in divorce.49 The stigma attached to divorce evaporated and a growing sense of sexual and personal freedom motivated individuals to leave abusive or unfulfilling marriages. Legal changes also promoted higher divorce rates. Before 1969, most states required one spouse to prove that the other was guilty of a specific offense, such as adultery. The difficulty of getting a divorce under this system encouraged widespread lying in divorce courts. Even couples desiring an amicable split were sometimes forced to claim that one spouse had cheated on the other even if neither (or both) had. Other couples temporarily relocated to states with more lenient divorce laws, such as Nevada.50 Widespread recognition of such practices prompted reforms. In 1969, California adopted the first no-fault divorce law. By the end of the 1970s, almost every state had adopted some form of no-fault divorce. The new laws allowed for divorce on the basis of “irreconcilable differences,” even if only one party felt that he or she could not stay in the marriage.51
Gay men and women, meanwhile, negotiated a harsh world that stigmatized homosexuality as a mental illness or an immoral depravity. Building on postwar efforts by gay rights organizations to bring homosexuality into the mainstream of American culture, young gay activists of the late sixties and seventies began to challenge what they saw as the conservative gradualism of the “homophile” movement. Inspired by the burgeoning radicalism of the Black Power movement, the New Left protests of the Vietnam War, and the counterculture movement for sexual freedom, gay and lesbian activists agitated for a broader set of sexual rights that emphasized an assertive notion of liberation rooted not in mainstream assimilation but in pride of sexual difference.
Perhaps no single incident did more to galvanize gay and lesbian activism than the 1969 uprising at the Stonewall Inn in New York City’s Greenwich Village. Police regularly raided gay bars and hangouts. But when police raided the Stonewall in June 1969, the bar patrons protested and sparked a multiday street battle that catalyzed a national movement for gay liberation. Seemingly overnight, calls for homophile respectability were replaced with chants of “Gay Power!”52
The window under the Stonewall Inn sign reads: “We homosexuals plead with our people to please help maintain peaceful and quiet conduct on the streets of the Village–Mattachine.” Photograph 1969. Wikimedia.
In the following years, gay Americans gained unparalleled access to private and public spaces. Gay activists increasingly attacked cultural norms that demanded they keep their sexuality hidden. Citing statistics that sexual secrecy contributed to stigma and suicide, gay activists urged people to come out and embrace their sexuality. A step towards the normalization of homosexuality occurred in 1973, when the American Psychiatric Association stopped classifying homosexuality as a mental illness. Pressure mounted on politicians. In 1982, Wisconsin became the first state to ban discrimination based on sexual orientation. More than eighty cities and nine states followed suit over the following decade. But progress proceeded unevenly, and gay Americans continued to suffer hardships from a hostile culture.
Like all social movements, the sexual revolution was not free of division. Transgender people were often banned from participating in Gay Pride rallies and lesbian feminist conferences. They, in turn, mobilized to fight the high incidence of rape, abuse, and murder of transgender people. A 1971 newsletter denounced the notion that transgender people were mentally ill and highlighted the particular injustices they faced in and out of the gay community, declaring, “All power to Trans Liberation.”53
As events in the 1970s broadened sexual freedoms and promoted greater gender equality, so too did they generate sustained and organized opposition. Evangelical Christians and other moral conservatives, for instance, mobilized to reverse gay victories. In 1977, activists in Dade County, Florida, used the slogan “Save Our Children” to overturn an ordinance banning discrimination based on sexual orientation.54 A leader of the ascendant religious right, Jerry Falwell, said in 1980, “It is now time to take a stand on certain moral issues. . . . We must stand against the Equal Rights Amendment, the feminist revolution, and the homosexual revolution. We must have a revival in this country.”55
Much to Falwell’s delight, conservative Americans did, in fact, stand against and defeat the Equal Rights Amendment (ERA), their most stunning social victory of the 1970s. Versions of the amendment—which declared, “Equality of rights under the law shall not be denied or abridged by the United States or any state on account of sex”—had been introduced in Congress every year since 1923. It finally passed amid the upheavals of the sixties and seventies and went to the states for ratification in March 1972.56 With high approval ratings, the ERA seemed destined to pass swiftly through state legislatures and become the Twenty-Seventh Amendment. Hawaii ratified the amendment the same day it cleared Congress. Within a year, thirty states had done so. But then the amendment stalled. It took years for more states to pass it. In 1977, Indiana became the thirty-fifth and final state to ratify.57
By 1977, anti-ERA forces had successfully turned the political tide against the amendment. At a time when many women shared Betty Friedan’s frustration that society seemed to confine women to the role of homemaker, Phyllis Schlafly’s STOP ERA organization (“Stop Taking Our Privileges”) trumpeted the value and advantages of being a homemaker and mother.58 Marshaling the support of evangelical Christians and other religious conservatives, Schlafly worked tirelessly to stifle the ERA. She lobbied legislators and organized counter-rallies to ensure that Americans heard “from the millions of happily married women who believe in the laws which protect the family and require the husband to support his wife and children.”59 The amendment needed only three more states for ratification. It never got them. In 1982, the time limit for ratification expired—and along with it, the amendment.60
The failed battle for the ERA uncovered the limits of the feminist crusade. And it illustrated the women’s movement’s inherent incapacity to represent fully the views of 50 percent of the country’s population, a population riven by class differences, racial disparities, and cultural and religious divisions.
VIII. The Misery Index
Supporters rally with pumpkins carved in the likeness of President Jimmy Carter in Polk County, Florida, in October 1980. State Library and Archives of Florida via Flickr.
Although Nixon eluded prosecution, Watergate continued to weigh on voters’ minds. It netted big congressional gains for Democrats in the 1974 midterm elections, and Ford’s pardon damaged his chances in 1976. Former Georgia governor Jimmy Carter, a peanut farmer who had trained as an engineer in the navy’s nuclear submarine program and who represented the rising generation of younger, racially liberal “New South” Democrats, captured the Democratic nomination. Carter did not identify with either his party’s liberal or conservative wing; his appeal was more personal and moral than political. He ran on no great political issues, letting his background as a hardworking, honest, southern Baptist navy man ingratiate him to voters around the country, especially in his native South, where support for Democrats had wavered in the wake of the civil rights movement. Carter’s wholesome image was painted in direct contrast to the memory of Nixon, and by association with the man who pardoned him. Carter sealed his party’s nomination in June and won a close victory in November.61
When Carter took the oath of office on January 20, 1977, however, he became president of a nation in the midst of economic turmoil. Oil shocks, inflation, stagnant growth, unemployment, and sinking wages weighed down the nation’s economy. Some of these problems were traceable to the end of World War II when American leaders erected a complex system of trade policies to help rebuild the shattered economies of Western Europe and Asia. After the war, American diplomats and politicians used trade relationships to win influence and allies around the globe. They saw the economic health of their allies, particularly West Germany and Japan, as a crucial bulwark against the expansion of communism. Americans encouraged these nations to develop vibrant export-oriented economies and tolerated restrictions on U.S. imports.
This came at great cost to the United States. As the American economy stalled, Japan and West Germany soared and became major forces in the global production of autos, steel, machine tools, and electrical products. By 1970, the United States began to run massive trade deficits. The value of American exports dropped and the prices of its imports skyrocketed. Coupled with the huge cost of the Vietnam War and the rise of oil-producing states in the Middle East, growing trade deficits sapped the United States’ dominant position in the global economy.
American leaders didn’t know how to respond. After a series of negotiations with leaders from France, Great Britain, West Germany, and Japan in 1970 and 1971, the Nixon administration allowed these rising industrial nations to continue flouting the principles of free trade. They maintained trade barriers that sheltered their domestic markets from foreign competition while at the same time exporting growing amounts of goods to the United States. By 1974, in response to U.S. complaints and their own domestic economic problems, many of these industrial nations overhauled their protectionist practices but developed even subtler methods (such as state subsidies for key industries) to nurture their economies.
The result was that Carter, like Ford before him, presided over a hitherto unimagined economic dilemma: the simultaneous onset of inflation and economic stagnation, a combination popularized as “stagflation.”62 Neither Ford nor Carter had the means or ambition to protect American jobs and goods from foreign competition. As firms and financial institutions invested, sold goods, and manufactured in new rising economies like Mexico, Taiwan, Japan, Brazil, and elsewhere, American politicians allowed them to sell their often cheaper products in the United States.
As American officials institutionalized this new unfettered global trade, many American manufacturers perceived only one viable path to sustained profitability: moving overseas, often by establishing foreign subsidiaries or partnering with foreign firms. Investment capital, especially in manufacturing, fled the United States looking for overseas investments and hastened the decline in the productivity of American industry.
During the 1976 presidential campaign, Carter had touted the “misery index,” the simple addition of the unemployment rate to the inflation rate, as an indictment of Gerald Ford and Republican rule. But Carter failed to slow the unraveling of the American economy, and the stubborn and confounding rise of both unemployment and inflation damaged his presidency.
Just as Carter failed to offer or enact policies to stem the unraveling of the American economy, his idealistic vision of human rights–based foreign policy crumbled. He had not made human rights a central theme in his campaign, but in May 1977 he declared his wish to move away from a foreign policy in which “inordinate fear of communism” caused American leaders to “adopt the flawed and erroneous principles and tactics of our adversaries.” Carter proposed instead “a policy based on constant decency in its values and on optimism in our historical vision.”63
Carter’s human rights policy achieved real victories: the United States either reduced or eliminated aid to American-supported right-wing dictators guilty of extreme human rights abuses in places like South Korea, Argentina, and the Philippines. In September 1977, Carter negotiated the return to Panama of the Panama Canal, which cost him enormous political capital in the United States.64 A year later, in September 1978, Carter negotiated a peace treaty between Israeli prime minister Menachem Begin and Egyptian president Anwar Sadat. The Camp David Accords—named for the president’s rural Maryland retreat, where thirteen days of secret negotiations were held—represented the first time an Arab state had recognized Israel, and the first time Israel promised Palestine self-government. The accords had limits, for both Israel and the Palestinians, but they represented a major foreign policy coup for Carter.65
And yet Carter’s dreams of a human rights–based foreign policy crumbled before the Cold War and the realities of American politics. The United States continued to provide military and financial support for dictatorial regimes vital to American interests, such as the oil-rich state of Iran. When the President and First Lady Rosalynn Carter visited Tehran, Iran, in January 1978, the president praised the nation’s dictatorial ruler, Shah Mohammad Reza Pahlavi, and remarked on the “respect and the admiration and love” Iranians had for their leader.66 After the shah was deposed in January 1979, revolutionaries stormed the American embassy in Tehran that November and took fifty-two Americans hostage. Americans not only experienced another oil crisis as Iran’s oil fields shut down, they watched America’s news programs, for 444 days, remind them of the hostages and America’s new global impotence. Carter couldn’t win their release. A failed rescue mission only ended in the deaths of eight American servicemen. Already beset with a punishing economy, Carter’s popularity plummeted.
Carter’s efforts to ease the Cold War by achieving a new nuclear arms control agreement disintegrated under domestic opposition from conservative Cold War hawks such as Ronald Reagan, who accused Carter of weakness. A month after the Soviets invaded Afghanistan in December 1979, a beleaguered Carter committed the United States to defending its “interests” in the Middle East against Soviet incursions, declaring that “an assault [would] be repelled by any means necessary, including military force.” The Carter Doctrine not only signaled Carter’s ambivalent commitment to de-escalation and human rights, it testified to his increasingly desperate presidency.67
The collapse of American manufacturing, the stubborn rise of inflation, the sudden impotence of American foreign policy, and a culture ever more divided: the sense of unraveling pervaded the nation. “I want to talk to you right now about a fundamental threat to American democracy,” Jimmy Carter said in a televised address on July 15, 1979. “The threat is nearly invisible in ordinary ways. It is a crisis of confidence. It is a crisis that strikes at the very heart and soul and spirit of our national will.”
IX. Conclusion
Though American politics moved right after Lyndon Johnson’s administration, Nixon’s 1968 election was no conservative counterrevolution. American politics and society remained in flux throughout the 1970s. American politicians on the right and the left pursued relatively moderate courses compared to those in the preceding and succeeding decades. But a groundswell of anxieties and angers brewed beneath the surface. The world’s greatest military power had floundered in Vietnam and an American president stood flustered by Middle Eastern revolutionaries. The cultural clashes from the sixties persisted and accelerated. While cities burned, a more liberal sexuality permeated American culture. The economy crashed, leaving America’s cities prone before poverty and crime and its working class gutted by deindustrialization and globalization. American weakness was everywhere. And so, by 1980, many Americans—especially white middle- and upper-class Americans—felt a nostalgic desire for simpler times and simpler answers to the frustratingly complex geopolitical, social, and economic problems crippling the nation. The appeal of Carter’s soft drawl and Christian humility had signaled this yearning, but his utter failure to stop the unraveling of American power and confidence opened the way for a new movement, one with new personalities and a new conservatism—one that promised to undo the damage and restore the United States to its own nostalgic image of itself.
X. Primary Sources
Riots rocked American cities in the mid-late sixties. Hundreds died, thousands were injured, and thousands of buildings were destroyed. Many communities never recovered. In 1967, devastating riots, particularly in Detroit, Michigan, and Newark, New Jersey, captivated national television audiences. President Lyndon Johnson appointed an 11-person commission, chaired by Illinois Governor Otto Kerner, to explain the origins of the riots and recommend policies to prevent them in the future.
On April 23, 1971, a young Vietnam veteran named John Kerry spoke on behalf of the Vietnam Veterans Against the War before the Senate Committee on Foreign Relations. Kerry, later a Massachusetts senator and 2004 presidential contender, articulated a growing disenchantment with the Vietnam War and delivered a blistering indictment of the reasoning behind its prosecution.
Richard Nixon, who built his political career on anti-communism, worked from the first day of his presidency to normalize relations with the communist People’s Republic of China. In 1971, Richard Nixon announced that he would make an unprecedented visit there to advance American-Chinese relations. Here, he explains his intentions.
On July 12, 1976, Texas Congresswoman Barbara Jordan delivered the keynote address at the Democratic National Convention. As Americans sensed a fracturing of American life in the 1970s, Jordan called for Americans to commit themselves to a “national community” and the “common good.” Jordan began by noting she was the first Black woman to ever deliver a keynote address at a major party convention and that such a thing would have been almost impossible even a decade earlier.
On July 15, 1979, amid stagnant economic growth, high inflation, and an energy crisis, Jimmy Carter delivered a televised address to the American people. In it, Carter singled out a pervasive “crisis of confidence” preventing the American people from moving the country forward. A year later, Ronald Reagan would frame his optimistic political campaign in stark contrast to the tone of Carter’s speech, which would be remembered, especially by critics, as the “malaise speech.”
The first Congressional hearing on the equal rights amendment (ERA) was held in 1923, but the push for the amendment stalled until the 1960s, when a revived women’s movement thrust it again into the national consciousness. Congress passed and sent to the states for ratification the ERA on March 22, 1972. But it failed, stalling just three states short of the required three-fourths needed for ratification. Despite popular support for the amendment, activists such as Phyllis Schlafly outmaneuvered the amendment’s supporters. In 1970, author Gloria Steinem argued that such opposition was rooted in outmoded ideas about gender.
In November 1969, Native American activists occupied Alcatraz Island and held it for nineteen months to bring attention to past injustices and contemporary issues confronting Native Americans, as stated in this proclamation, drafted largely by Adam Fortunate Eagle of the Ojibwa Nation.
“Urban Decay” confronted Americans of the 1960s and 1970s. As the economy sagged and deindustrialization hit much of the country, many Americans associated major cities with poverty and crime. In this 1973 photo, two subway riders sit amid a graffitied subway car in New York City.
In the 1970s, conservative Americans defeated the Equal Rights Amendment (ERA). With high approval ratings, the ERA–which declared, “Equality of rights under the law shall not be denied or abridged by the United States or any state on account of sex”—seemed destined to pass swiftly through state legislatures and become the Twenty-Seventh Amendment, but conservative opposition stopped the Amendment just short of ratification.
Lewy, America in Vietnam, 164–169; Henry Kissinger, Ending the Vietnam War: A History of America’s Involvement in and Extrication from the Vietnam War (New York: Simon and Schuster, 2003), 81–82.
Richard Nixon, “Address to the Nation Announcing Conclusion of an Agreement on Ending the War and Restoring Peace in Vietnam,” January 23, 1973, American Presidency Project, http://www.presidency.ucsb.edu/ws/?pid=3808.
“Executive Privilege,” in John J. Patrick, Richard M. Pious, and Donald A. Ritchie, The Oxford Guide to the United States Government (New York: Oxford University Press, 2001), 227; Schulman, The Seventies, 44–48.
|
I. Introduction
On December 6, 1969, an estimated three hundred thousand people converged on the Altamont Motor Speedway in Northern California for a massive free concert headlined by the Rolling Stones and featuring some of the era’s other great rock acts.1 Only four months earlier, Woodstock had shown the world the power of peace and love and American youth. Altamont was supposed to be “Woodstock West.”2
But Altamont was a disorganized disaster. Inadequate sanitation, a horrid sound system, and tainted drugs strained concertgoers. To save money, the Hells Angels biker gang was paid $500 in beer to be the show’s “security team.” The crowd grew progressively angrier throughout the day. Fights broke out. Tensions rose. The Angels, drunk and high, armed themselves with sawed-off pool cues and indiscriminately beat concertgoers who tried to come on the stage. The Grateful Dead refused to play. Finally, the Stones came on stage.3
The crowd’s anger was palpable. Fights continued near the stage. Mick Jagger stopped in the middle of playing “Sympathy for the Devil” to try to calm the crowd: “Everybody be cool now, c’mon,” he pleaded. Then, a few songs later, in the middle of “Under My Thumb,” eighteen-year-old Meredith Hunter approached the stage and was beaten back. Pissed off and high on methamphetamines, Hunter brandished a pistol, charged again, and was stabbed and killed by an Angel. His lifeless body was stomped into the ground. The Stones just kept playing.4
If the more famous Woodstock music festival captured the idyll of the sixties youth culture, Altamont revealed its dark side. There, drugs, music, and youth were associated not with peace and love but with anger, violence, and death. While many Americans in the 1970s continued to celebrate the political and cultural achievements of the previous decade, a more anxious, conservative mood grew across the nation.
|
yes
|
Festivals
|
Did Woodstock festival promote peace and love?
|
no_statement
|
"woodstock" "festival" did not "promote" "peace" and "love".. "peace" and "love" were not "promoted" at "woodstock" "festival".
|
https://vocal.media/beat/woodstock-and-the-vietnam-war
|
Woodstock and the Vietnam War | Beat
|
Woodstock and the Vietnam War
How and to what extent did Woodstock influence the anti-war movement in the United States, particularly during the Vietnam War?
On August 15, 1969, roughly four hundred thousand Americans gathered at Max Yasgur’s dairy farm near White Lake, New York. Built around free love, the radical hippie movement, and drug culture, Woodstock became one of the most inspiring and liberating music festivals of its time. The festival brought together many noted musical artists in support of the counterculture’s effort to end the Vietnam War, and it drew together American youths who opposed the conflict. Woodstock offered an alternative community, the “hippie” community, for those who promoted peace and free love. With the war raging half a world away, Americans grew increasingly distressed by the mounting loss of soldiers and saw little reason for their country’s involvement; the era’s rebellious youth felt this most acutely. This investigation answers the question “How and to what extent did Woodstock influence the anti-war movement in the United States, particularly during the Vietnam War, between 1969 and 1975?” Society was slowly dividing between those who supported the war and those who opposed it, and that division pushed anti-war Americans to bring their movement into the spotlight by harnessing mass media and reshaping public views of the war. Critics, however, dismissed Woodstock as nothing more than a crowd listening to music and argued that it did little to aid the countercultural anti-war movement. Tom Wells contends that the counterculture would have been more effective, and would have succeeded in more ways than one, had it not focused so heavily on cultivating an attractive, media-driven image to draw in youth.
Woodstock mattered because it became a fundamental symbol of the growing youth movements. Young people began to recognize how far their voices could carry. As they broadcast views that departed from conventional society, Americans began to take youth seriously and to question the role and power of authority in the United States. Drawn by so many artists sharing one stage, youths from across the country attended Woodstock not only to celebrate free love and music but also to amplify their calls for peace and unity. Woodstock left a mark on American history as one of the most influential displays of societal power over government. The event rippled outward into a diverse range of countercultural activity and stands as an example of a successful youth movement opposed to war. Today, Woodstock occupies one of the highest pedestals in the history of anti-war strategy and the counterculture.
From its start, the Vietnam War produced two opposing views of America’s role. One held that the United States fought for its own security and economic interests; the other held that America fought to contain the spread of communism. As Americans came to trust television news, journalists gained the freedom to report nearly everything happening in Vietnam. Coverage of American soldiers dying in the war and of its unbearable conditions led people across the United States to withdraw their support. The longer the war went on, the more displeased the nation grew, and its progression made Americans increasingly hesitant about, and critical of, the United States’ involvement. Mark Barringer traces how the outlook of young people escalated rapidly from verbal protest to violent action. At the time, great numbers of American youths were drafted to Vietnam to replace losses abroad. Tom Wells argues, however, that the slowly evolving counterculture, sustained largely by financially comfortable Americans, chose the plain and practical strategy of simply refusing to participate in the war, creating what became known as “draft evasion.” American youth also understood that the war was extremely costly and placed the country’s economy at perilous risk.
As America’s failures in Vietnam mounted, people began to question the country’s involvement in the war. The Vietnam War saw the highest proportion of African Americans ever to serve in an American war, and a disproportionate share of them were placed on the front line compared with white soldiers. Learning this gave white Americans who opposed the inequitable treatment of African Americans yet another reason to reject the war. With many young people working as journalists or otherwise tied to media coverage, Americans learned more about how badly the situation in Vietnam was deteriorating, and protests got underway. According to historian Arthur M. Schlesinger, the Vietnam War was a prodigious mistake that carried both economic and social costs. From Barringer’s standpoint, the war eroded Americans’ ability to trust public institutions and media coverage. His incisive analysis of the war’s effect on the counterculture’s anti-war movement stresses how American youth began to put physical rather than merely verbal effort into ending the war and spreading countercultural views across the nation.
The counterculture was an underground cultural movement that sprouted around 1964 and lasted for nearly a decade. At its core, building a counterculture meant rejecting particular social norms. Between 1965 and 1975, the developing counterculture entailed a wholesale rejection of the Vietnam War and of cultural standards inherited from earlier generations, especially the 1950s. Throughout the 1960s, frictions erupted in American society over free speech, racial segregation, women’s rights, the American Dream (a set of ideals centered on the notion that freedom fuels affluence and sustenance), and, most of all, the Vietnam War. Because most of the counterculture’s participants were young, middle-class, white Americans, the movement had the economic means to amplify its goals: ending the war abroad and showing the American people the hardship of war. According to George McKay and Lee Andresen, people who followed a countercultural lifestyle blended ideals of peace, harmony, and love with, above all, music and an appreciation for art. Rejecting mainstream culture led people to use drugs, organize public political gatherings, and embrace obscenity in media. Through artists such as Bob Dylan, Jimi Hendrix, the Grateful Dead, and Jefferson Airplane, American music was visibly shaped by the movement and gave rise to genres unparalleled at the time, such as psychedelic rock and folk rock. With new genres came a prime addition to the movement: music festivals.
The movement clearly divided American society. Some viewed the counterculture as a social advance toward equality, free speech, and global peace. Others saw it as a futile effort to end the war. George McKay argues that many American citizens characterized the counterculture as unpatriotic and insubordinate, a frontal assault on the nation’s moral standards. Faced with these clashing views, authorities tried to blunt the movement’s effects and shut it down by cracking down on drug use, public political gatherings, and obscenity in publishing. Those very limitations and restrictions pushed the counterculture toward a new form of protest: the anti-war movement.
The anti-war movement began early in the Vietnam conflict, as Americans of different social standings united to promote peace and to end the war outright. One of the most important and effective campaigns was the Free Speech Movement (FSM) at the University of California, Berkeley, which drew inspiration from the civil rights struggle in the United States. The FSM gave students a model of how a nation’s youth could create change through organization. The FSM and the Students for a Democratic Society (SDS) gained national attention and learned, through national media coverage, that advocates and partisans existed in every corner of the country. Mark Barringer notes that members of the SDS and FSM held “teach-ins,” nonviolent protests staged at universities to spread awareness of what was happening in Vietnam and of the United States’ political and moral entanglement there. As the youth effort won praise, students built networks to circulate information about Vietnam and to share varied protest methods, aided by the Underground Press Syndicate and the Liberation News Service. The anti-war movement expanded in step with the war itself.
The Johnson administration, deeply invested in Vietnam, sent speakers to campuses to stoke pro-war attitudes among the young and arrested thousands of anti-war activists. In 1965 most of the country supported the war; by 1967 only about 35 percent did. Research on anti-war sentiment found that people’s socioeconomic status shaped their views of the war, and figures in the media and in government attacked and discredited the movement as a whole. The movement reached its peak during Nixon’s presidency. In November 1969, around 650,000 people demonstrated in Washington, D.C., and San Francisco, and the movement adopted new, more militant protest methods. Nixon and his administration tried to blunt the movement by wearing down its supporters and tracking its activities across the nation, and the movement itself struggled as some participants dropped out, doubting its effectiveness. In 1969 the counterculture turned to protest as an anti-war tactic. Artists such as Bob Dylan, Joan Baez, and Jimi Hendrix served as tethers bridging the chasm between older and younger generations. With their support the anti-war movement became known across the nation, its fame rising alongside that of the musicians. Artists were drawn to the power of their own music, and countercultural society was looking for ways to bridge the gray area between media and society; thus came the formation of Woodstock.
John Roberts, Joel Rosenman, Artie Kornfeld, and Mike Lang were the four essential organizers of the Woodstock festival. Roberts and Rosenman initially conceived the festival as a way to spend money in order to make more money, while Kornfeld and Lang’s original plan was to build a recording studio in New York as a “retreat” for rock musicians based in the city. The idea soon shifted to staging a concert for around 50,000 people to raise money for the studio. Emerging during the Vietnam War in the late 1960s, a tremendous number of influential artists had grown eager to advance the counterculture’s message of peace and love as public disapproval of American involvement in Vietnam grew. The organizers began looking for artists who could perform at the festival. Lang, who claimed not to care about the fees, knew that Woodstock needed three major acts: The Who, Jefferson Airplane, and Creedence Clearwater Revival. He spent around $180,000 on signing bands alone, knowing that the audience would consist largely of young Americans and that they deserved a show worth the money. The organizers did not expect more than 100,000 attendees and built a site designed for about 50,000, which created serious problems when roughly 450,000 people showed up. White Lake, a hamlet in southeastern New York, offered the four men what seemed the perfect spot for a concert of a hundred thousand people. By then, Woodstock’s role was to embody an influential generation, hence the festival’s slogan: “Three Days of Peace and Music.” The slogan appealed to a great many amicably rebellious youths by speaking directly to the goal of the counterculture’s anti-war movement.
With the anti-war movement’s strategies proving effective in the 1960s, artists were drawn to its goals and its nationwide popularity. Moved above all by the idea of peace, they put their musical abilities to work in support of the anti-war movement and of civil rights. Influential musicians such as Jefferson Airplane, Joan Baez, and Bob Dylan devoted themselves to the counterculture’s push to spread peace and love nationwide. One of the most influential was the highly acclaimed Bob Dylan. With his recording of “The Times They Are A-Changin’,” Dylan inspired hundreds of musicians and other artists to follow the counterculture and take up the goals of the anti-war movement and of social struggles such as racial equality and free speech. The song’s beauty lay in its simplicity: just Dylan, his guitar, and his harmonica. What drew people’s attention, though, were the lyrics. The song contains the lines “Come writers and critics/Who prophesize with your pen/And keep your eyes wide/The chance won’t come again.” Here Dylan addresses the American media, urging them to cover every story they can about Vietnam; the “chance” hints at the war itself, since nothing so large or so serious had ever confronted American society. Likewise, with the lines “There’s a battle outside/And it’s ragin’/It’ll soon shake your windows/And rattle your walls/Come mothers and fathers/Throughout the land/And don’t criticize/What you can’t understand,” Dylan captured the public’s view of the war. With so many families watching their sons and fathers sent off to fight, a great many people were angry and frustrated. Another prominent artist who shone a light on his own and society’s views of the Vietnam War was the guitar god Jimi Hendrix. “Machine Gun,” a recording in which Hendrix strove to capture the terror of war, was regularly played as a dedication to the soldiers fighting for their lives in Vietnam. In it, Hendrix uses his vast guitar skills to make the instrument resemble a firing machine gun, one of the most horrifying sounds in the soldiers’ environment in Vietnam. Another of his tracks that Americans in Vietnam and back home found musically captivating was “Purple Haze.” Hendrix likened the haze in the song to the purple smoke of the M18 grenades American forces used in the war. The lyrics run: “Purple haze all in my brain/Lately things just don’t seem the same/Acting funny but I don’t know why/Excuse me while I kiss the sky.” These lines drew attention not only to the haze itself but also to the fact that soldiers were dying without knowing what was happening around them. Most soldiers took a liking to Hendrix’s music and style because his anger at the American government matched society’s anger over the war in Vietnam. Having served in the military in the early 1960s, Hendrix also knew exactly how soldiers were treated on either side of the battle. With the counterculture movement surging in his time, Hendrix took advantage of these opposing attitudes toward the war and recognized that the anti-war movement needed a push from the media.
Lee Andresen, the author of Battlenotes: Music of the Vietnam War, analyzes the music released to the American public and its effect on the counterculture. His study of the type of music people wanted at the time, along with remarks from people of the era, makes his source highly valuable. Andresen focuses mostly on how and why the music was produced and what the artists intended. The limitation of the source is that it only analyzes lyrical depth with respect to the Vietnam War and offers no connection between song lyrics and social perspectives on the war.
During the late 1960s, rock 'n' roll amplified the achievements of the counterculture and the anti-war movement through artists such as The Doors, Crosby, Stills and Nash, and Janis Joplin, who was one of the performers at Woodstock. By backing the confused and rebelliously minded youth in American society at the time, music was able to show suffering soldiers the success of the counterculture's actions. Aside from solo artists, bands were highly influential in spreading countercultural and anti-war views, the two most successful being The Grateful Dead and Jefferson Airplane. The Grateful Dead's style fit well with what the hippies wanted from an artist at the time: music that fueled their inclination to dream. The Grateful Dead cultivated a powerful base for the folk genre and drew many artists of the time toward the initiative of putting an end to the Vietnam War. The band focused more on appealing to the social conflicts of the United States with songs such as "U.S. Blues", "The Golden Road", "Fire on the Mountain", and "Hell in a Bucket", written by the band's lead singer, Jerry Garcia, as expressions of hope for the dreams of the youth on the counterculture's side.
During the war, the favorite topic of discussion among anti-war youth was the injustice of the draft in the United States. Because financially privileged youth were able to avoid the draft by going to university, Creedence Clearwater Revival, one of the key performers at Woodstock, released their anti-war hit about the draft issue, "Fortunate Son". With lyrics such as "It ain't me/I ain't no senator's son/I ain't no fortunate one", John Fogerty, the lead singer, captures how the only people exempted from fighting in the war were those who were financially stable and lucky enough to avoid the draft with their money, unlike the majority of the youth in the United States. The song also projected an image of the United States as the complete opposite of a classless society.
Finally, as mentioned earlier, one of the most distinguishable bands to grow out of countercultural ideology was Jefferson Airplane. The band focused on exposing the flaws of the United States' participation in the Vietnam War and the ways it backfired. Its lyrics and style were less concerned with raising awareness of domestic issues in America and centered more on putting an end to the war. After the band released the song "Volunteers", people began to recognize the seriousness of its stance on ending the war, given the song's call to take the issue to the streets and mount an all-out revolution. It was evident that the music of the time carried an urgency to force the American government to end the Vietnam War.
Woodstock altering the American view of the Vietnam War: In the United States there is little room in society to draw a line between culture and political diplomacy. Woodstock proved itself a powerful symbol of the counterculture and served as an example of the anti-war movement's success and capability, and it brought with it numerous artists who promoted and supported the movement. Their vast fan bases made them the megaphones of the counterculture movement, reaching people through their song lyrics and style. Woodstock's slogan was "Three days of peace and music". The slogan was the festival's goal: achieving three full days of nothing but peaceful togetherness among Americans by putting the music of the counterculture to use. Historians who focus on America's countercultural backstory have studied the festival's rich history and background. Because many artists wrote lyrics and music that lend themselves to an anti-war reading, historians such as George McKay argue that Woodstock's artists brought the negligent treatment of people in Vietnam, the draft, racial issues, and the daily loss of hundreds of lives to the attention of people who had no idea why the anti-war movement was occurring. The festival also led many people at risk of being drafted to Vietnam to get out of it in any way they could, whether by faking mental illness, deliberately going to prison, or entering religious occupations. During the war, more than 50,000 Americans also left hastily for Canada to avoid the draft, since draft evasion was not a crime there. Woodstock was an icon of hope, showing that anti-war feeling was not limited to a small number of people. When the festival gathered 500,000 people to take part in the "three days of peace and music", the quiet participants in the counterculture's anti-war movement finally recognized the effect of their ideology, as a growing share of people lost the motivation to take part in the war. Woodstock included many different genres of music that appealed to different people. Opposing views also came to light from historians such as Tom Wells and David Card, as well as from Republican citizens, who regarded the festival as unpatriotic and even traitorous. As a symbol of the power of youth within the counterculture, Woodstock was able to diminish positive views of the war and fuel the desire to end it as soon as possible.
Between 1969 and 1975, views of the Vietnam War were mixed. At first, a large share of the public supported the continuation of the war but did not grasp the misfortunes befalling people abroad. After examining the anti-war movement and the anti-war sentiment in the lyrics performed at Woodstock, the investigation concludes that the festival aided the anti-war movement in the United States to the extent that people began to hold protests, boycott participation in the war, and support the movement. With the help of notable artists, social turmoil, and media attention, Woodstock became the tangible image of the counterculture's perspective on war. Nevertheless, this study shows that there are clear problems with reaching a final answer to the question because of the lack of representative views from American society and the lack of sufficient sources. The evidence and arguments considered lead to the conclusion that the festival became the alternative way for youth to make the nation listen when their voices were not heard through other channels, whether protests, strikes, or media coverage. In this way, Woodstock established itself as one of the key inflection points in American culture.
|
This brought several issues with the venue of the festival due to the fact that there were around 450,000 people taking part in it. White Lake, a town in southeast New York presented four men with the perfect spot to hold a concert worth a hundred thousand people. At this time, the role for Woodstock was to exemplify an influential generation and thus creating the slogan of the festival: “Three Days of Peace and Music”. This appealed to a great amount of amicably rebellious youth by targeting the goal of counterculture’s anti-war movement.
With the effective strategies of the anti-war movement in the 1960’s, artists were drawn by its goals and its popularity nationwide. Artists were affected most by the concept of peace and put their acoustic abilities to use to support the anti-war movement and support the civil rights of the people. Numerous modernly influential artists such as Jefferson Airplane, Joan Baez, and Bob Dylan entreated themselves to heartening the counterculture’s stress on spreading peace and love nation-wide. One of the most influential musicians during the anti-war movement was the highly acclaimed and notable Bob Dylan. With his recording of “A Times They Are A-Changin”, Dylan impelled hundreds of musicians and other artists to follow the counterculture movement and reach out to the goals of the anti-war movement and social clashes such as racial equality and free speech. The song’s beauty was its simplicity and only encompassed Dylan himself, his guitar, and his harmonica. However, what brought people’s attention to the song was his lyrics. The song contains the lyrics: “Come writers and critics/Who prophesize with your pen/And keep your eyes wide/ The chance wont come again”. By this, Dylan is reaching out to the media of the United States and how they must cover every story they can about Vietnam to share. Also, by “chance”, he is hinting at the war in Vietnam and how nothing as large and as serious to ever come across the American society. As well, by mentioning the lyrics: “There’s a battle outside/ And it’s ragin’
|
yes
|
Festivals
|
Did Woodstock festival promote peace and love?
|
no_statement
|
"woodstock" "festival" did not "promote" "peace" and "love".. "peace" and "love" were not "promoted" at "woodstock" "festival".
|
https://www.jpost.com/israel-news/israelis-who-grooved-at-woodstock-604192
|
Israelis who grooved at Woodstock - Israel News - The Jerusalem Post
|
Israelis who grooved at Woodstock
‘I remember that it was two years after the Six Day War. I was 16 and I loved meeting new people’
By DUDI PATIMER
Published: OCTOBER 10, 2019 15:54
Woodstock Festival
(photo credit: Courtesy)
Traffic, torrential rain, lack of food, sex, drugs and lots of rock-n-roll – this is what turned the Woodstock Music Festival into a legend. Fifty years later, a number of Israelis who were among the half a million attendees at the rock festival recount for us that unforgettable experience.
These are the lyrics from a song called “Woodstock,” which describes the mythological event. It was performed by the British band Matthews Southern Comfort and reached the top of the UK pop charts in 1970. The song was written by Joni Mitchell, who wasn’t even at the festival, since she was busy giving a concert in another location.
Mitchell had heard all about Woodstock from her boyfriend, Graham Nash, who had performed at Woodstock with the other members of his band, Crosby, Stills, Nash and Young. If you read the words of the song, you can begin to understand how for the people who lived in the US at the time, the festival was symbolic of the 1960s counterculture, and much more than a gathering of flower children who wanted to listen to music, do drugs and engage in sex.
The festival, which had begun as a single concert to promote a New York record label, soon turned into a three-day-long event that took on political significance and promoted peace and love (and not war). This was toward the end of the Vietnam War, which had taken a heavy toll on the American public. A number of well-known musicians participated, including Jimi Hendrix, Joe Cocker, Janis Joplin, Jefferson Airplane, Joan Baez, The Grateful Dead, Creedence Clearwater Revival, The Who, Richie Havens, Santana and many more fantastic artists, some of whom became famous following their participation in Woodstock.
The Israeli contingent was made up of bohemian artists like Arik Einstein, Uri Zohar, Shalom Hanoch, Boaz Davidson, Zvi Shisel, and Yonatan Gefen, who embraced the messages of peace and camaraderie, and incorporated them into their own music and acts. The closest comparison is the Hebrew version of John Lennon’s song, “Give Peace a Chance,” performed by a group of Israel’s top artists, including Einstein, the Churchills, Hagashash Hachiver and Yossi Banai. A number of Israelis who happened to be in the US at the time were able to attend the three-day festival in August 1969.
“IT WAS mid-summer, 1969. The Vietnam War was in full throttle, and opposition to the war was growing every day,” recalls blogger Jeff Meshel. “In the next town over from where I was going to college, people were extremely conservative and they would get very upset if they heard someone say anything unpatriotic or saw young men with long hair. I remember seeing bumper stickers with, ‘America: Love it or Leave it’ written on them. There was so much tension in the air, which had only increased following the murder of Bobby Kennedy and Martin Luther King, Jr., and riots that had taken place in Chicago during the Democratic National Convention. Ninety percent of Americans hated us purely because we had long hair.”
“We drove north in the purple Mustang belonging to my friend Bill, who worked on the student newspaper with me,” continues Meshel. “On the ride up, we started hearing reports about heavy traffic on the roads leading to the festival area. We decided to get rid of all the marijuana we had left just in case there was a police checkpoint up ahead. We reached the edge of the traffic around noon, which was about 10 kilometers from the festival. Lots of people were parking on the side of the road and walking, so we decided to do the same.”
What’s the first thing you saw when you arrived at the site?
“Lots of huge fields. It was very pastoral and quiet. Some people were sitting on the hoods of their cars, just hanging out and smiling as other people walked by. People had come from as far away as Wisconsin, Missouri, Virginia and Ohio. Many of us felt like society had rejected us, and it felt good to find kindred spirits. Turns out there were like half a million of us at the festival. Freaks, despised by society, discovering that they had a community after all.”
Yankale Shemesh, a Jewish meditation instructor, had been a hippie in the US before he became religious and made aliyah. He’d shown up at Woodstock a full two weeks before the festival began so that he could get a good spot.
“We thought maybe if we came ahead of time, we’d find some paying work, but there wasn’t really any,” he recalls. “I helped set up the stages – a big one that the artists stood on while performing, and a smaller one for them to hang out on after they’d finished their set. The festival organizers had only brought enough food to feed about 10,000 people – that’s how many people they thought would show up. No one imagined in their wildest dreams that so many would come in the end.”
JUDY PORAT, who lives now on Kibbutz Be’erot Yitzchak, was also at Woodstock.
“I was just about to start my fourth year of college at the University of Connecticut,” recalls Porat. “Two friends of mine and I had seen an ad for the festival, and it looked like great fun. We started driving early in the morning on the first day of the festival and ended up parking in a nearby town and walking four kilometers. Everyone there had marijuana or LSD with them. We didn’t go for the music really – we just wanted to enjoy the atmosphere. We were all pretty much high the whole time. It was all very peaceful. There were no police around.”
How long did you stay there?
“It rained really hard the first night of the festival, and so the next day was a catastrophe. There were no bathrooms, the food ran out really quickly, and there weren’t any first-aid stations. We were completely covered in mud, so we left. They were not equipped to handle 500,000 people.”
Shulamit Berman, Porat’s next-door neighbor on the kibbutz, arrived at the festival later that day and had a completely different experience.
“I was a huge rock-n-roll fan,” Berman says eagerly. “Since I keep Shabbat, I’d planned to go up only on Sunday morning with a few other religious friends. It was very mellow and nice. People seemed really happy and full of love.”
Shmulik Bar-Oz, who used to play in the Israeli band Hanisichim, which opened for Jimi Hendrix at Woodstock, arrived at the festival on the third day.
“I’d been traveling around the US with a friend of mine, and we heard there was a cool festival going on, so we decided to check it out,” Bar-Oz explains. “By the time we got there, so many people were going in and out that they were no longer checking for tickets. I’d performed before with Hendrix, so it was incredible to see him again up on the stage.”
“I REMEMBER that it was two years after the Six Day War. I was 16 and I loved meeting new people. And Americans loved hanging out with Israelis, so it was lots of fun for me,” recalls Asher Hershkowitz, who’d been visiting relatives in Brooklyn. “A few guys I met told me they were going to a huge party and that I should come with them. So I joined them. There were lots of people doing drugs, but I wasn’t interested. This was total culture shock for me coming from Israel.”
Did you enjoy the music?
“Well, to be honest, we couldn’t hear it very well,” says Hershkowitz. “It was kind of like background music. And then all of a sudden there was lightning and a huge thunder and I realized we were going to get caught in an enormous storm. I was only 16, so I didn’t mind at all. The first time I actually noticed the music was when Joe Cocker starting singing ‘With a Little Help from My Friends.’ I love the Beatles and when I heard Joe, I left my friends and moved closer to the stage so I could get a closer look. I loved the way he moved his arms around when he sang. His singing really touched my heart.”
The excessive rain on the first night was just one of the difficulties festival attendees had to contend with.
“A few of us needed to pee, and so we began looking for a space where we wouldn’t be right on top of people,” describes Meshel. “It took us 30 minutes to find an empty spot. Then we got hungry, but we couldn’t find any food. So we returned to the car and drove right up to the area where the stage was set up. I don’t actually remember any of the music that was played at the festival – nobody seemed to be paying much attention to it. Then we saw a girl who was experiencing a really bad trip, so we took her to the medical tent. People were dancing and singing in the mud. Some were partly naked. At some point we returned to the car because we were starving. The only thing we’d had to eat was my grandmother’s apple strudel. We tried to get some sleep, but couldn’t really. At 6 a.m., we got in the car and left.”
On March 26, 1970, the movie Woodstock, directed by Michael Wadleigh, debuted. The 330-minute film documents all three days of the festival, and gives a different perspective than that of the few Israelis who’d joined the half a million people at the festival.
“When I was at Woodstock, I didn’t really understand the proportions of what was going on around me,” admits Hershkowitz. “Only after I saw the movie did I begin to comprehend that I had been part of something so momentous. When I returned home after my trip and told friends that I’d been at Woodstock, nobody really reacted with much excitement. Of course, nowadays when I tell people, they think that’s so incredible.”
“We were surrounded by naked people dancing around as we lounged on the grass,” recalls Porat, “but only after I saw the movie did I really understand how crazy it was there. At the time, we thought it was just a fun concert with good music. Over the years, it has taken on greater significance. It certainly was a unique experience.”
“I admit I love seeing the reaction I get from people when I tell them I was at Woodstock,” Meshel confides. “When I hear the songs on the radio nowadays that were played live at the concert, I wax nostalgic, and regret that we left after the first day.”
|
Israelis who grooved at Woodstock
‘I remember that it was two years after the Six Day War. I was 16 and I loved meeting new people’
By DUDI PATIMER
Published: OCTOBER 10, 2019 15:54
Woodstock Festival
(photo credit: Courtesy)
Traffic, torrential rain, lack of food, sex, drugs and lots of rock-n-roll – this is what turned the Woodstock Music Festival into a legend. Fifty years later, a number of Israelis who were among the half a million attendees at the rock festival, recount for us that unforgettable experience.
These are the lyrics from a song called “Woodstock,” which describes the mythological event. It was performed by the British band Matthews Southern Comfort and reached the top of the UK pop charts in 1970. The song was written by Joni Mitchell, who wasn’t even at the festival, since she was busy giving a concert in another location.
Mitchell had heard all about Woodstock from her boyfriend, Graham Nash, who had performed at Woodstock with the other members of his band, Crosby, Stills, Nash and Young. If you read the words of the song, you can begin to understand how for the people who lived in the US at the time, the festival was symbolic of the 1960s counterculture, and much more than a gathering of flower children who wanted to listen to music, do drugs and engage in sex.
The festival, which had begun as a single concert to promote a New York record label, soon turned into a three-day-long event that took on political significance and promoted peace and love (and not war). This was toward the end of the Vietnam War, which had taken a heavy toll on the American public. A number of well-known musicians participated, including Jimi Hendrix, Joe Cocker, Janice Joplin, Jefferson Airplane, Joan Baez, The Grateful Dead, Creedence Clearwater Revival, The Who, Richie Havens, Santana and many more fantastic artists, some of whom became famous following their participation in Woodstock.
|
yes
|
Meteoritics
|
Did a meteor impact cause the Younger Dryas period?
|
yes_statement
|
a "meteor" "impact" "caused" the younger dryas "period".. the younger dryas "period" was "caused" by a "meteor" "impact".
|
https://beta.capeia.com/planetary-science/2019/06/03/disappearance-of-ice-age-megafauna-and-the-younger-dryas-impact
|
Disappearance of Ice Age Megafauna and the Younger Dryas Impact
|
Editorial
When Lewis and Clark reached the Great Plains in 1804, they found a place abounding with megafauna species including antelope, deer, elk, and bison. They also found this huge territory to be divided up between a number of Native American tribes, who made their living from hunting these large mammals. These humans were the apex predators, together with wolves and bears, and there was what appeared to be a perfect predator-prey balance. There is no plausible reason why such balanced predator-prey dynamics should not have also existed thousands of years earlier in North America. At that time the ice age megafauna was dominated by various species of bison, horse, mastodon and the like, which were preyed on by paleo-Indian hunters, most notably the Clovis people, who were the direct ancestors of modern Native American populations. Although the correlation between the establishment of the Clovis culture and the demise of the megafauna remains poorly characterized, the "overkill hypothesis" has become a dogma stating that these early, locally sustained hunter-gatherer societies wiped out the majority of the megafauna on a continental scale. As happens so often with revolutionary ideas and concepts that challenge dogmas, an alternative explanation for the demise of the megafauna has been met with fierce opposition by mainstream advocates, who have unfortunately taken their response far beyond academic arguments. The proponents of this novel theory, led by Rick Firestone, offer a very different scenario for what happened to the North American megafauna at the end of the last glacial period. This novel extinction scenario is far more plausible from a biological viewpoint, and there can be no doubt that this new theory deserves to be treated with considerable academic respect and support.
Disappearance of Ice Age Megafauna and the Younger Dryas Impact
The Younger Dryas (YD) was a sudden period of rapid cooling inferred from oxygen isotopic ratios (18O/16O) in the Greenland Ice Core (GISP2), beginning 12,834 ± 20 years ago and lasting approximately 1,300 years. It followed a 5,000-year period of global warming after the last glacial period. It is widely accepted that the YD was caused by the shutdown of the North Atlantic current that circulates warm tropical waters northward. This is proposed to be due to the sudden influx of fresh water from the deglaciation of North America. In the 1990s William Topping discovered that the Gainey Clovis site in Michigan, dating to the onset of the YD, contained copious amounts of iron spherules and particles consistent with an impact event. This evidence has subsequently been extended to scores of additional Clovis-age sites, where a layer of magnetic particles and microspherules, rich in platinum group elements (PGE) and containing nanodiamonds and other high-temperature carbon materials, is found, often covered by an organic-rich black mat that forms a demarcation line above which no megafauna fossils exist. Simultaneous evidence of major biomass burning is seen in 152 lakes, sediments, marine and ice cores over a wide geographical area. These data provide direct evidence of a comet or meteorite impact event that is referred to as the Younger Dryas Impact.
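As a side note on how that cooling signal is read: ice-core temperature proxies are usually reported in the standard delta notation for the 18O/16O ratio. The sketch below is illustrative only; the reference ratio and the sample value are generic assumptions, not GISP2 data.

```python
# Minimal sketch of the delta notation used for ice-core oxygen isotopes.
# R_STANDARD is the commonly quoted VSMOW 18O/16O ratio; both it and the
# sample ratio below are illustrative placeholders, not GISP2 measurements.
R_STANDARD = 2.0052e-3  # approximate 18O/16O of the VSMOW standard

def delta_o18(r_sample: float, r_standard: float = R_STANDARD) -> float:
    """Return delta-18O in per mil; more negative values broadly indicate colder ice."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A hypothetical glacial-age sample, slightly depleted in 18O:
print(round(delta_o18(1.930e-3), 1))  # about -37.5 per mil
```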
The black mat
In a study of 97 geoarcheological sites, Vance Haynes found that two-thirds have a black, organic-rich layer (black mat) that dates to the onset of the YD. No evidence of megafaunal remains is found within or above the black mat. Haynes concluded that "stratigraphically and chronologically the extinction appears to have been catastrophic, seemingly too sudden and extensive for either human predation or climate change to have been the primary cause". An example of the black mat at the Murray Springs Clovis site is shown in Figure 1. Numerous megafaunal fossils have been found directly in contact with the black mat (Fig. 1).
Figure 1. The black mat at Murray Springs Clovis site (left). A mammoth thigh bone from Murray Springs. The upper dark area was in contact with the black mat (right) (Image credits: Richard B. Firestone).
Vance Haynes has discussed the unusual "Big Eloise" mammoth find at Murray Springs: "Her bones were arranged in a lifelike pose with her head stretched out upright on the ground, just as she had fallen 13,000 years ago" (Fig. 2). Broken spear points, scrapers, knives, and other hunting tools were found beside her. Both her back legs were cut off, one lying next to her and the other a short distance away near a fire pit. A broken spear straightening tool was found nearby lying in a mammoth footprint as if it had been trampled by Eloise before her demise. The black mat was draped over Eloise and the other objects at the site. Clearly the butchering of Eloise was suddenly halted by the catastrophic event that must have produced the black mat.
Figure 2. Big Eloise's skeleton as found by Vance Haynes. The dark color is the black mat (Image credit: C. Vance Haynes Jr., University of Arizona).
The widespread occurrence of a black mat 12,800 years ago, above which no megafauna fossils are found, is compelling evidence that the megafauna disappeared suddenly in a catastrophic event.
Magnetic particles and microspherules
A magnet placed directly below the black mat at Murray Springs and other sites will attract many magnetic particles not found in quantity further below or above the black mat. Microscopic analysis of these particles reveals many are spherical, some are dumbbell shaped, others are bottle shaped, hollow, or contain accretions or microcratering by smaller spherules (Fig. 3).
A few microspherules had shear marks that would have formed at >2,200°C. It appears that these microspherules emerged from a cloud of molten iron droplets, colliding with each other as they cascaded to Earth. The microspherules are geochemically and morphologically comparable with cosmic impact material from Meteor Crater, Arizona, and with material from the Trinity nuclear airburst in Socorro, New Mexico, and inconsistent with anthropogenic, volcanic, and authigenic materials. Similar evidence has been observed at 18 widely separated sites. This magnetic layer is also found at YD sites where no black mat is present and on the surfaces of YD megafaunal fossils.
Platinum group elements
The PGE elements Os, Ir, Ru, Rh, Pt, Pd are rare in Earth's crust but abundant in meteorites. Iridium, for example, was found at the Cretaceous–Paleogene (K-Pg) boundary, indicating that the disappearance of the dinosaurs was linked to a meteorite impact. Similarly, a high concentration of Pt is found in the YDB at 26 sites including the Greenland ice core, where the deposition is dated at 12,825 ± 20 cal yr BP. In addition, Ir and Os anomalies have also been found in the Greenland ice core. PGE elements are compelling evidence that the onset of the YD is associated with a meteor impact.
Widespread fire
Beneath the black mat and at all YDB sites are found charcoal, vesicular carbon spherules, glassy carbon, aciniform carbon (AC/soot), and nanodiamonds (Fig. 4), all of which are evidence of associated burning.
While some of these burn markers could indicate localized fire, AC/soot and nanodiamonds have only been associated with meteorite impacts. The evidence for widespread biomass burning is not limited to YDB geoarcheological sites but extends to an even broader study of lake and marine cores across four continents, where a simultaneous peak in burning is seen at the onset of the YD. Profound evidence of widespread burning at the onset of the YD is also seen in the Greenland ice cores, where a massive peak in ammonium (NH4), a marker of biomass burning, is observed at 12,822 ± 1 cal yr BP. Other markers, including nitrate, oxalate, acetate, and formate, indicate that this was the largest biomass burning episode in at least 120,000 years. CO2 evidence from Antarctic ice cores suggests that fires may have consumed ≈ 9% of Earth's biomass.
Population decline and extinction
At the onset of the Younger Dryas there was a massive, worldwide extinction of mammals weighing over 40 kg. It is estimated that 82% of these animals disappeared in North America, 74% in South America, 71% in Australasia, 59% in Europe, 52% in Asia, and 16% in Sub-Saharan Africa. Fossil evidence suggests the disappearances were very sudden. In addition to the extinction of the mammoths and the disappearance of horses in North America, other species, including bison, deer, and moose, suffered massive population losses (Fig. 5).
Figure 5. Megafauna and large mammal population changes leading up to and following the onset of the YD (Image adapted from Guthrie RD, 2006, Nature 441: 207–209).
Conversely, the human population in North America increased dramatically following the Younger Dryas. Clovis people hunted mammoths and other megafauna for only a short time before that prey disappeared. Waters and Stafford estimated that the Clovis period lasted as little as 200 years. However, their method of laying out the measured dates in order of value, irrespective of uncertainty, is invalid. A better method is simply a weighted average of the data. This yields average radiocarbon ages of 10,955 ± 27 yr and 10,970 ± 14 yr for North American and South American Clovis sites, respectively. These ages correspond to a calendar date range of 12,887–12,827 cal yr BP after correction for the difference in the radiocarbon and ice core scales. The Clovis era lasted only ≈ 60 years in the Americas and ended at the onset of the YD. The short period of time that Clovis people hunted mammoth is insufficient to have had a significant effect on their extinction, although humans may indeed have slaughtered the last remaining stragglers.
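For readers who want the arithmetic behind the weighted average mentioned above, the sketch below implements a standard inverse-variance weighted mean. The input dates are made-up placeholders, not the actual Clovis measurements behind the 10,955 ± 27 yr figure; only the formula is the point.

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma = 1.0 / math.sqrt(sum(weights))
    return mean, sigma

# Hypothetical radiocarbon dates (14C yr BP) with 1-sigma errors:
dates = [10940, 10965, 10980]
errors = [40, 35, 50]
mean, sigma = weighted_mean(dates, errors)
print(f"{mean:.0f} +/- {sigma:.0f} 14C yr BP")  # each date weighted by 1/sigma^2
```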
The Younger Dryas Impact
The nature of the object that caused the YD impact has not yet been experimentally determined. William Napier has proposed that the impact may have come from the debris field of a large 50–100 km comet that broke up in the inner solar system. Radio and visual data have identified a zodiacal cloud of largely cometary material including the remnants of Comet Encke. Napier hypothesized that terrestrial catastrophes may happen on timescales of approximately 0.1–1 Myr, due to the Earth running through swarms of debris from disintegrating large comets. This provides a plausible mechanism for the YD impact although direct association with this source is not proven.
The lack of a recognized crater has long been a criticism of the YD Impact. The orientation of the Carolina Bays points to an impact near the Great Lakes, but that explanation remains controversial. The widespread geographical evidence for the YD Impact suggests that there may have been multiple impacts from the breakup of a large, poorly consolidated comet striking various locations, as described by Napier. Recently a 31 km wide crater has been found beneath Hiawatha Glacier in Greenland, dating to the late Pleistocene (Fig. 6). The authors imply that this impact is consistent with the onset of the YD.
Significant evidence of a YDB layer in Greenland was previously reported by several authors. From the crater size we can estimate that an energy of ≈ 2.4 × 10⁶ megatons of TNT (Mt) was released by the impact, assuming either a 2.6 km diameter comet traveling at 50 km/s or a 4.2 km asteroid traveling at 15 km/s, and assuming the impact was into rock. An ice impact would require a meteorite ≈ 2 times as large. In either case the blast from this impact would devastate an area up to 1,300 km from the impact, although the crater is so remote that the devastation area cannot directly explain the YD fires or the disappearance of megafauna. The average global thickness of the dust layer formed by the impact would be ≈ 4 mg/cm³, ≈ 200 times larger than the observed microspherule density. This dust would lead to sudden global cooling, worldwide drought, and failure of photosynthesis for an extended period of time. Substantial NOx and SO2 would be produced by this impact, enough to destroy Earth's ozone layer, exposing all life to increased ultraviolet radiation and increased acid rain. The impact that caused Hiawatha crater must have left a strong signature in the GISP2 climate record, yet the only evidence in GISP2 of an impact coincides with the onset of the YD. There is no plausible reason why an impact leaving a 31 km crater wouldn't have had a major global impact and leave its mark in the geological record.
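As a rough plausibility check on the energy estimate above, kinetic energy alone (½mv²) lands in the same order of magnitude. The impactor densities below are generic assumptions for cometary ice and stony asteroids rather than values from the article, so treat this as an illustrative sketch, not the authors' calculation.

```python
import math

MEGATON_TNT_J = 4.184e15  # joules per megaton of TNT

def impact_energy_mt(diameter_m, density_kg_m3, velocity_m_s):
    """Kinetic energy of a spherical impactor, expressed in megatons of TNT."""
    radius = diameter_m / 2.0
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius ** 3
    return 0.5 * mass * velocity_m_s ** 2 / MEGATON_TNT_J

# Comet case: 2.6 km across, assumed icy density ~1,000 kg/m^3, 50 km/s
print(f"{impact_energy_mt(2600, 1000, 50e3):.1e}")   # roughly 2.8e6 Mt
# Asteroid case: 4.2 km across, assumed stony density ~3,000 kg/m^3, 15 km/s
print(f"{impact_energy_mt(4200, 3000, 15e3):.1e}")   # roughly 3.1e6 Mt
```

Both cases come out within a factor of a few of the ≈ 2.4 × 10⁶ Mt quoted above, with the difference driven mainly by the assumed impactor density.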
The Eltanin impact, south of Chile in the Pacific Ocean, is comparable to the Hiawatha Crater. Its crater is estimated at ≈ 35 km in diameter, and the impact produced a thin melt rock layer enriched in iridium. The Eltanin impact occurred 2.51 ± 0.06 million years ago, coincident with the end of the Pliocene era. It produced a 200 m high tsunami at the coast of Chile that was still 35 m high at the coast of Tasmania. The Eltanin impact pushed the Earth into an ice age, and a mass extinction of 36% of all genera occurred, including 55% of all marine mammals, 35% of sea birds, 43% of sea turtles, and 9% of sharks. This suggests that an impact at Hiawatha Crater could cause similar worldwide climate change and extinctions. All previous epochs appear to be correlated with the dates of meteor impacts leaving craters comparable to Hiawatha Crater, as shown in Figure 7. No craters are observed in the ocean and few have been found in ice, suggesting that the true impact rate is ≈ 4 times the rate inferred from terrestrial craters. It is apparent that these events are important triggers of climate change, extinctions, and evolution.
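The ≈ 4 times correction is essentially a surface-area argument: craters are only preserved and recognized on exposed land, a minority of Earth's surface. A back-of-the-envelope version, with the ocean and ice fractions as rough assumptions rather than figures from the article:

```python
# Back-of-the-envelope version of the crater-preservation correction.
# The fractions are generic round numbers, not values from the article:
# roughly 71% of Earth's surface is ocean, and a further few percent is
# land buried under ice sheets, where craters are rarely found.
ocean_fraction = 0.71
ice_covered_fraction = 0.03  # assumed: Antarctica plus Greenland, roughly

exposed_land = 1.0 - ocean_fraction - ice_covered_fraction
correction = 1.0 / exposed_land
print(round(correction, 1))  # about 3.8, i.e. close to the ~4x quoted above
```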
The bigger picture: regular impacts of comet debris
The Younger Dryas impact is also supported theoretically by astrophysical observations showing that Earth regularly interacts with the debris of massive dead comets. Historically, Earth impacts are correlated with major geological changes, as with the Eltanin impact, which involved an object comparable in size to the Hiawatha Crater impactor and caused an extinction rate comparable to that at the YD at the end of the Pliocene epoch. The YD impact coincides with the end of the Pleistocene epoch. While impacts as large as the K-Pg event that killed the dinosaurs are rare, there are certainly many smaller impacts that devastate localized regions yet go unrecognized. Such impacts likely occurred in Beringia ≈ 32–42 ka ago, producing massive muck deposits containing the intermixed, smashed remains of megafauna fossils and trees, micrometeorites embedded in fossils, and microspherules and PGE elements in the sediments, and likely causing the disappearance of a major mammoth mitochondrial lineage. The YD impact is not only supported by extensive physical evidence, but also expected from a long history of cosmic impacts (Fig. 7).
Figure 7. Comparison of known large crater ages (black dots) with the dates of geological epochs. Nearly all coincide with either onset of an epoch or the beginning of a stage within an epoch (dotted lines). Data from the Earth Impact Database (Image credit: Richard B. Firestone).
The Younger Dryas impact should no longer be viewed as a hypothesis but instead as an observation. The black mat provides a distinct dividing line, dated to the onset of the YD, at numerous sites above which no evidence of megafauna exists. Directly beneath the black mat is a thin deposit of impact markers including microspherules, nanodiamonds, PGE elements, and AC/soot. These impact markers can be found adhering directly to the fossil bones of megafauna. Widespread fires occurred near the onset of the YD, and the products of intense burning peak simultaneously with the 18O/16O ratio signature of sudden YD cooling in the Greenland ice core. Clovis hunting stops abruptly as the YD commences. This scenario is self-consistent and abundantly supported by experimental data from over 22 sites (Fig. 8).
Figure 8. The Younger Dryas Boundary strewnfield. The red boundary line defines the current known limits of the YDB field of cosmic-impact proxies spanning 50 million square kilometers (Image credit: Richard B. Firestone).
The evidence for the YD impact is compelling but continues to receive widespread criticism from a small number of purists. Attempts by reviewers to suppress the publication of the data have generally failed, and arguments against the data-based conclusions are largely anecdotal. In many respects the arguments against the YD impact have followed similar lines of criticism applied to continental drift, the impact origin of craters on the moon, and the K-Pg impact that killed the dinosaurs. In each case "experts" refused to change preconceived notions despite the accumulated evidence. Perhaps the YD Impact can receive quicker acceptance since it is a harbinger of future catastrophic events that we expect to occur.
|
No craters are observed in the ocean and few have been found in ice, suggesting that the true impact rate is ≈ 4 times the rate inferred from terrestrial craters. It is apparent that these events are important triggers of climate change, extinctions, and evolution.
The bigger picture: regular impacts of comet debris
The Younger Dryas impact is also supported theoretically by astrophysical observations showing that Earth regularly interacts with the debris of massive dead comets. Historically, Earth impacts are correlated with major geological changes, as with the Eltanin impact, which involved an object comparable in size to the Hiawatha Crater impactor and caused an extinction rate comparable to that at the YD at the end of the Pliocene epoch. The YD impact coincides with the end of the Pleistocene epoch. While impacts as large as the K-Pg event that killed the dinosaurs are rare, there are certainly many smaller impacts that devastate localized regions yet go unrecognized. Such impacts likely occurred in Beringia ≈ 32–42 ka ago, producing massive muck deposits containing the intermixed, smashed remains of megafauna fossils and trees, micrometeorites embedded in fossils, and microspherules and PGE elements in the sediments, and likely causing the disappearance of a major mammoth mitochondrial lineage. The YD impact is not only supported by extensive physical evidence, but also expected from a long history of cosmic impacts (Fig. 7).
Figure 7. Comparison of known large crater ages (black dots) with the dates of geological epochs. Nearly all coincide with either onset of an epoch or the beginning of a stage within an epoch (dotted lines). Data from the Earth Impact Database (Image credit: Richard B. Firestone).
The Younger Dryas impact should no longer be viewed as a hypothesis but instead as an observation.
|
yes
|
Meteoritics
|
Did a meteor impact cause the Younger Dryas period?
|
yes_statement
|
a "meteor" "impact" "caused" the younger dryas "period".. the younger dryas "period" was "caused" by a "meteor" "impact".
|
https://www.astronomy.com/science/fossil-evidence-casts-doubt-on-younger-dryas-impact-theory/
|
Fossil evidence casts doubt on Younger Dryas impact theory ...
|
Fossil evidence casts doubt on Younger Dryas impact theory
Whereas proponents of the theory have offered "carbonaceous spherules" and nanodiamonds, both of which they claimed were formed by intense heat, as evidence of the impact, a new study concludes that those supposed clues are nothing more than fossilized balls of fungus, charcoal, and fecal pellets.
Provided by the American Geophysical Union, Washington, D.C.
Many soil and plant fungi produce sclerotia — tough balls of cells that are usually 0.02 inches to 0.08 inches (0.5 millimeters to 2 millimeters) in size — as a way to survive periods of harsh conditions. Their shape can vary from spherical to elongated, and their internal structures, which can take on a spongy or honeycomb pattern, matches the descriptions given by Dryas- impact event proponents.
American Geophysical Union
June 21, 2010
New findings challenge a theory that a meteor explosion or impact thousands of years ago caused catastrophic fires over much of North America and Europe and triggered an abrupt global cooling period called the Younger Dryas. Whereas proponents of the theory have offered "carbonaceous spherules" and nanodiamonds, both of which they claimed were formed by intense heat, as evidence of the impact, a new study concludes that those supposed clues are nothing more than fossilized balls of fungus, charcoal, and fecal pellets. Moreover, these naturally occurring organic materials, some of which had likely been subjected to normal cycles of wildfires, date from a period thousands of years both before and after the time that the Younger Dryas period began, further suggesting that there was no sudden impact event.
“People get very excited about the idea of a major impact causing a catastrophic fire and the abrupt climate change in that period, but there just isn’t the evidence to support it,” said Andrew C. Scott of the Department of Earth Sciences at Royal Holloway, University of London.
The Younger Dryas impact event theory holds that a large meteor struck Earth or exploded in the atmosphere about 12,900 years ago, causing a vast fire over most of North America, which contributed to extinctions of most of the large animals on the continent and triggered a thousand-year-long cold period. While there is previous evidence for the abrupt onset of a cooling period at that time, other researchers have theorized that the climatic change resulted from increased freshwater in the ocean, changes in ocean and atmospheric circulation patterns, or other causes unrelated to impacts.
The impact-theory proponents point to a charred layer of sediment filled with organic material that they say is unique to that period as evidence of such an event. These researchers describe carbon spheres, carbon cylinders, and charcoal pieces that they conclude are melted and charred organic matter created in the intense heat of a widespread fire.
Scott and his fellow researchers analyzed sediment samples to determine the origins of the carbonaceous particles. After comparing the fossil particles with modern fungal ones exposed to low to moderate heat (less than 932°Fahrenheit [500° Celsius]), Scott’s group concludes that the particles are actually balls of fungal material and other ordinary organic particles, such as fecal pellets from insects, plant or fungal galls, and wood, some of which may have been exposed to regularly occurring low-intensity wildfires.
The researchers used microscopic analysis of particles from the Pleistocene-Holocene sediments collected from the California Channel Islands and compared them with modern soil samples that had been subjected to wildfires as well as balls of stringy fungal material, called sclerotia, some of which were also subjected to a range of temperatures in a laboratory. Many soil and plant fungi produce sclerotia — tough balls of cells that are usually 0.02 inch to 0.08 inch (0.5 millimeter to 2 millimeters) in size — as a way to survive periods of harsh conditions. Their shape can vary from spherical to elongated, and their internal structures, which can take on a spongy or honeycomb pattern, matches the descriptions given by Dryas-impact event proponents.
Further, the group studied the amount of light reflected by the fossil spherules and wood charcoal from the sediment layers that included the Dryas period. The researchers used the reflectance of the organic material to determine the amount of heat to which it had been subjected. They found that the fossilized matter was unlikely to have been exposed to temperatures above 842° Fahrenheit (450° Celsius). Radiocarbon dating also showed that the particles taken from several layers ranged in age from 16,821 to 11,467 years ago. Proponents of the impact theory had reported that the spherules they found in the Younger Dryas sediment layer dated to a very narrow time period of 12,900 to 13,000 years ago.
“There is a long history of fire in the fossil record, and these fungal samples are common everywhere from ancient times to the present,” Scott said. “These data support our conclusion that there wasn’t one single intense fire that triggered the onset of the cold period.”
|
Fossil evidence casts doubt on Younger Dryas impact theory
Whereas proponents of the theory have offered "carbonaceous spherules" and nanodiamonds, both of which they claimed were formed by intense heat as evidence of the impact, a new study concludes that those supposed clues are nothing more than fossilized balls of fungus, charcoal, and fecal pellets. Provided by the American Geophysical Union, Washington, D.C.
Many soil and plant fungi produce sclerotia — tough balls of cells that are usually 0.02 inches to 0.08 inches (0.5 millimeters to 2 millimeters) in size — as a way to survive periods of harsh conditions. Their shape can vary from spherical to elongated, and their internal structures, which can take on a spongy or honeycomb pattern, matches the descriptions given by Dryas- impact event proponents.
American Geophysical Union
June 21, 2010 New findings challenge a theory that a meteor explosion or impact thousands of years ago caused catastrophic fires over much of North America and Europe, and it triggered an abrupt global cooling period called the Younger Dryas. Whereas proponents of the theory have offered “carbonaceous spherules” and nanodiamonds, both of which they claimed were formed by intense heat as evidence of the impact, a new study concludes that those supposed clues are nothing more than fossilized balls of fungus, charcoal, and fecal pellets. Moreover, these naturally occurring organic materials, some of which had likely been subjected to normal cycles of wildfires, date from a period thousands of years both before and after the time that the Younger Dryas period began, further suggesting that there was no sudden impact event.
“People get very excited about the idea of a major impact causing a catastrophic fire and the abrupt climate change in that period, but there just isn’t the evidence to support it,” said Andrew C. Scott of the Department of Earth Sciences at Royal Holloway, University of London.
The Younger Dryas impact event theory holds that a large meteor struck Earth or exploded in the atmosphere about 12,
|
no
|
Meteoritics
|
Did a meteor impact cause the Younger Dryas period?
|
yes_statement
|
a "meteor" "impact" "caused" the younger dryas "period".. the younger dryas "period" was "caused" by a "meteor" "impact".
|
https://www.space.com/8824-debate-heats-meteor-role-ice-age.html
|
Debate Heats Up Over Meteor's Role in Ice Age | Space
|
Debate Heats Up Over Meteor's Role in Ice Age
New evidence taken from fossils like this one suggests that the Younger Dryas era (known as the Big Freeze) 12,900 years ago was not caused by a meteor, scientists now suggest.(Image credit: AGU)
Some scientists have thought that the Earth's Ice Age conditions 12,900 years ago were triggered by a meteor or comet. But a recent study suggests that the evidence pointing to the ancient impact is nothing more than fungus and other matter.
According to the impact theory, the event could have caused the extinction of North American mammoths and other species, and killed the early human hunters that occupied North America at the time. Yet the new study concludes that sediment samples taken as evidence of the impact are nothing more than common fossilized balls of fungus and fecal matter - not exactly signs of a space rock crashing into Earth.
Further, the samples -- spherules of carbon used by impact proponents to justify a meteor -- appear thousands of years before and after the Ice Age in sediment records, suggesting they have nothing to do with the impact, scientists said in a statement.
"People get very excited about the idea of a major impact causing a catastrophic fire and the abrupt climate change in that period, but there just isn't the evidence to support it," said lead researcher Andrew C. Scott at the University of London in the UK.
Still, proponents of an impact theory are not backing down.
According to theory, a comet impact or airburst in the atmosphere produced an enormous fire that raged from California to Europe. Melting volumes of ice in the North American ice sheet, the fire sent cold water surging into the world's oceans and knocked off balance the circulation of currents responsible for global heat transport.
Known as the Younger Dryas period or "Big Freeze," the 1,300 years of glacial conditions that followed is well documented in ocean cores and ancient soil samples.
Organic matter normal, not melted
Collected from the same locations in California and Arizona used by proponents of the meteor theory, sediment cores dating back to the inception of the cooling era were compared to samples of modern soil that had been subjected to wildfires. They were also largely identified as compact balls and tendrils of fungal matter known as sclerotia, which are produced by fungi naturally during challenging conditions — hardly unique byproducts of an impact-ignited fire.
Neither the charcoal nor the fossilized balls had been exposed to heat above 450 degrees Celsius (842 F), researchers said in a statement. Further, radiocarbon dating of the spherules, which were sampled from many layers of the sediment cores, found that their ages ranged from 16,821 to 11,467 years old: too wide a berth to count as meaningful trigger for the Younger Dryas period.
Experimental charring tests have shown that this organic matter was subjected to, at most, regular low intensity fire, researcher Nicholas Pinter of Southern Illinois University told SPACE.com. Also, such globules would have been destroyed in any mega fire described by impact proponents, Pinter added.
"After the carbon spherules, only one credible piece of supporting evidence remains — the so-called nanodiamonds purportedly found in 12,900-year-old deposits," Pinter said. "Impact proponents are putting all of their remaining eggs in the nanodiamond basket."
Debate not over
Nanodiamonds are micron-scale fragments of diamond thought to have come to Earth by comets or meteors or formed in the extreme pressure that radiates from their impact.
Considered by the Younger Dryas impact proponents as a strong indication of an extraterrestrial event, very small diamonds are found "in the millions to billions" in any sample at the beginning of the Younger Dryas period, as opposed to very small amounts in the background of other periods, geologist James Kennett at the University of California told SPACE.com.
An active member of the school of thought that supports the impact theory, Kennett contested Pinter's claim that carbon spherules are illegitimate evidence.
"He's saying that all carbon spherules are fungal sclerotia, a type of spore that fungus produces," Kennett said. "Our argument for that is that he has not made a compelling case for that at all. There's a whole range of carbon spherules. We can produce these through biomass burning."
A whole family of carbon-rich spherules exist that range beyond sclerotia and show a "striking peak" in sediment layers also rich in nanodiamonds, he said.
"There's no other way of explaining the presence of these diamonds except through extraterrestrial impact," said Kennett. "But there's not only diamonds, there's a range of spherules, a family of spherules that correspond with the nanodiamonds."
The debate may heat up when both sides, including Pinter and Kennett, participate in a public debate at the University of Wyoming Aug. 14.
|
Debate Heats Up Over Meteor's Role in Ice Age
New evidence taken from fossils like this one suggests that the Younger Dryas era (known as the Big Freeze) 12,900 years ago was not caused by a meteor, scientists now suggest.(Image credit: AGU)
Some scientists have thought that the Earth's Ice Age conditions 12,900 years ago were triggered by a meteor or comet. But a recent study suggests that the evidence pointing to the ancient impact is nothing more than fungus and other matter.
According to the impact theory, the event could have caused the extinction of North American mammoths and other species, and killed the early human hunters that occupied North America at the time. Yet the new study concludes that sediment samples taken as evidence of the impact are nothing more than common fossilized balls of fungus and fecal matter - not exactly signs of a space rock crashing into Earth.
Further, the samples -- spherules of carbon used by impact proponents to justify a meteor -- appear thousands of years before and after the Ice Age in sediment records, suggesting they have nothing to do with the impact, scientists said in a statement.
"People get very excited about the idea of a major impact causing a catastrophic fire and the abrupt climate change in that period, but there just isn't the evidence to support it," said lead researcher Andrew C. Scott at the University of London in the UK.
Still, proponents of an impact theory are not backing down.
According to theory, a comet impact or airburst in the atmosphere produced an enormous fire that raged from California to Europe. Melting volumes of ice in the North American ice sheet, the fire sent cold water surging into the world's oceans and knocked off balance the circulation of currents responsible for global heat transport.
Known as the Younger Dryas period or "Big Freeze," the 1,300 years of glacial conditions that followed is well documented in ocean cores and ancient soil samples.
|
no
|
Meteoritics
|
Did a meteor impact cause the Younger Dryas period?
|
yes_statement
|
a "meteor" "impact" "caused" the younger dryas "period".. the younger dryas "period" was "caused" by a "meteor" "impact".
|
https://news.agu.org/press-release/fossil-evidence-casts-doubt-on-younger-dryas-impact-theory/
|
Fossil evidence casts doubt on Younger Dryas impact theory - AGU ...
|
Fossil evidence casts doubt on Younger Dryas impact theory
WASHINGTON—New findings challenge a theory that a meteor explosion or impact thousands of years ago caused catastrophic fires over much of North America and Europe and triggered an abrupt global cooling period, called the Younger Dryas. Whereas proponents of the theory have offered “carbonaceous spherules” and nanodiamonds—both of which they claimed were formed by intense heat—as evidence of the impact, a new study concludes that those supposed clues are nothing more than fossilized balls of fungus, charcoal, and fecal pellets. Moreover, these naturally-occurring organic materials, some of which had likely been subjected to normal cycles of wildfires, date from a period of thousands of years both before and after the time that the Younger Dryas period began—further suggesting that there was no sudden impact event.
“People get very excited about the idea of a major impact causing a catastrophic fire and the abrupt climate change in that period, but there just isn’t the evidence to support it,” says Andrew C. Scott of the Department of Earth Sciences at Royal Holloway, University of London, who led the research.
The findings by Scott and his colleagues have been accepted for publication in Geophysical Research Letters, a journal of the American Geophysical Union (AGU). The research team included scientists from England, Switzerland, and the United States.
The Younger Dryas impact event theory holds that a very large meteor struck Earth or exploded in the atmosphere about 12,900 years ago, causing a vast fire over most of North America, which contributed to extinctions of most of the large animals on the continent and triggered a thousand-year-long cold period. While there is much previous evidence for the abrupt onset of a cooling period at that time, other researchers have theorized that the climatic change resulted from increased freshwater in the ocean, changes in ocean and atmospheric circulation patterns, or other causes unrelated to impacts.
The impact-theory proponents point to a charred layer of sediment filled with organic material that they say is unique to that period as evidence of such an event. These researchers described carbon spheres, carbon cylinders, and charcoal pieces that they conclude are melted and charred organic matter created in the intense heat of a widespread fire.
Scott and his fellow researchers analyzed sediment samples to determine the origins of the carbonaceous particles. After comparing the fossil particles with modern fungal ones exposed to low to moderate heat (less than 500 degrees Celsius, or 932 degrees Fahrenheit), Scott’s group concludes that the particles are actually balls of fungal material and other ordinary organic particles, such as fecal pellets from insects, plant or fungal galls, and wood, some of which may have been exposed to regularly-occurring low-intensity wildfires.
The researchers used microscopic analysis of particles from the Pleistocene-Holocene sediments collected from the California Channel Islands and compared them with modern soil samples that had been subjected to wildfires, as well as balls of stringy fungal material, called sclerotia, some of which were also subjected to a range of temperatures in a laboratory. Many soil and plant fungi produce sclerotia—tough balls of cells that are usually 0.5 millimeters to 2 millimeters in size (0.02 inches to 0.08 inches)—as a way to survive periods of harsh conditions. Their shape can vary from spherical to elongated, and their internal structures, which can take on a spongy or honeycomb pattern, match the descriptions given by Dryas impact event proponents.
Further, the group studied the amount of light reflected by the fossil spherules and wood charcoal from the sediment layers that included the Dryas period. The researchers used the reflectance of the organic material to determine the amount of heat to which it had been subjected. They found that the fossilized matter was unlikely to have been exposed to temperatures above 450 degrees Celsius (842 degrees Fahrenheit). Radiocarbon dating also showed that the particles, taken from several layers, ranged in age from 16,821 to 11,467 years ago. Proponents of the impact theory had reported that the spherules they found in the Younger Dryas sediment layer dated to a very narrow time period of 12,900 to 13,000 years before present.
“There is a long history of fire in the fossil record, and these fungal samples are common everywhere, from ancient times to the present,” Scott says. “These data support our conclusion that there wasn’t one single intense fire that triggered the onset of the cold period.”
Funding for this research was provided by the National Geographic Society, the National Science Foundation, the Royal Society of London, the Royal Holloway strategy fund, the Natural Environmental Research Council, and the Integrated Infrastructure Initiative on Synchrotrons and Free Electron Lasers.
Title
“Fungus, not comet or catastrophe, accounts for carbonaceous spherules in the Younger Dryas 'impact layer'”
|
Fossil evidence casts doubt on Younger Dryas impact theory
WASHINGTON—New findings challenge a theory that a meteor explosion or impact thousands of years ago caused catastrophic fires over much of North America and Europe and triggered an abrupt global cooling period, called the Younger Dryas. Whereas proponents of the theory have offered “carbonaceous spherules” and nanodiamonds—both of which they claimed were formed by intense heat—as evidence of the impact, a new study concludes that those supposed clues are nothing more than fossilized balls of fungus, charcoal, and fecal pellets. Moreover, these naturally-occurring organic materials, some of which had likely been subjected to normal cycles of wildfires, date from a period of thousands of years both before and after the time that the Younger Dryas period began—further suggesting that there was no sudden impact event.
“People get very excited about the idea of a major impact causing a catastrophic fire and the abrupt climate change in that period, but there just isn’t the evidence to support it,” says Andrew C. Scott of the Department of Earth Sciences at Royal Holloway, University of London, who led the research.
The findings by Scott and his colleagues have been accepted for publication in Geophysical Research Letters, a journal of the American Geophysical Union (AGU). The research team included scientists from England, Switzerland, and the United States.
The Younger Dryas impact event theory holds that a very large meteor struck Earth or exploded in the atmosphere about 12,900 years ago, causing a vast fire over most of North America, which contributed to extinctions of most of the large animals on the continent and triggered a thousand-year-long cold period. While there is much previous evidence for the abrupt onset of a cooling period at that time, other researchers have theorized that the climatic change resulted from increased freshwater in the ocean, changes in ocean and atmospheric circulation patterns, or other causes unrelated to impacts.
The impact-theory proponents point to a charred layer of sediment filled with organic material that they say is unique to that period as evidence of such an event.
|
no
|
Meteoritics
|
Did a meteor impact cause the Younger Dryas period?
|
no_statement
|
a "meteor" "impact" did not "cause" the younger dryas "period".. the younger dryas "period" was not "caused" by a "meteor" "impact".
|
https://today.tamu.edu/2020/07/31/texas-am-study-cooling-of-earth-caused-by-eruptions-not-meteors/
|
Texas A&M Study: Cooling Of Earth Caused By Eruptions, Not Meteors
|
Ancient sediment found in a central Texas cave appears to solve the mystery of why the Earth cooled suddenly about 13,000 years ago, according to a research study co-authored by a Texas A&M University professor.
Some researchers believed the event – which cooled the Earth by about 3 degrees Centigrade, a huge amount – was caused by an extraterrestrial impact with the Earth, such as a meteor collision.
But Michael Waters, the Texas A&M professor, and the team found that the evidence left in layers of sediment in Hall's Cave was almost certainly the result of volcanic eruptions.
Waters said that Hall’s Cave, located in the Texas hill country, has a sediment record extending over 20,000 years and he first began researching the cave in 2017.
“It is an exceptional record that offers a unique opportunity for interdisciplinary cooperation to investigate a number of important research questions,” he said.
“One big question was, did an extraterrestrial impact occur near the end of the last ice age, about 13,000 years ago as the ice sheets covering Canada were melting, and cause an abrupt cooling that thrust the northern hemisphere back into the ice age for an extra 1,200 years?”
Waters and the team found that within the cave are layers of sediment, first identified by Thomas Stafford (Stafford Research Laboratories, Colorado), dating to the time of the proposed impact, layers that could answer the question and perhaps even identify the trigger that started the ancient cold snap.
The cooling event also likely helped cause the extinction of large mammals such as the mammoth, horse and camel that once roamed North America.
“This work shows that the geochemical signature associated with the cooling event is not unique but occurred four times between 9,000 and 15,000 years ago,” said Alan Brandon, professor of geosciences at University of Houston and head of the research team.
“Thus, the trigger for this cooling event didn’t come from space. Prior geochemical evidence for a large meteor exploding in the atmosphere instead reflects a period of major volcanic eruptions.
“I was skeptical,” Brandon said. “We took every avenue we could to come up with an alternative explanation, or even avoid, this conclusion. A volcanic eruption had been considered one possible explanation but was generally dismissed because there was no associated geochemical fingerprint.”
After a volcano erupts, the global spread of aerosols reflects incoming solar radiation away from Earth and may lead to global cooling post eruption for one to five years, depending on the size and timescales of the eruption, the team said.
“The Younger Dryas, which occurred about 13,000 years ago, disrupted distinct warming at the end of the last ice age,” said co-author Steven Forman, professor of geosciences at Baylor.
The Earth’s climate may have been at a tipping point at the end of the Younger Dryas, possibly from the ice sheet discharge into the North Atlantic Ocean, enhanced snow cover and powerful volcanic eruptions that may have in combination led to intense Northern Hemisphere cooling, Forman said.
“This period of rapid cooling coincides with the extinction of a number of species, including camels and horses, and the appearance of the Clovis archaeological tradition,” said Waters.
Brandon and fellow University of Houston scientist Nan Sun completed the isotopic analysis of sediments collected from Hall’s Cave. They found that elements such as iridium, ruthenium, platinum, palladium and rhenium were not present in the correct proportions, meaning that a meteor or asteroid could not have caused the event.
“The isotope analysis and the relative proportion of the elements matched those that were found in previous volcanic gases,” said Sun, lead author of the report.
Volcanic eruptions cause their most severe cooling near the source, usually in the year of the eruption, with substantially less cooling in the years after the eruption, the team said.
The Younger Dryas cooling lasted about 1,200 years, “so a sole volcanic eruptive cause is an important initiating factor, but other Earth system changes, such as cooling of the oceans and more snow cover, were needed to sustain this colder period,” Forman said.
Waters added that the bottom line is that “the chemical anomalies found in sediments dating to the beginning of the Younger Dryas are the result of volcanism and not an extraterrestrial impact.”
|
A volcanic eruption had been considered one possible explanation but was generally dismissed because there was no associated geochemical fingerprint.”
After a volcano erupts, the global spread of aerosols reflects incoming solar radiation away from Earth and may lead to global cooling post eruption for one to five years, depending on the size and timescales of the eruption, the team said.
“The Younger Dryas, which occurred about 13,000 years ago, disrupted distinct warming at the end of the last ice age,” said co-author Steven Forman, professor of geosciences at Baylor.
The Earth’s climate may have been at a tipping point at the end of the Younger Dryas, possibly from the ice sheet discharge into the North Atlantic Ocean, enhanced snow cover and powerful volcanic eruptions that may have in combination led to intense Northern Hemisphere cooling, Forman said.
“This period of rapid cooling coincides with the extinction of a number of species, including camels and horses, and the appearance of the Clovis archaeological tradition,” said Waters.
Brandon and fellow University of Houston scientist Nan Sun completed the isotopic analysis of sediments collected from Hall’s Cave. They found that elements such as iridium, ruthenium, platinum, palladium and rhenium were not present in the correct proportions, meaning that a meteor or asteroid could not have caused the event.
“The isotope analysis and the relative proportion of the elements matched those that were found in previous volcanic gases,” said Sun, lead author of the report.
Volcanic eruptions cause their most severe cooling near the source, usually in the year of the eruption, with substantially less cooling in the years after the eruption, the team said.
The Younger Dryas cooling lasted about 1,200 years, “so a sole volcanic eruptive cause is an important initiating factor, but other Earth system changes, such as cooling of the oceans and more snow cover, were needed to sustain this colder period,” Forman said.
|
no
|
Meteoritics
|
Did a meteor impact cause the Younger Dryas period?
|
no_statement
|
a "meteor" "impact" did not "cause" the younger dryas "period".. the younger dryas "period" was not "caused" by a "meteor" "impact".
|
https://www.nhm.ac.uk/discover/news/2022/march/greenland-asteroid-struck-world-recovering-from-dinosaur-extinction.html
|
Greenland asteroid struck world recovering from dinosaur extinction ...
|
Greenland asteroid struck world recovering from dinosaur extinction
An asteroid strike in what is now Greenland has been dated to 58 million years ago - just eight million years after the one that caused the extinction of the dinosaurs.
While the impacts of this more recent asteroid are uncertain, it could have caused the world to warm significantly in the aftermath of the fifth mass extinction.
A suspect has been ruled out of a historical whodunnit of what caused the world to suddenly become 10°C colder 13,000 years ago.
A team of researchers has found it couldn't have been an asteroid impact in Greenland. Since its discovery in 2015, the Hiawatha crater had been considered a possible explanation for why this period of Earth's history, known as the Younger Dryas, was so cold.
However, the scientists investigating the crater, which is large enough to contain the UK city of Birmingham, found that it is much older than first thought, having formed over 58 million years ago.
Co-author Professor Michael Storey says, 'Dating the crater has been a particularly tough nut to crack, so it's very satisfying that two laboratories in Denmark and Sweden, using different dating methods, arrived at the same conclusion.
'As such, I'm convinced that we've determined the crater's actual age, which is much older than many people once thought.'
Its recalculated age makes it possible that the asteroid influenced the Earth's ecosystems when they were still recovering from the impact of the even larger asteroid that hit eight million years before.
The findings of the study, conducted by an international team of scientists, were published in Science Advances.
What are asteroids?
Asteroids are large rocky bodies that are orbiting the Sun, made up of leftovers from the formation of the Solar System. Most known asteroids are found inside the asteroid belt, located between Mars and Jupiter.
They can form in a variety of ways, such as accreting from small particles floating in space or being the debris left over from collisions. These collisions can sometimes knock asteroids out of the asteroid belt and put them on their own orbital path.
When one of these enters a planet's atmosphere, it becomes known as a meteor. If it survives the trip through the atmosphere, it is then described as a meteorite. Any planet can be struck by asteroids, and Earth is no exception.
The largest crater left behind by a meteorite is the Vredefort crater in South Africa. Measuring up to 300 kilometres across when it was first formed, it is believed to be the result of a 10-kilometre-wide meteorite hitting Earth two billion years ago.
Other large impact craters can be found in Canada's Sudbury basin, Australia's Acraman crater and Russia's Kara crater. However, perhaps the most famous is the Chicxulub crater in Mexico, which is believed to be the remnant of the asteroid strike which wiped out the dinosaurs.
Following the shockwave and massive energy release, surviving organisms faced large amounts of debris in the atmosphere which blocked out a significant amount of sunlight. This would have significantly cooled the planet and helped drive many surviving species extinct.
As a result, asteroids have long been considered a potential cause of the sharp decline in temperature in the Younger Dryas. The discovery of the Hiawatha crater, which sits within the biggest 10% of impact strikes on Earth, seemed to be a likely candidate.
However, sampling of the site has now shown that Hiawatha was not to blame, at least not for the Younger Dryas.
What did the Hiawatha asteroid do?
The crater lies deep under the Greenland ice sheet, making it inaccessible. Scientists instead took sand and melt rock samples formed by the erosion of the crater by a glacier to chemically assess its age.
Two separate methods of dating, one using uranium and lead and the other using argon, were used. They both suggest that the crater formed around 58 million years ago, substantially pushing its age back.
This means that when it hit what is now Greenland its ice sheet didn't exist, and wouldn't for at least another 55 million years. Instead, the island would have been covered in large coniferous forests, with average temperatures of around 20°C.
On striking this area, the Hiawatha asteroid created a crater 31 kilometres wide and a kilometre deep. It would have been devastating to the area it struck, releasing the equivalent energy of around seven million atomic bombs.
Whether it had any effect on the wider world, however, remains uncertain. The date of the meteor strike coincides with the Paleocene Carbon Isotope Maximum, after which the world began to warm up as carbon was released from reservoirs such as bogs and the ocean into the atmosphere.
However, no layer of debris from the impact has yet been discovered which might confirm this. This is something the researchers hope to investigate going forward.
Lead author Dr Gavin Kenny says, 'Determining the new age of the crater surprised us all. In the future, it will help us investigate the impact's possible effect on climate during an important epoch of Earth's history.'
|
Following the shockwave and massive energy release, surviving organisms faced large amounts of debris in the atmosphere which blocked out a significant amount of sunlight. This would have significantly cooled the planet and helped drive many surviving species extinct.
As a result, asteroids have long been considered a potential cause of the sharp decline in temperature in the Younger Dryas. The discovery of the Hiawatha crater, which sits within the biggest 10% of impact strikes on Earth, seemed to be a likely candidate.
However, sampling of the site has now shown that Hiawatha was not to blame, at least not for the Younger Dryas.
What did the Hiawatha asteroid do?
The crater lies deep under the Greenland ice sheet, making it inaccessible. Scientists instead took sand and melt rock samples formed by the erosion of the crater by a glacier to chemically assess its age.
Two separate methods of dating, one using uranium and lead and the other using argon, were used. They both suggest that the crater formed around 58 million years ago, substantially pushing its age back.
This means that when it hit what is now Greenland its ice sheet didn't exist, and wouldn't for at least another 55 million years. Instead, the island would have been covered in large coniferous forests, with average temperatures of around 20°C.
On striking this area, the Hiawatha asteroid created a crater 31 kilometres wide and a kilometre deep. It would have been devastating to the area it struck, releasing the equivalent energy of around seven million atomic bombs.
Whether it had any effect on the wider world, however, remains uncertain. The date of the meteor strike coincides with the Paleocene Carbon Isotope Maximum, after which the world began to warm up as carbon was released from reservoirs such as bogs and the ocean into the atmosphere.
However, no layer of debris from the impact has yet been discovered which might confirm this. This is something the researchers hope to investigate going forward.
|
no
|
Meteoritics
|
Did a meteor impact cause the Younger Dryas period?
|
no_statement
|
a "meteor" "impact" did not "cause" the younger dryas "period".. the younger dryas "period" was not "caused" by a "meteor" "impact".
|
https://phys.org/news/2013-09-prehistoric-climate-due-cosmic-canada.html
|
Prehistoric climate change due to cosmic crash in Canada: Team ...
|
Prehistoric climate change due to cosmic crash in Canada: Team reveals cause of global climate shift 12,900 years ago
An artist's rendition of mastodons, camels and a ground sloth before the environmental changes of the Younger Dryas led to their extinction. Credit: Barry Roal Carlsen, University of Wisconsin
For the first time, a dramatic global climate shift has been linked to the impact in Quebec of an asteroid or comet, Dartmouth researchers and their colleagues report in a new study. The cataclysmic event wiped out many of the planet's large mammals and may have prompted humans to start gathering and growing some of their food rather than solely hunting big game.
The findings appear next week in the online Early Edition of the Proceedings of the National Academy of Sciences.
The impact occurred about 12,900 years ago, at the beginning of the Younger Dryas period, and marks an abrupt global change to a colder, dryer climate with far-reaching effects on both animals and humans. In North America, the big animals all vanished, including mastodons, camels, giant ground sloths and saber-toothed cats. Their human hunters, known to archaeologists as the Clovis people, set aside their heavy-duty spears and turned to a hunter-gatherer subsistence diet of roots, berries and smaller game.
"The Younger Dryas cooling impacted human history in a profound manner," says Dartmouth Professor Mukul Sharma, a co-author of the study. "Environmental stresses may also have caused Natufians in the Near East to settle down for the first time and pursue agriculture."
The high temperatures of the meteorite impact 12,900 years ago produced mm-sized spherules of melted glass with the mullite and corundum crystal structure shown here. Credit: Mukul Sharma
It is not disputed that these powerful environmental changes occurred, but there has long been controversy over their cause. The classic view of the Younger Dryas cooling interlude has been that an ice dam in the North American ice sheet ruptured, releasing a massive quantity of freshwater into the Atlantic Ocean. The sudden influx is thought to have shut down the ocean currents that move tropical water northward, resulting in the cold, dry climate of the Younger Dryas.
But Sharma and his co-authors have discovered conclusive evidence linking an extraterrestrial impact with this environmental transformation. The report focuses on spherules, or droplets of solidified molten rock expelled by the impact of a comet or meteor. The spherules in question were recovered from Younger Dryas boundary layers at sites in Pennsylvania and New Jersey, the layers having been deposited at the beginning of the period. The geochemistry and mineralogy profiles of the spherules are identical to rock found in southern Quebec, where Sharma and his colleagues argue the impact took place.
"We have for the first time narrowed down the region where a Younger Dryas impact did take place," says Sharma, "even though we have not yet found its crater." There is a known impact crater in Quebec—the 4-kilometer wide Corossal crater—but based on the team's mineralogical and geochemical studies, it is not the impact source for the material found in Pennsylvania and New Jersey.
Using a binocular microscope, Dartmouth geochemist Mukul Sharma examines impact-derived spherules that he and his colleagues regard as evidence of a climate-altering meteor or comet impact 12,900 years ago. Credit: Eli Burakian
People have written about many impacts in different parts of the world based on the presence of spherules. "It may well have taken multiple concurrent impacts to bring about the extensive environmental changes of the Younger Dryas," says Sharma. "However, to date no impact craters have been found and our research will help track one of them down."
|
Their human hunters, known to archaeologists as the Clovis people, set aside their heavy-duty spears and turned to a hunter-gatherer subsistence diet of roots, berries and smaller game.
"The Younger Dryas cooling impacted human history in a profound manner," says Dartmouth Professor Mukul Sharma, a co-author of the study. "Environmental stresses may also have caused Natufians in the Near East to settle down for the first time and pursue agriculture. "
The high temperatures of the meteorite impact 12,900 years ago produced mm-sized spherules of melted glass with the mullite and corundum crystal structure shown here. Credit: Mukul Sharma
It is not disputed that these powerful environmental changes occurred, but there has long been controversy over their cause. The classic view of the Younger Dryas cooling interlude has been that an ice dam in the North American ice sheet ruptured, releasing a massive quantity of freshwater into the Atlantic Ocean. The sudden influx is thought to have shut down the ocean currents that move tropical water northward, resulting in the cold, dry climate of the Younger Dryas.
But Sharma and his co-authors have discovered conclusive evidence linking an extraterrestrial impact with this environmental transformation. The report focuses on spherules, or droplets of solidified molten rock expelled by the impact of a comet or meteor. The spherules in question were recovered from Younger Dryas boundary layers at sites in Pennsylvania and New Jersey, the layers having been deposited at the beginning of the period. The geochemistry and mineralogy profiles of the spherules are identical to rock found in southern Quebec, where Sharma and his colleagues argue the impact took place.
"We have for the first time narrowed down the region where a Younger Dryas impact did take place," says Sharma, "even though we have not yet found its crater."
|
yes
|
Meteoritics
|
Did a meteor impact cause the Younger Dryas period?
|
no_statement
|
a "meteor" "impact" did not "cause" the younger dryas "period".. the younger dryas "period" was not "caused" by a "meteor" "impact".
|
https://news.web.baylor.edu/news/story/2020/texas-cave-sediment-upends-meteorite-explanation-global-cooling
|
Texas Cave Sediment Upends Meteorite Explanation for Global ...
|
WACO, Texas (July 31, 2020) – Texas researchers from the University of Houston, Baylor University and Texas A&M University have discovered evidence for why the earth cooled dramatically 13,000 years ago, dropping temperatures by about 3 degrees Centigrade.
The resolution to this case of mistaken identity was recently reported in the journal Science Advances.
“This work shows that the geochemical signature associated with the cooling event is not unique but occurred four times between 9,000 and 15,000 years ago,” said Alan Brandon, Ph.D., professor of geosciences at University of Houston. “Thus, the trigger for this cooling event didn’t come from space. Prior geochemical evidence for a large meteor exploding in the atmosphere instead reflects a period of major volcanic eruptions.”
After a volcano erupts, the global spread of aerosols reflects incoming solar radiation away from Earth and may lead to global cooling post eruption for one to five years, depending on the size and timescales of the eruption.
The study indicates that the episode of cooling, scientifically known as the Younger Dryas, was caused by numerous coincident Earth-based processes, not an extraterrestrial impact.
“The Younger Dryas, which occurred about 13,000 years ago, disrupted distinct warming at the end of the last ice age,” said co-author Steven Forman, Ph.D., professor of geosciences at Baylor University.
The Earth’s climate may have been at a tipping point at the Younger Dryas, possibly from the ice sheet discharge into the North Atlantic Ocean, enhanced snow cover and powerful volcanic eruptions that may have in combination led to intense Northern Hemisphere cooling, Forman said.
“This period of rapid cooling is associated with the extinction of a number of species, including mammoths and mastodons, and coincides with the appearance of early human occupants of the Clovis tradition,” said co-author Michael Waters, Ph.D., director of the Center for the First Americans at Texas A&M University.
University of Houston scientists Brandon and doctoral candidate Nan Sun, lead author, accomplished the isotopic analysis of sediments collected from Hall’s Cave in the Texas Hill Country. The analysis focused on difficult measurements at the parts per trillion on osmium and levels of highly siderophile elements, which include rare elements like iridium, ruthenium, platinum, palladium and rhenium. The researchers determined the elements in the Texas sediments were not present in the correct relative proportions to have been added by a meteor or asteroid that impacted Earth.
That meant the cooling could not have been caused by an extraterrestrial impact. It had to have been something happening on Earth. But what?
“The signature from the osmium isotope analysis and the relative proportion of the elements matched that previously reported in volcanic gases,” Sun said.
Kenneth Befus, Ph.D., volcanologist at Baylor University, added that “these signatures were likely the result of major eruptions across the Northern Hemisphere, including volcanoes in the Aleutians, Cascades and even Europe.”
“I was skeptical. We took every avenue we could to come up with an alternative explanation, or even avoid, this conclusion,” Brandon said. “A volcanic eruption had been considered one possible explanation but was generally dismissed because there was no associated geochemical fingerprint.”
A volcanic cause for the Younger Dryas is a new, exciting idea, he said. Whether a single major eruption of a volcano could drive the cooling observed, however, is still an open question, the researchers said.
Volcanic eruptions cause their most severe cooling near the source, usually in the year of the eruption, with substantially less cooling in the years after the eruption. The Younger Dryas cooling lasted about 1,200 years, so a sole volcanic eruptive cause is an important initiating factor, but other Earth system changes, such as cooling of the oceans and more snow cover, were needed to sustain this colder period, Forman said.
This research underscores that extreme climate variability since the last ice age is attributed to unique Earth-bound drivers rather than extraterrestrial mechanisms. Such insights are important guidance for building better models of past and future climate change.
|
The study indicates that the episode of cooling, scientifically known as the Younger Dryas, was caused by numerous coincident Earth-based processes, not an extraterrestrial impact.
“The Younger Dryas, which occurred about 13,000 years ago, disrupted distinct warming at the end of the last ice age,” said co-author Steven Forman, Ph.D., professor of geosciences at Baylor University.
The Earth’s climate may have been at a tipping point at the Younger Dryas, possibly from the ice sheet discharge into the North Atlantic Ocean, enhanced snow cover and powerful volcanic eruptions that may have in combination led to intense Northern Hemisphere cooling, Forman said.
“This period of rapid cooling is associated with the extinction of a number of species, including mammoths and mastodons, and coincides with the appearance of early human occupants of the Clovis tradition,” said co-author Michael Waters, Ph.D., director of the Center for the First Americans at Texas A&M University.
University of Houston scientists Brandon and doctoral candidate Nan Sun, lead author, accomplished the isotopic analysis of sediments collected from Hall’s Cave in the Texas Hill Country. The analysis focused on difficult measurements at the parts per trillion on osmium and levels of highly siderophile elements, which include rare elements like iridium, ruthenium, platinum, palladium and rhenium. The researchers determined the elements in the Texas sediments were not present in the correct relative proportions to have been added by a meteor or asteroid that impacted Earth.
That meant the cooling could not have been caused by an extraterrestrial impact. It had to have been something happening on Earth. But what?
“The signature from the osmium isotope analysis and the relative proportion of the elements matched that previously reported in volcanic gases,” Sun said.
Kenneth Befus, Ph.D., volcanologist at Baylor University, added that “these signatures were likely the result of major eruptions across the Northern Hemisphere, including volcanoes in the Aleutians, Cascades and even Europe.”
“I was skeptical.
|
no
|
Archaeology
|
Did ancient Egyptians use slaves for pyramid construction?
|
yes_statement
|
"ancient" egyptians used "slaves" for "pyramid" "construction".. "slaves" were used by "ancient" egyptians for "pyramid" "construction".. the "construction" of "pyramids" in "ancient" egypt involved the "use" of "slaves".
|
https://en.wikipedia.org/wiki/Slavery_in_ancient_Egypt
|
Slavery in ancient Egypt - Wikipedia
|
Slavery in ancient Egypt existed at least since the Old Kingdom period. Discussions of slavery in Pharaonic Egypt are complicated by terminology used by the Egyptians to refer to different classes of servitude over the course of dynastic history; the classes of slaves in ancient Egypt have been difficult to differentiate from the textual evidence by word usage alone.[1] There were three types of enslavement in Ancient Egypt: chattel slavery, bonded labor, and forced labor.[2][3][4] But even these seemingly well-differentiated types of slavery are susceptible to individual interpretation. Egypt's labor culture encompassed many people of various social ranks.
The word translated as "slave" from the Egyptian language does not neatly align with modern terms or traditional labor roles. The classifications of "servant," "peasant," and "slave" could describe different roles in different contexts. Egyptian texts use the words 'bAk' and 'Hm' to mean laborer or servant, and some refer to slave-like people as 'sqrw-anx', meaning "bound for life".[5] Forms of forced labor and servitude are seen throughout all of ancient Egypt. Egyptians wanted dominion over their kingdoms and would alter political and social ideas to benefit their economic state. Slavery was not only profitable for ancient Egypt; it also made it easier to maintain the power and stability of the kingdoms.[5][6]
During the Old Kingdom Period, prisoners of war captured by Kemit's army were called skrw-'nh ("bound for life"). This was not a distinct term for "slave" but for prisoners of war, as already stated. The term hm emerged with at least two distinct usages: 1) “Laborer” and 2) “Servant”. Documented evidence exists as early as the reign of Sneferu, in the 26th century BC, of war campaigns in the territory of Nubia in which war captives, labeled skrw-'nh, and Libyans were used to perform labour regardless of their will or, if warranted, were conscripted into the military. Reliefs from this period depict captured prisoners of war with their hands tied behind their backs. Nubia was targeted because of its close geographical proximity, cultural similarity, and competitiveness in imperial dominion; the scope of campaigns intended to acquire foreign war captives later expanded to Libya and Asia. Local Kemites also entered into servitude due to an unstable economy and debts. Officials who abused their power could also be reduced to servitude.[7]
During the First Intermediate Period, slaves were first defined as men with dignity but were still treated as property. When borrowed money owed to wealthier individuals in Egyptian society could not be paid back, family members, especially women, were sold into slavery to repay the debt. During the Middle Kingdom, records show that coerced laborers included conscripts (hsbw), fugitives (tsjw), and royal laborers (hmw-nsw). The Reisner Papyrus and the El Lahun papyri depict prisoners being employed in state enterprises. Papyrus Brooklyn 35.1446 also shows forced labor being performed on arable state land. If an individual coerced into labor attempted to escape or was absent from their work, they might be condemned to coerced labor for life. One of the El-Lahun papyri describes an example of this occurring: "Order issued by the Great Prison in year 31, third month of the summer season, day 5, that he be condemned with all his family to labor for life on state land, according to the decision of the court." Military expeditions continued to reduce Asiatics to slavery, and state-owned slaves (royal laborers) shared in the same status as these Asiatic slaves. Asiatics could often have Egyptian names but sometimes inscriptions or papyri mentioning them would still apply an ethnic qualification, such as one which mentions an "Asiatic Aduna and her son Ankhu". Both Asiatics and state-owned slaves could perform a variety of jobs: "We find royal laborers employed as fieldworkers, house servants, and cobblers; female laborers as hairdressers, gardeners, and weavers." If a household servant failed to adequately perform their job, they could be dismissed from the home they worked at. In some cases, servants appear to have become emotionally important to their household as depicted on the Cairo Bowl.[7]
One of the Berlin papyri shows that by the time of the Second Intermediate Period, a slave could be owned by both an elite individual (like the king) and a community. In addition, the community had grown in power and now held the capacity to own and administer public property, including slaves, replacing some of the traditional power of the king and his private royal laborers. By this period, slaves could also sometimes become citizens. One method by which this could happen was through marriage.[7]
During the New Kingdom period, the military and its expenses grew and so additional coerced labor was needed to sustain it. As such, the "New Kingdom, with its relentless military operations, is the epoch of large-scale foreign slavery".[citation needed] Many more slaves were also acquired via the Mediterranean slave market, where Egypt was the main purchaser of international slaves. This Mediterranean market appears to have been controlled by Asiatic Bedouin who would capture individuals, such as travelers, and sell them on the market. The tomb of Ahmose I contains a biographical text which depicts several boasts regarding the capture of foreign Asiatic slaves. Egyptian servants were treated more humanely as employees, whereas foreign slaves were the objects of trade. The foreigners captured during military campaigns are, for example, referred to in the Annals of Thutmose III as "men in captivity" and individuals were referred to as "dependents" (mrj). In reward for his services in the construction of temples across Egypt, Thutmose III rewarded his official Minmose over 150 "dependents". During and after the reign of Amenhotep II, coerced temple labor was only performed by male and female slaves. At Medinet Habu, defeated Sea Peoples are recorded as having been captured as prisoners of war and reduced to slavery. During this period, slaves could sometimes be rented. One manuscript known as Papyrus Harris I records Ramses III claiming to have captured innumerable foreign slaves:
"I brought back in great numbers those that my sword has spared, with their hands tied behind their backs before my horses, and their wives and children in tens of thousands, and their livestock in hundreds of thousands. I imprisoned their leaders in fortresses bearing my name, and I added to them chief archers and tribal chiefs, branded and enslaved, tattooed with my name, their wives and children being treated in the same way."[8]
In the Adoption Papyrus, the term "slave"/"servant" is contrasted with the term "free citizen (nmhj) of the land of the pharaoh". The term nmhj traditionally refers to an orphan or a poor person. Methods by which slaves could attain their freedom included marriage or entering temple service (being 'purified'). The latter is depicted in, for example, the Restoration Stela of Tutankhamen. Ramesside Egypt saw a development in the institution of slavery where slaves could now become objects of private (rather than just public) property, and they could be bought and sold. Slaves themselves could now own some property and had a few legal protections, although these were not many.[7]
Chattel slaves were mostly captives of war and were brought to different cities and countries to be sold as slaves. All captives, including civilians not a part of the military forces, were seen as a royal resource. The pharaoh could resettle captives by moving them into colonies for labor, giving them to temples, giving them as rewards to deserving individuals, or giving them to his soldiers as loot. Some chattel slaves began as free people who were found guilty of committing illicit acts and were forced to give up their freedom. Other chattel slaves were born into slavery to a slave mother.[9]
Ancient Egyptians were able to sell themselves and their children into slavery in a form of bonded labor. Self-sale into servitude was not always a choice made of the individual's own free will, but often the result of an inability to pay off debts.[10] The creditor would wipe the debt by acquiring the individual who was in debt as a slave, along with his children and wife. The debtor would also have to give up all that was owned. Peasants were also able to sell themselves into slavery for food or shelter.[3][4]
Some slaves were bought in slave markets near the Asiatic region and then bonded as war prisoners. Not all came from foreign areas outside of Egypt, but it was common for slaves to be found and collected abroad. This trade bolstered Egypt's military status and strength. Bonded laborers dreamed of emancipation but never knew whether it was achievable. Some slaves foreign to Egypt had the possibility of returning to their homelands, but those brought from Nubia and Libya were forced to stay within the boundaries of Egypt.[11][12]
One type of slavery in ancient Egypt granted captives the promise of an afterlife. Ushabtis were funerary figures buried with deceased Egyptians. Historians have concluded these figures represent an ideology of earthly persons' loyalty and bond to a master. Evidence of ushabtis shows great relevance to a slavery-type system. The captives were promised an afterlife in the beyond if they obeyed a master and served as a laborer. The origin of this type of slavery is difficult to pinpoint, but some say the slaves were willing to be held captive in return for entrance into Egypt. Entrance into Egypt could also be perceived as having been given "life". Willingness of enslavement is known as self-sale.[6] Others suggest that shabtis were held captive because they were foreigners.[6] The full extent of the origins of shabtis is unclear, but historians do recognize that women were paid or compensated in some way for their labor, while men were not. However, payment could come in many forms. Although men did not receive monetary wages, shabtis were promised life in the netherworld and that promise could be perceived as payment for them.[12] Shabtis are thus associated with bonded labor, but historians speculate that the shabtis had some degree of choice.
In the slave market, bonded laborers were commonly sold with a 'slave yoke' or a 'taming stick' to show that the slave was troublesome.[13] This instrument of restraint and torture has many local names in Egyptian documents, but the preferred term is 'sheyba'. Other forms of restraint, such as ropes and cords, were more common in ancient Egyptian slave markets than the sheyba.
Several departments in the Ancient Egyptian government were able to draft workers from the general population to work for the state with a corvée labor system. The laborers were conscripted for projects such as military expeditions, mining and quarrying, and construction projects for the state. These conscripted laborers were paid a wage for their work, depending on their skill level and social status. Conscripted workers were not owned by individuals, like other slaves, but rather required to perform labor as a duty to the state. Conscripted labor was a form of taxation by government officials and usually happened at the local level when high officials called upon small village leaders.[9][14]
Masters in Ancient Egypt were under obligations when owning slaves. Masters were allowed to utilize the abilities of their slaves by employing them in different roles, including domestic services (cooks, maids, brewers, nannies, etc.) and labor services (gardeners, stable hands, field hands, etc.). Masters also had the right to force the slave to learn a trade or craft to make the slave more valuable. Masters were forbidden to force child slaves into harsh physical labor.[9]
Ancient Egypt was a peasant-based economy and it was not until the Greco-Roman period that slavery had a greater impact. Slave dealing in Ancient Egypt was done through private dealers and not through a public market. The transaction had to be performed before a local council or officials with a document containing clauses that were used in other valuable sales. However, pharaohs were able to bypass this and had the power to give slaves to anyone they saw fit, usually a vizier or noble.[9][14]
Many slaves who worked for temple estates lived under punitive conditions, but on average the Ancient Egyptian slave led a life similar to that of a serf. They were capable of negotiating transactions and owning personal property. Chattel and debt slaves were given food but probably not given wages.
Egyptian slaves, specifically during the New Kingdom era, originated from foreign lands. The slaves themselves were seen as an accomplishment of the Egyptian kings' reign, and a sign of power. Slaves or bak were seen as property or a commodity to be bought and sold. Their human qualities were disregarded, and they were seen merely as property to be used for a master's labor. Unlike the more modern term, "serf", Egyptian slaves were not tied to the land; the owner(s) could use the slave for various occupational purposes. The slaves could serve towards the productivity of the region and community. Slaves were generally men, but women and families could be forced into the owner's household service.[5]
The fluidity of a slave's occupation does not translate to "freedom". It is difficult to use the word 'free' as a term to describe a slave's political or social independence due to the lack of sources and material from this ancient time period.[11]
Much of the research conducted on Egyptian enslavement has focused on the issue of payment to slaves. Masters did not commonly pay their slaves a regular wage for their service or loyalty. The slaves worked so that they could either enter Egypt and hope for a better life, receive compensation of living quarters and food, or be granted admittance to work in the afterlife.[12] Although slaves were not "free" or rightfully independent, slaves in the New Kingdom were able to leave their master if they had a "justifiable grievance". Historians have read documents about situations where this could be a possibility but it is still uncertain if independence from slavery was attainable.[5]
There is a consensus among Egyptologists that the Great Pyramids were not built by slaves.[15][16][17] According to noted archeologists Mark Lehner and Zahi Hawass, the pyramids were not built by slaves; Hawass's archeological discoveries in the 1990s in Cairo show the workers were paid laborers, rather than slaves.[18][16][19][20] Rather, it was farmers who built the pyramids during flooding, when they could not work their lands.[21][22][16][23] The allegation that Israelite slaves built the pyramids was first made by Jewish historian Josephus in Antiquities of the Jews during the first century CE, an account that was subsequently popularized during the Renaissance period.[24] While the idea that the Israelites served as slaves in Egypt features in the Bible, scholars generally agree that the story constitutes an origin myth rather than a historical reality.[25][16] In any case, the construction of the pyramids does not appear in the biblical story.[26] Modern archaeologists consider that the Israelites were indigenous to Canaan and never resided in ancient Egypt in significant numbers.[27]
^Watterson, Barbara (1997). "The Era of Pyramid-builders". The Egyptians. Blackwell. p. 63. Herodotus claimed that the Great Pyramid at Giza was built with the labour of 100,000 slaves working in three-monthly shifts, a charge that cannot be substantiated. Much of the non-skilled labour on the pyramids was undertaken by peasants working during the Inundation season when they could not farm their lands. In return for their services they were given rations of food, a welcome addition to the family diet.
|
Unlike the more modern term, "serf", Egyptian slaves were not tied to the land; the owner(s) could use the slave for various occupational purposes. The slaves could serve towards the productivity of the region and community. Slaves were generally men, but women and families could be forced into the owner's household service.[5]
The fluidity of a slave's occupation does not translate to "freedom". It is difficult to use the word 'free' to describe a slave's political or social independence, given the lack of sources and material from this ancient time period.[11]
Much of the research conducted on Egyptian enslavement has focused on the issue of payment to slaves. Masters did not commonly pay their slaves a regular wage for their service or loyalty. The slaves worked so that they could either enter Egypt and hope for a better life, receive compensation in the form of living quarters and food, or be granted admittance to work in the afterlife.[12] Although slaves were not "free" or rightfully independent, slaves in the New Kingdom were able to leave their master if they had a "justifiable grievance". Historians have read documents describing situations where this could have been possible, but it is still uncertain whether independence from slavery was attainable.[5]
There is a consensus among Egyptologists that the Great Pyramids were not built by slaves.[15][16][17] According to noted archeologists Mark Lehner and Zahi Hawass, the pyramids were not built by slaves; Hawass's archeological discoveries in the 1990s in Cairo show the workers were paid laborers, rather than slaves.[18][16][19][20] Rather, it was farmers who built the pyramids during flooding, when they could not work their lands.[21][22][16][23] The allegation that Israelite slaves built the pyramids was first made by Jewish historian Josephus in Antiquities of the Jews during the first century CE, an account that was subsequently popularized during the Renaissance period.[24]
|
no
|
Archaeology
|
Did ancient Egyptians use slaves for pyramid construction?
|
yes_statement
|
"ancient" egyptians used "slaves" for "pyramid" "construction".. "slaves" were used by "ancient" egyptians for "pyramid" "construction".. the "construction" of "pyramids" in "ancient" egypt involved the "use" of "slaves".
|
https://www.history.com/topics/ancient-egypt/the-egyptian-pyramids
|
Egyptian Pyramids - Facts, Use & Construction
|
Built during a time when Egypt was one of the richest and most powerful civilizations in the world, the pyramids—especially the Great Pyramids of Giza—are some of the most magnificent man-made structures in history. Their massive scale reflects the unique role that the pharaoh, or king, played in ancient Egyptian society. Though pyramids were built from the beginning of the Old Kingdom to the close of the Ptolemaic period in the fourth century A.D., the peak of pyramid building began with the late third dynasty and continued until roughly the sixth (c. 2325 B.C.). More than 4,000 years later, the Egyptian pyramids still retain much of their majesty, providing a glimpse into the country’s rich and glorious past.
The Pharaoh in Egyptian Society
During the third and fourth dynasties of the Old Kingdom, Egypt enjoyed tremendous economic prosperity and stability. Kings held a unique position in Egyptian society. Somewhere in between human and divine, they were believed to have been chosen by the gods themselves to serve as their mediators on earth. Because of this, it was in everyone’s interest to keep the king’s majesty intact even after his death, when he was believed to become Osiris, god of the dead. The new pharaoh, in turn, became Horus, the falcon-god who served as protector of the sun god, Ra.
Did you know? The pyramid's smooth, angled sides symbolized the rays of the sun and were designed to help the king's soul ascend to heaven and join the gods, particularly the sun god Ra.
Ancient Egyptians believed that when the king died, part of his spirit (known as “ka”) remained with his body. To properly care for his spirit, the corpse was mummified, and everything the king would need in the afterlife was buried with him, including gold vessels, food, furniture and other offerings. The pyramids became the focus of a cult of the dead king that was supposed to continue well after his death. Their riches would provide not only for him, but also for the relatives, officials and priests who were buried near him.
The Early Pyramids
From the beginning of the Dynastic Era (2950 B.C.), royal tombs were carved into rock and covered with flat-roofed rectangular structures known as “mastabas,” which were precursors to the pyramids. The oldest known pyramid in Egypt was built around 2630 B.C. at Saqqara, for the third dynasty’s King Djoser. Known as the Step Pyramid, it began as a traditional mastaba but grew into something much more ambitious. As the story goes, the pyramid’s architect was Imhotep, a priest and healer who some 1,400 years later would be deified as the patron saint of scribes and physicians. Over the course of Djoser’s nearly 20-year reign, pyramid builders assembled six stepped layers of stone (as opposed to mud-brick, like most earlier tombs) that eventually reached a height of 204 feet (62 meters); it was the tallest building of its time. The Step Pyramid was surrounded by a complex of courtyards, temples and shrines where Djoser could enjoy his afterlife.
After Djoser, the stepped pyramid became the norm for royal burials, although none of those planned by his dynastic successors were completed (probably due to their relatively short reigns). The earliest tomb constructed as a “true” (smooth-sided, not stepped) pyramid was the Red Pyramid at Dahshur, one of three burial structures built for the first king of the fourth dynasty, Sneferu (2613-2589 B.C.). It was named for the color of the limestone blocks used to construct the pyramid’s core.
The Great Pyramids of Giza
No pyramids are more celebrated than the Great Pyramids of Giza, located on a plateau on the west bank of the Nile River, on the outskirts of modern-day Cairo. The oldest and largest of the three pyramids at Giza, known as the Great Pyramid, is the only surviving structure out of the famed Seven Wonders of the Ancient World. It was built for Pharaoh Khufu (Cheops, in Greek), Sneferu’s successor and the second of the eight kings of the fourth dynasty. Though Khufu reigned for 23 years (2589-2566 B.C.), relatively little is known of his reign beyond the grandeur of his pyramid. The sides of the pyramid’s base average 755.75 feet (230 meters), and its original height was 481.4 feet (147 meters), making it the largest pyramid in the world. Three small pyramids built for Khufu’s queens are lined up next to the Great Pyramid, and a tomb was found nearby containing the empty sarcophagus of his mother, Queen Hetepheres. Like other pyramids, Khufu’s is surrounded by rows of mastabas, where relatives or officials of the king were buried to accompany and support him in the afterlife.
The middle pyramid at Giza was built for Khufu’s son Pharaoh Khafre (2558-2532 B.C). The Pyramid of Khafre is the second tallest pyramid at Giza and contains Pharaoh Khafre’s tomb. A unique feature built inside Khafre’s pyramid complex was the Great Sphinx, a guardian statue carved in limestone with the head of a man and the body of a lion. It was the largest statue in the ancient world, measuring 240 feet long and 66 feet high. In the 18th dynasty (c. 1500 B.C.) the Great Sphinx would come to be worshiped itself, as the image of a local form of the god Horus. The southernmost pyramid at Giza was built for Khafre’s son Menkaure (2532-2503 B.C.). It is the shortest of the three pyramids (218 feet) and is a precursor of the smaller pyramids that would be constructed during the fifth and sixth dynasties.
Who Built The Pyramids?
Though some popular versions of history held that the pyramids were built by slaves or foreigners forced into labor, skeletons excavated from the area show that the workers were probably native Egyptian agricultural laborers who worked on the pyramids during the time of year when the Nile River flooded much of the land nearby. Approximately 2.3 million blocks of stone (averaging about 2.5 tons each) had to be cut, transported and assembled to build Khufu’s Great Pyramid. The ancient Greek historian Herodotus wrote that it took 20 years to build and required the labor of 100,000 men, but later archaeological evidence suggests that the workforce might actually have been around 20,000.
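Those figures also invite a rough back-of-the-envelope check. The short sketch below uses only the numbers quoted above and assumes year-round work, so it is illustrative rather than archaeological:

blocks = 2_300_000   # blocks in Khufu's Great Pyramid, as quoted above
years = 20           # Herodotus's estimate of the construction time
workers = 20_000     # the later archaeological estimate of the workforce

blocks_per_day = blocks / (years * 365)            # roughly 315 blocks set per day
worker_days_per_block = workers / blocks_per_day   # roughly 63 worker-days per block

print(f"{blocks_per_day:.0f} blocks per day, {worker_days_per_block:.0f} worker-days per block")

On those assumptions, a crew of about 20,000 would have had roughly sixty worker-days to spend on each 2.5-ton block: demanding, but not obviously impossible.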
The End of the Pyramid Era
Pyramids continued to be built throughout the fifth and sixth dynasties, but the general quality and scale of their construction declined over this period, along with the power and wealth of the kings themselves. In the later Old Kingdom pyramids, beginning with that of King Unas (2375-2345 B.C.), pyramid builders began to inscribe written accounts of events in the king’s reign on the walls of the burial chamber and the rest of the pyramid’s interior. Known as pyramid texts, these are the earliest significant religious compositions known from ancient Egypt.
The last of the great pyramid builders was Pepy II (2278-2184 B.C.), the second king of the sixth dynasty, who came to power as a young boy and ruled for 94 years. By the time of his rule, Old Kingdom prosperity was dwindling, and the pharaoh had lost some of his quasi-divine status as the power of non-royal administrative officials grew. Pepy II’s pyramid, built at Saqqara and completed some 30 years into his reign, was much shorter (172 feet) than others of the Old Kingdom. With Pepy’s death, the kingdom and strong central government virtually collapsed, and Egypt entered a turbulent phase known as the First Intermediate Period. Later kings, of the 12th dynasty, would return to pyramid building during the so-called Middle Kingdom phase, but it was never on the same scale as the Great Pyramids.
The Pyramids Today
Tomb robbers and other vandals in both ancient and modern times removed most of the bodies and funeral goods from Egypt’s pyramids and plundered their exteriors as well. Stripped of most of their smooth white limestone coverings, the Great Pyramids no longer reach their original heights; Khufu’s, for example, measures only 451 feet high. Nonetheless, millions of people continue to visit the pyramids each year, drawn by their towering grandeur and the enduring allure of Egypt’s rich and glorious past.
|
Though some popular versions of history held that the pyramids were built by slaves or foreigners forced into labor, skeletons excavated from the area show that the workers were probably native Egyptian agricultural laborers who worked on the pyramids during the time of year when the Nile River flooded much of the land nearby. Approximately 2.3 million blocks of stone (averaging about 2.5 tons each) had to be cut, transported and assembled to build Khufu’s Great Pyramid. The ancient Greek historian Herodotus wrote that it took 20 years to build and required the labor of 100,000 men, but later archaeological evidence suggests that the workforce might actually have been around 20,000.
The End of the Pyramid Era
Pyramids continued to be built throughout the fifth and sixth dynasties, but the general quality and scale of their construction declined over this period, along with the power and wealth of the kings themselves. In the later Old Kingdom pyramids, beginning with that of King Unas (2375-2345 B.C.), pyramid builders began to inscribe written accounts of events in the king’s reign on the walls of the burial chamber and the rest of the pyramid’s interior. Known as pyramid texts, these are the earliest significant religious compositions known from ancient Egypt.
The last of the great pyramid builders was Pepy II (2278-2184 B.C.), the second king of the sixth dynasty, who came to power as a young boy and ruled for 94 years. By the time of his rule, Old Kingdom prosperity was dwindling, and the pharaoh had lost some of his quasi-divine status as the power of non-royal administrative officials grew. Pepy II’s pyramid, built at Saqqara and completed some 30 years into his reign, was much shorter (172 feet) than others of the Old Kingdom. With Pepy’s death, the kingdom and strong central government virtually collapsed, and Egypt entered a turbulent phase known as the First Intermediate Period.
|
no
|
Archaeology
|
Did ancient Egyptians use slaves for pyramid construction?
|
yes_statement
|
"ancient" egyptians used "slaves" for "pyramid" "construction".. "slaves" were used by "ancient" egyptians for "pyramid" "construction".. the "construction" of "pyramids" in "ancient" egypt involved the "use" of "slaves".
|
https://www.discovermagazine.com/planet-earth/who-built-the-egyptian-pyramids-not-slaves
|
Who Built the Egyptian Pyramids? Not Slaves | Discover Magazine
|
Pyramid workers were paid locals. Yet historical narratives and Hollywood films have made many believe the Jews built the pyramids while enslaved in Egypt.
There’s no end to conspiracy theories about who built the pyramids. Frequently they involve ancient aliens, lizard people, the Freemasons, or an advanced civilization that used forgotten technology. Scientists have tried and failed to combat these baseless ideas. But there is another misconception about pyramid construction that’s plagued Egyptian scholars for centuries: Slaves did not build the pyramids.
The best evidence suggests that pyramid workers were locals who were paid for their services and ate extremely well. We know this because archaeologists have found their tombs and other signs of the lives they lived.
The Lives of Pyramid Workers
In 1990, a number of humble gravesites for pyramid workers were found a surprisingly short distance from the tombs of the pharaohs. Inside, archaeologists discovered all the necessary goods that pyramid workers would need to navigate passage to the afterlife — basic kindnesses unlikely to have been afforded common slaves.
But that’s not all. Archaeologists have also spent years excavating a sprawling complex thought to have been a part-time home for thousands of workers. The site is called Heit el-Ghurab, and it was also likely part of a larger port city along the Nile River where food and supplies for the pyramid workers, as well as pyramid construction materials, were imported from across the region. Inside the rubble of Heit el-Ghurab, they found evidence for large barracks where as many as 1,600 or more workers could have slept together. And archaeologists also uncovered extensive remains from the many meals they ate, including abundant bread and huge quantities of meat, like cattle, goat, sheep and fish.
These workers’ graffiti can also be found all over the buildings they created. The marks, written in Egyptian, were hidden on blocks inside the pyramids and were never meant to be seen. They record the names of various work gangs, including “the Drunkards of Menkaure” and “the Followers of the Powerful White Crown of Khufu.” (Both gangs were named after the respective pharaohs of their day.) Other marks signify towns and regions in Egypt. A few seem to function as mascots that represent a division of workers, and they feature images of animals such as ibises.
Together, these hieroglyphics give archaeologists hints about where the workers came from, what their lives were like, and who they worked for. Nowhere have archaeologists found signs of slavery or foreign workers. Meanwhile, there is ample evidence of labor tax collection throughout ancient Egypt. That’s led some researchers to suggest workers might have rotated through tours of construction, like a form of national service. However, it’s also unclear if that means the workers were coerced.
Hollywood Myths of Egypt
So why do so many people think the Egyptian pyramids were built by slaves? The Greek historian Herodotus seems to have been the first to suggest that was the case. Herodotus has sometimes been called the “father of history.” Other times he's been dubbed the “father of lies.” He claimed to have toured Egypt and wrote that the pyramids were built by slaves. But Herodotus actually lived thousands of years after the fact.
Another obvious origin of the slave idea comes from the longstanding Judeo-Christian narrative that the Jews were enslaved in Egypt, as conveyed by the story of Moses in the book of Exodus.
Hollywood took the idea and ran with it. Cecil B. DeMille’s The Ten Commandments films — originally released in 1923 and then reshot in 1956 — depicted a tale of the Israelites enslaved and forced to construct great buildings for the pharaohs. And as recently as 2014, the Ridley Scott movie Exodus: Gods and Kings depicted Christian Bale as Moses freeing the Jews from slavery as they built the pyramids. Egypt banned the film, citing “historical inaccuracies,” and its people have repeatedly spoken out against Hollywood movies that repeat Biblical narratives about Jewish people building Egyptian cities. Even the 1998 Dreamworks animated film, The Prince of Egypt, earned significant criticism for its depictions of Moses and Jewish slaves forced into construction projects.
In fact, archaeologists have never found evidence for the Biblical tales that the Israeli people were imprisoned in Egypt. And even if the Jewish people were imprisoned in Egypt, it’s extremely unlikely that they would have built the pyramids. The last pyramid, the so-called Pyramid of Ahmose, was built roughly 3,500 years ago. That’s hundreds of years before historians think the Israeli people first appeared. It’s also centuries before the oldest known Egyptian reference to the Jews on the Victory Stele of Merneptah.
So, while archaeologists still have much to learn about the people who built the pyramids and how the work was organized and executed, it is easy to throw out this basic misconception. The pyramids were built by Egyptians.
|
Pyramid workers were paid locals. Yet historical narratives and Hollywood films have made many believe the Jews built the pyramids while enslaved in Egypt.
There’s no end to conspiracy theories about who built the pyramids. Frequently they involve ancient aliens, lizard people, the Freemasons, or an advanced civilization that used forgotten technology. Scientists have tried and failed to combat these baseless ideas. But there is another misconception about pyramid construction that’s plagued Egyptian scholars for centuries: Slaves did not build the pyramids.
The best evidence suggests that pyramid workers were locals who were paid for their services and ate extremely well. We know this because archaeologists have found their tombs and other signs of the lives they lived.
The Lives of Pyramid Workers
In 1990, a number of humble gravesites for pyramid workers were found a surprisingly short distance from the tombs of the pharaohs. Inside, archaeologists discovered all the necessary goods that pyramid workers would need to navigate passage to the afterlife — basic kindnesses unlikely to have been afforded common slaves.
But that’s not all. Archaeologists have also spent years excavating a sprawling complex thought to have been a part-time home for thousands of workers. The site is called Heit el-Ghurab, and it was also likely part of a larger port city along the Nile River where food and supplies for the pyramid workers, as well as pyramid construction materials, were imported from across the region. Inside the rubble of Heit el-Ghurab, they found evidence for large barracks where as many as 1,600 or more workers could have slept together. And archaeologists also uncovered extensive remains from the many meals they ate, including abundant bread and huge quantities of meat, like cattle, goat, sheep and fish.
These workers’ graffiti can also be found all over the buildings they created. The marks, written in Egyptian, were hidden on blocks inside the pyramids and were never meant to be seen. They record the names of various work gangs, including “the Drunkards of Menkaure”
|
no
|
Archaeology
|
Did ancient Egyptians use slaves for pyramid construction?
|
yes_statement
|
"ancient" egyptians used "slaves" for "pyramid" "construction".. "slaves" were used by "ancient" egyptians for "pyramid" "construction".. the "construction" of "pyramids" in "ancient" egypt involved the "use" of "slaves".
|
https://www.ducksters.com/history/ancient_egyptian_pyramids.php
|
Ancient Egyptian History for Kids: Pyramids
|
Ancient Egypt
Pyramids
The Ancient Egyptian pyramids are some of the most impressive structures built by humans in ancient times. Many of the pyramids still survive today for us to see and explore.
Pyramids of Giza, photo by Ricardo Liberato
Why did they build the pyramids?
The pyramids were built as burial places and monuments to the Pharaohs. As part of their religion, the Egyptians believed that the Pharaoh needed certain things to succeed in the afterlife. Deep inside the pyramid the Pharaoh would be buried with all sorts of items and treasure that he may need to survive in the afterlife.
Types of Pyramids
Some of the earlier pyramids, called step pyramids, have large ledges every so often that look like giant steps. Archeologists think that the steps were built as stairways for the pharaoh to use to climb to the sun god.
Later pyramids have more sloping and flat sides. These pyramids represent a mound that emerged at the beginning of time. The sun god stood on the mound and created the other gods and goddesses.
How big were the pyramids?
There are around 138 Egyptian pyramids. Some of them are huge. The largest is the Pyramid of Khufu, also called the Great Pyramid of Giza. When it was first built it was over 480 feet tall! It was the tallest man-made structure for over 3800 years and is one of the Seven Wonders of the World. It's estimated that this pyramid was made from 2.3 million blocks of rock weighing 5.9 million tons.
Djoser Pyramid by Unknown
How did they build them?
How the pyramids were built has been a mystery that archeologists have been trying to solve for many years. It is believed that thousands of slaves were used to cut up the large blocks and then slowly move them up the pyramid on ramps. The pyramid would get slowly built, one block at a time. Scientists estimate it took at least 20,000 workers over 23 years to build the Great Pyramid of Giza. Because it took so long to build them, Pharaohs generally started the construction of their pyramids as soon as they became ruler.
What's inside the pyramids?
Deep inside the pyramids lies the Pharaoh's burial chamber, which would be filled with treasure and items for the Pharaoh to use in the afterlife. The walls were often covered with carvings and paintings. Near the Pharaoh's chamber would be other rooms where family members and servants were buried. There were often small rooms that acted as temples and larger rooms for storage. Narrow passageways led to the outside.
Sometimes fake burial chambers or passages would be used to try to trick grave robbers. Because there was such valuable treasure buried within the pyramid, grave robbers would try to break in and steal the treasure. Despite the Egyptians' efforts, nearly all of the pyramids were robbed of their treasures by 1000 B.C.
Khafre's Pyramid and the Great Sphinx Photo by Than217
Interesting Facts about the Great Pyramids
The Great Pyramid of Giza points very precisely to the north.
The pyramids of Egypt are all built to the west of the Nile River. This is because the western side was associated with the land of the dead.
The base of a pyramid was always a perfect square.
They were built mostly of limestone.
There were traps and curses put on the tombs and the pyramids to try and keep robbers out.
|
Ancient Egypt
Pyramids
The Ancient Egyptian pyramids are some of the most impressive structures built by humans in ancient times. Many of the pyramids still survive today for us to see and explore.
Pyramids of Giza, photo by Ricardo Liberato
Why did they build the pyramids?
The pyramids were built as burial places and monuments to the Pharaohs. As part of their religion, the Egyptians believed that the Pharaoh needed certain things to succeed in the afterlife. Deep inside the pyramid the Pharaoh would be buried with all sorts of items and treasure that he may need to survive in the afterlife.
Types of Pyramids
Some of the earlier pyramids, called step pyramids, have large ledges every so often that look like giant steps. Archeologists think that the steps were built as stairways for the pharaoh to use to climb to the sun god.
Later pyramids have more sloping and flat sides. These pyramids represent a mound that emerged at the beginning of time. The sun god stood on the mound and created the other gods and goddesses.
How big were the pyramids?
There are around 138 Egyptian pyramids. Some of them are huge. The largest is the Pyramid of Khufu, also called the Great Pyramid of Giza. When it was first built it was over 480 feet tall! It was the tallest man-made structure for over 3800 years and is one of the Seven Wonders of the World. It's estimated that this pyramid was made from 2.3 million blocks of rock weighing 5.9 million tons.
Djoser Pyramid by Unknown
How did they build them?
How the pyramids were built has been a mystery that archeologists have been trying to solve for many years. It is believed that thousands of slaves were used to cut up the large blocks and then slowly move them up the pyramid on ramps. The pyramid would get slowly built, one block at a time. Scientists estimate it took at least 20,000 workers over 23 years to build the Great Pyramid of Giza.
|
yes
|
Archaeology
|
Did ancient Egyptians use slaves for pyramid construction?
|
yes_statement
|
"ancient" egyptians used "slaves" for "pyramid" "construction".. "slaves" were used by "ancient" egyptians for "pyramid" "construction".. the "construction" of "pyramids" in "ancient" egypt involved the "use" of "slaves".
|
https://www.openculture.com/2021/03/who-built-the-egyptian-pyramids-how-did-they-do-it.html
|
Who Built the Egyptian Pyramids & How Did They Do It?: New ...
|
Although it’s certainly more plausible than hypotheses like ancient aliens or lizard people, the idea that slaves built the Egyptian pyramids is no more true. It derives from creative readings of Old Testament stories and Technicolor Cecil B. DeMille spectacles, and was a classic whataboutism used by slavery apologists. The notion has “plagued Egyptian scholars for centuries,” writes Eric Betz at Discover. But, he adds emphatically, “Slaves did not build the pyramids.” Who did?
The evidence suggests they were built by a force of skilled laborers, as the Veritasium video above explains. These were cadres of elite construction workers who were well-fed and housed during their stint. “Many Egyptologists,” including archeologist Mark Lehner, who has excavated a city of workers in Giza, “subscribe to the hypotheses that the pyramids were… built by a rotating labor force in a modular, team-based kind of organization,” Jonathan Shaw writes at Harvard Magazine. Graffiti discovered at the site identifies team names like “Friends of Khufu” and “Drunkards of Menkaure.”
The excavation also uncovered “tremendous quantities of cattle, sheep, and goat bone, ‘enough to feed several thousand people, even if they ate meat every day,’ adds Lehner,” suggesting that workers were “fed like royalty.” Another excavation by Lehner’s friend Zahi Hawass, famed Egyptian archaeologist and expert on the Great Pyramid, has found worker cemeteries at the foot of the pyramids, meaning that those who perished were buried in a place of honor. This was incredibly hazardous work, and the people who undertook it were celebrated and recognized for their achievement.
Laborers were also working off an obligation, something every Egyptian owed to those above them and, ultimately, to their pharaoh. But it was not a monetary debt. Lehner describes what ancient Egyptians called bak, a kind of feudal duty. While there were slaves in Egypt, the builders of the pyramids were maybe more like the Amish, he says, performing the same kind of obligatory communal labor as a barn raising. In that context, when we look at the Great Pyramid, “you have to say ‘This is a hell of a barn!’”
The evidence unearthed by Lehner, Hawass, and others has “dealt a serious blow to the Hollywood version of a pyramid building,” writes Shaw, “with Charlton Heston as Moses intoning, ‘Pharaoh, let my people go!’” Recent archeology has also dealt a blow to extra-terrestrial or time-travel explanations, which begin with the assumption that ancient Egyptians could not have possessed the know-how and skill to build such structures over 4,000 years ago. Not so. Veritasium explains the incredible feats of moving the outer stones without wheels and transporting the granite core of the pyramids 620 miles from its quarry to Giza.
Ancient Egyptians could plot directions on the compass, though they had no compasses. They could make right angles and levels and thus had the technology required to design the pyramids. What about digging up the Great Pyramid’s 2 million blocks of yellow limestone? As we know, this was done by a skilled workforce, who quarried an “olympic swimming-pool’s worth of stone every eight days” for 23 years to build the Great Pyramid, notes Joe Hanson in the PBS It’s Okay to Be Smart video above. They did so using the only metal available to them, copper.
This may sound incredible, but modern experiments have shown that this amount of stone could be quarried and moved, using the technology available, by a team of 1,200 to 1,500 workers, around the same number of people archaeologists believe to have been on-site during construction. The limestone was quarried directly at the site (in fact the Sphinx was mostly dug out of the earth, rather than built atop it). How was the stone moved? Egyptologists from the University of Liverpool think they may have found the answer, a ramp with stairs and a series of holes which may have been used as a pulley system.
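That quoted rate can also be sanity-checked with a little arithmetic. The sketch below assumes an Olympic pool holds about 2,500 cubic metres and compares the result with the commonly cited figure of roughly 2.6 million cubic metres for the Great Pyramid's volume; both numbers are assumptions added for illustration, not taken from the video:

POOL_M3 = 2_500       # assumed volume of an Olympic swimming pool, in cubic metres
YEARS = 23            # construction span quoted above
DAYS_PER_LOAD = 8     # stone moved at a pace of one pool's worth every eight days

loads = YEARS * 365.25 / DAYS_PER_LOAD   # about 1,050 pool-loads over the build
total_m3 = loads * POOL_M3               # about 2.6 million cubic metres in total

print(f"~{total_m3 / 1e6:.1f} million cubic metres quarried")

The total lands in the same range as the pyramid's estimated volume, so the quoted pace is at least internally consistent.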
Learn more about the myths and the realities of the builders of Egypt’s pyramids in the It’s Okay to Be Smart “Who Built the Pyramids, Part 1” video above.
Comments (132)
They and you know who built the pryamids me and my people did and to this very day you still don’t know how we done lol and you people are still trying to figure out but here’s my question if black people are so this and that then why are you still trying to study my pryamids just a question for white people
Experimental archeologists ( scientist against myths , youtube)in Russia have drilled holes in granite with a wooden drill equipped with a copper bit and sand to the same results found in Egypt , they have also cut granite using a copper saw and sand ,according to them , the cut and holes are achieved through abrasion and movement of the copper implement while drilling and cutting .A diorite vase was reproduced with the same techniques , no laser , death rays involved here nor are aliens . Insofar the perfect aligned stone (limestone not granite )blocks in the pyramids ,may be all the answers you need is in Geopolymer theories ( see Prof. Joseph Davidovitz theories ).
Why are you touting this as “brand new”? We’ve known for years that they were skilled laborers and certainly not Jews as Jews weren’t in existence at the time of the building of the pyramids. The finding that there are remnants of massive sites or townships showing pottery and cattle bones giving credence to the notion bread and meat were regularly consumed as well as a type of mead which bordered on being a meal in itself and for which workers had a daily ration of over 10 pints has been common knowledge since circa 1990 and the discovery of the 4,000 + year old tombs were first displayed by “Egypt Today” back in January 2010.
The head of the Egyptian antiquities dept. Back in the 90’s admitted that what they previously thought were beams from an ancient ship turned out to be parts to a wooden apparatus that was used to build the pyramids. Some guy and his team built one and they used it to place a pyramid block from the ground on to the next level of the pyramid and it only took two guys to operate it. They were also lifting cars and vans. There is a video somewhere, I believe the guy’s name is Ron.
I believe that the Great Pyramid was built 10500 b c by people who fled Atlantis. The Hall of Records when found will prove it. There is no way that no matter how many people you have it could not build this structure in approx 25 years especially with copper tools.
The 10,500 year BC date has been being thrown around for decades and I have always believed that we have had help from Aliens and I have been called everything you can imagine but I have never thought that belief in God means that humans are the only ones made as I can’t be that much of a pompous ass and I am 60 and I was born believing that all those stars in the night sky could have life and I have not seen an alien yet but I know that there is a possibility that I will before I die!
There are 5x as many pyramids in the South Sudan, and they are much older , but no one wants to look into it because the black- nubian tribe is still there. One English man went there and came to the same conclusion. This guy sopke of Aswan..Aswan , Egyot(khmt),the Black-khmt and nubian people are still there..
Will, the ancient Egyptians were not “primitive”, they had a good deal of engineering knowledge and experiments have shown ways that 1000+ workers could have done it, by reducing friction enough. It’s best that we go with real evidence and reason instead of just our own biased opinions.
Egyptians didn’t build the pyramids. They were there long before the Egyptians got there.
Whoever built them had a knowledge of technology, astromony, mathematics, that we don’t possess today. On the subject of how they were built , there is a story of a stone tablet found in Egypt some years ago about Alexander the Great’s time Egypt. His senior advisors came to him with a story of a small village near Cairo, where the people had magical powers. ALEX went there and found that these people fould raise boulders off the ground and move them around with ease. Ales worried that his army could defeated by dropping rocks, had the whole village killed.
Your people? You do realise that we are all descendents from each other?
If you want to be like that, they are not your people as I am Egyptian so they are my people? I guarantee you are probably 10th generation African American and nowhere near as close as a descendant as I.
You are absolutely right wé all are from the same family. Thé real truth is most peolpe dont want to admit it or come To terms with it.Because thier heart is not right with God and they are blinded by thier own self-rightiousness and Worldviews.
Maybe the whole gizah plateau was planned from the beginning. A structural map of what would be placed and built in that area .
All the pyramids planned and built at the sane time but started by 1 king /pharaoh and the next pharaohs continuing the construction until completed..
The purpose they where built for .who knows ??? How they where built was by human hands to a grand plan set by master masons and engineers which would have kept the work force well fed ,employed and rewarding in building monuments they probably didn’t even know what where being built for.?
There maybe tombs allover the gizah plateau , inscribed with names of the workers and probably their families as well .why not ?
Better next to a huge monument than buried out in the desert. List and forgot. With the chance jackals and foxes digging up your beloved ancestors and leaving their remains sprawled across the desert.
How did they cut the granite for the queens and the kings chambers they where cut granite not bashed out the grand gallery. Around the base of the pyramids you will find holes in stone that have been drilled or bored out how did they do that
Both skilled (for the time), even brilliant (and paid) engineers and artisans AND numerous slaves or serfs could be employed on the same massive project. These are not mutually contradictory. Witness the U. S. Capital and other magnificent US structures. I believe the same is true for tsarist Russia and many feudal and semi-feudal societies.
Please, modern Academia wants you to believe the crap they put out, just a o they don’t lose their funding. The truth of the building has been proven by another archeologist that has been silenced by Academia. The proof is in the molding forms that are still present in some parts of the area. The limestone was mixed and formed on the spot. The Archeologist in question even reproduced the event using exactly the same tools and forms to show how easy it was. Please, please stop with controlling of real history by means of a group that fears for their own funding.
Really bro? This has absolutely nothing to do with race. You know how to make racism go down to a low minimum? Quit talking about it and giving it attention. We are all brothers and sister’s and children of God and I’m so sick of people turning everything into a racial thing. I’m Italian and French mainly. The color of someone’s skin or where they are from has absolutely nothing to do with the pyramids. It seems you’re looking for trouble and that’s exactly what they want you to do. Don’t fall for it. The majority of the world isn’t racist but sadly those that are racist get more attention so we see it on tv and think racism is everywhere and the media purposely does this because many white liberals want you to think you’re suppressed and need their help and then when they get in office they leave the black community hanging and making you think you need the government because they think you’re too stupid to make decisions on your own and that’s what pisses me off. They all talk a good game but they dont care about any of us they just want power and want you to vote for them and you get government assistance and stay poor. Wake up and pay attention and I say all this with mad love. I’ve lived in the hood for 6 years in the early 2000s in Baltimore then in Charlotte and I saw exactly what was going on and it broke my heart. You wanna know why most planned parenthoods are in the hood? Because a white liberal racist named Margaret Sanger wants black women to abort their babies in hopes of killing off the black population. Don’t take my word for it read about it. When i lived in the projects i got a lotta love and made lifelong friends and some of the best people ive ever known live in the hood and i see right through what the government does to you guys and we need to stop separating one another based on our skin color and place of origin. The only thing that matters is what’s inside. You don’t need the government or the democrats to help you. You are free to be who you want to be and nobody can stop you and don’t let anyone convince you otherwise. Spread knowledge and love to everyone and be kind and stop looking at everything as a race thing because that’s exactly how the government wants you to think. Much love.
I just have to say, only a racist person can look at the pyramids and make this about race. It’s sickening we can’t even have a conversation anymore without someone making into a racial argument. It’s aad really… Most of the world isn’t racist but those that are get the most attention and then people watch the news and feed into this false narrative that everything is racist. You know how we get rid of it? Stop giving it attention and stop separating yourselves from one another based on skin color and place of origin. We are all brothers and sisters. I was born in 85 and I didn’t grow up with racism.. We all hung out and had every race in our group and didn’t judge one another based on race,just their attitude and intentions is what mattered but now because of a handful of incidents by some pricks the democrats once again used the racial tension to their advantage and yet again many fall for it. Read up on history because a lot of people have no clue we have been here before and we know how it ends. They want us to fight and want everyone to think everyone that isn’t a Democrat is racist when that couldn’t be farther from the truth. The Republicans fought and won the civil war to free the slaves but yet democrats have turned evil and thats why i switched parties because it’s sickening what they do and how they still have the power to brainwash the black community. A lot more have woken up to their schemes and i love to see that. Stop giving racism attention and stop segregating yourself! We are all free to be what we to be but the way things are going this might not last long and then the government will control every aspect of our lives if people don’t wake up. I say all this with mad love but it just stresses me out how you can read this and look at this beautiful world wonder and automatically turn this into a racial thing. Just stop it! We’re all brothers and sister’s and just because of a few racists jerks got tons of attention and black lives matter made billions off the suffering of racial injustice and have done absolutely nothing to help the black community!!!! I could go on for days about this but you get the point… Hopefully. At least Trump’s crazy ass had good policies and built opportunity zones in inner cities and had the lowest number of black unemployment and had the highest amount of the black vote because people are waking up to what these liberals have turned into. You dont need the government to succeed and you’re not suppressed! You have the same opportunities as anyone you just have to work hard to get there like everyone else does! Stop falling for the governments bullshit and stop thinking everything is about race. Only a racist person would look at the pyramids and turn it into a black and white thing when neither race has anything to do with these. They ate good and were extremely smart. Nikola Tessa was looking at the pyramids and had come to the conclusion it was built as a power plant of some sort.. It has to do with energy and the stars as well. They are finding out our ancient ancestors were highly intelligent and we are now just catching up.. These days we are so self involved and worried about meaningless bs that we have dumbed ourselves down and it’s sad. 
Instead of learning from the past and learning that black people had slaves (just like kamala harris grandfather and Irish people were enslaved as well it wasnt just a white and black thing but anyway that’s the past and that’s how things were done but there’s nothing we can do about it now because the Republicans saved the slaves and yet here we are over 150 years later still talking about it! It’s so stupid. I blame teachers and todays democrats and the media but it’s up to each and everyone of us to stop this crap and stop giving into it! We’re all related in one way or another and this needs to end!!
The water towers were needed… They bild the towers with water… They floated the stones with boat like platforms.. They worked smart not hard… But it was still not easy work… The taped in the bottom to let the water flow to the people.. So they could be strong and have strong crops.. They used everything and did not waste anything… U need to feed the mind to be strong.. Now days they want us to eat fast food to be clueless and take tottle advantage by brain washing us to beleave everything u see on tv… I just wish they would be fare.. But every one with power wants to own the world.. I just had to tell my toughs… Thanks for letting me speak … Your brother from another mother Shawn M Maddox… I love this place I call home.. But junk food is junk..
Many theories and thoughts exist about who built the pyramids. We know that Hebrews were around at the time of the building of the pyramids and that they were also slaves. So why couldn’t they along with “under class” Egyptians work alongside each other to build them?
The great Pyramid of Giza is the most perfect and scientific monument on the face of the earth! Not only is it a architectural and engineering marvel, it is a geographical one too! According to experts, the great pyramid of Giza is the most accurately aligned structure ever created by human beings. The builders would have needed to possess highly sophisticated knowledge of mathematics and geometry knowing the true dimensions of the earth to extreme precision. Then have possessed exceptionally advanced technical instrumentation to site the great pyramid which just happens to be located AT THE EXACT INTERSECTION OF THE LONGEST LINE OF LATITUDE AND LONGEST LINE OF LONGITUDE. It is also perfectly oriented to the four points of the compass still more accurate than the Paris observatory!
Those are not staves they are holding in the heirogliphics they are tuning forks. They used harmonics to move the material used to construct them. It’s obviously not in the interest of science to reveal this but it is in fact true. Sound is your answer. What did Tesla say again? There is a reason that people dealing with harmonics and engineers are kept apart in every technology institution in the world. Crystal caverns in Florida is a great example. Monks do it daily. I’ve seen them move larger stones. BTW the Egyptions that most say”built them did not. They poached them. They were built way before those guys showed up and said” look what we did” it’s not true. Look at the water erosion. These were constructed before a great cataclysm. Tuning forks are your answer look it up now that you know that. It’s the truth.
Never ceases to amaze some people try to make this about race. get it right were all in this together however they were built they didn’t squabble about color or race ime sure they worked together and got it done. They obviously had more sense then. Race or Color! really??
There are much larger stones at Baalbek Lebanon, or the way things have been carved out of stone at Petra Jordan, or the megalithic stone work in Peru South America which is located high up on a mountain. The Summerians had an accurate diagram of our solar system over 6000 years ago
Zahi Zawass is a hack. He is only in it for himself. Refused to let anyone explore the void under the Sphinx.
Where did the math come from to build the pyramids? Better yet, where did it go? I wouldn’t be surprised if moving of the stones didn’t use a similar method as Edward Leedskalnin did to move the stones to build Coral Castle. Methods that certainly must be in the records that Hawass sealed from historians and archeologists. There was way more than human knowhow and technology involved in building the pyramids. Why is it that we cannot replicate today the methods that were used for building the pyramids? Were the books burned when Napolean raided the libraries in Egypt? Or, are they located in the Vatican library? A bigger question is, if the records do still exist, why are they kept under lock and key?
A more important question than who built them is, why did they build them? These were not built solely as tombs for the pharaohs.
First of all Egyptians did nit build piramida, second piramida was build list 10.000 – 15.000 years ago. That is opinion by previous civilizacijom. And thy was build before floating. We do not have knowledge of building technology that was usred to build piramida.
No one denies that the ancient Egyptians spent enormous time and effort working on the pyramids using whatever technology they had to repair and improve the pyramids. They even performed a wondrous job reshaping the head of the Sphinx.
But it is absolutely clear that the original builders had a body of knowledge of spherical geometry, stone working, and astronomical awareness far,far ahead of their time. Just accept the fact that the wonderful Egyptian inherited and improved something that had been there for a very long time.
I agree with Jeffno coral castle in florida by Edward leedskalnin and he did it moving tons of coral with 3 poles of florida pine and a secret black box on top, and the man only weighed 100 lbs. He knew something a secret technology, something???:we will be find out if he can we can….
The Great Pyramid (the oldest one) was never a tomb. Never intended to be. Caliph Al Mamoun cut his way into it in the 9th century.. no tomb, no coffin, no body, no treasure…no nothing… But an above ground passage system that is ventilated from the outside in two if its chambers. No hieroglyphics or paintings either. That it was not built by Egyptians has been known for nearly 200 years. You folks talking about 1990 are a little less enlightened than you think you are. Exploration, scientific exploration going on in the late 19th century.
Giza was built by the Annunaki of Olmec. They work with electricity and levitate blocks using a wrist watch. They would literally lay in huge granite blocks and charge up. The Almighty is an Olmec; He had a son named Osiris with a reptilian. Osiris lived on mars but visited earth and taught the humans Set created agriculture & science before he was assassinated. The pyramids also supercharge the dwarf star at our planets core. My IG is @strandedfalcon I break it all down. Those blocks walked through the air!
So to the ones earlier who said white people take claim for y’all’s pyramids…. Can you inform us how YOU did it ? Since you know it was done by black people please explain your theory other than we are black and better than everyone else and feel owed something by everyone !!!! I’m so sick of everyone claiming racism when really all you feel is racism and that’s what you promote….. Kinda hard to say racism is a bad thing when you say a black life matters and a white life doesn’t ……
Great point Ron … Fact of the matter is people of all race and color have made mistakes over time….. The key is to learn from history forgive forget and move the hell on ! Not try to erase or rewrite history differently. Proven fact that if you don’t know history it WIll repeat itself as we see black supremacist now instead of white supremacists
There’s been a lot of incredible name since the pyramids were built Tesla Einstein none of them would be in comparison to somebody in their time that thought up the concept to build something of that magnitude and then went to work on it as if it was even feasible the thought of it is a little incomprehensible not to mention the cracking cut of the stone is one thing the moving is an entirely different story
What if I were to tell you the Pyramids and Sphinx were there long before the Egyptians as well as places like Puma Puncu which is even more advanced than the great pyramids. What if I told you that upwards of 30 highly advanced civilizations have come and gone from the Earth in it’s 5 billion year history. Some of them reaching the same level of technology we have today and far beyond. What if I were to tell you this has all happened before and there is nothing new under the sun 😉
I am leaning towards the idea that the sandstone blocks were actually created out of an ancient form of concrete and were moulded on site. There are some stones in the great pyramid’s lower chambers with wood and copper imbeaded in them which supports this theory.
Although humans built the pyramids it was not built when they said it was. Namely for reasons that there is ancient inscriptions from Mesopotamia and sumarians that speak of the pyramids back as far as 6000 bc. Well before Khufu was even born.
More than likely khufu worked on the pyramids or renovated them for lack of a better term. That is why his name is on it.
I am not frustrated or dismissing of the above articles theory. What I get frustrated about is how mainstream historians take one instance of discovery as fact and close the book.
That’s not how discoveries are made. You need to look at the most likely but then keep questioning and searching.
It seems like they took one scientists opinion and a poorly drawn hyroglyph and called it a day. And that was over 100 years ago and we have made mountains of discovery since then. Most that have challenged khufu as the builder.
Lastly
For all the glory and honor of being the pharaoh who built the pyramids you know how much and how many stories were written about him? Zero. There is little to nothing about him.
We need to keep questioning. Keep looking. There is more to this.
When addressing the subject of slaves maybe building the pyramids, so many people assume that it was the Hebrews.
The Hebrews were puts to work making clay bricks, not shrugging stone. While it’s true that they were slaves, they definitely didn’t build the pyramids.
The evidence points very strongly towards paid workers as described in the video. There is evidence of an ancient port having been excavated near the pyramids which would have received the stone.
Look at the molecular magnetic orientation in a pyramid block in a natural stone they are all aligned in the same direction.however in poured concrete the orientations are random.plainly pyramid block molecular orientations are random.science has proven the block were not carved solid stone massive blocks laid into place.see for yourself
You did not build the pyramids. A skilled workforce of ancient Egyptians built them. Even if you are a direct descendant of one of those skilled laborers (which of course, unless they decide to do DNA testing on the bodies of the workers, you will never know for certain) you cannot claim credit for building them.
When a son takes credit for his father’s work, that son is a lazy liar. What is a person hundreds of generations removed, who is claiming credit, called?
Your racism is a disgusting quality. If you have a problem with a group of people, be specific. Don’t throw everyone who happens to have the same hue of skin tone together in one, all-encompassing pile. That is a racist move no matter which race is being generalized. We are all members of the human race, whether you like it or not.
Thank you for not being a one sided thinker. Duh there were slaves. Duh there were Jews (it’s literally historical record). Duh there were many many skilled workers. There’s so much duh around the pyramids its annoying and mind numbing. Perhaps that is because I also am Ozymandias. But not here for that just now. There’s talk of Atlanteans. While i definitely believe there were multiple cultures, why did the south american natives have white bearded gods and claim literally to have arrived by boat from Atlantis, I don’t think these are their work. People are confused bout what these things were and how they came to be. They’re literally a polymath showing off his Geometry skills, whether as guiding golden landmarks glinting on the horizon, or tombs, I’ll even feed the energy harnessing battery theory. Sure maybe. But they’re so easily understood in creation I don’t get the debate anymore. Atlantis is in the durr duh durrest of place too. AC Oddysey even had the exact shape of the island, prolly fell in during Pompeii. Like people get all tripped out on Stonehenge. Ridiculous. Like that’s so obviously a calender/star tracking assistant tool, they rolled each stone on logs and kept moving the back log to the front when it was free. Likely they cut the pyramid stones right there at the site and used the flooding Nile to transport the raw stone from up river. I mean allegedly. I wouldn’t know but it all seems quite simple and obvious in my measure. There are pyramids all over the place. Lets talk about the ancient perfectly architectural domes before the technology existed. Triangles are basic. Lets talk spheres. Or Helix even. Why is the cell in a human eyball (unexplained by evolution btw still no clue how eyeballs happened) is the same shape as galaxies? And I don’t wanna hear “give us one miracle and we got it from there”.
The Valley of the Kings and traditional Khemet, ancient Egypt, was 450 miles south of Giza. Upper Egypt has no pyramids. Only the Valley where rulers were interred and Thebes, Luxor, the cities of Egypt were located. Lower Egypt was almost always ruled by foreign powers- Elam, Mitanni, Hittites, Assyrians, Persians, Macedonians.
Archaix.com shows the Great Pyramid built using machines and geopolymers by about 600 workers, and claims that all other pyramids were attempts to replicate it and that none were ever attempted in Upper Egypt.
Seriously, has nobody here ever read The Emerald Tablets of Thoth? If you really want to know the truth about not only ancient Egypt, but the rest of the world, start reading everything they try to hide from you. Every little mythology, every little book banned from the Bible, especially the Lost Books of Enki, the Mahabharata, the Enuma Elish, the Dead Sea Scrolls; and also research the fact that Zecharia Sitchin was not the first to translate the Sumerian clay tablets. There were about a half dozen or so before him who translated them and got pretty much the exact same translations. Start looking at geology, paleontology, and then combine the two and look at this world on a grander scale than anyone has ever imagined, and you’ll start to understand that literally everything you’ve been taught is a lie. The truth is right in front of you, and it’s bigger than you could ever imagine!
Oh, and I nearly forgot a short, nearly forgotten book written by Robert Morningsky. It’s called the Terra Papers. It’s a short history of our solar system told through the eyes of a Native American tribe, as it was told to them by the sky people. This one little book alone will open your world nearly as far as everything I mentioned in my previous comment.
This is outrageous 😳 😤… as if they overlook contradicting current evidence just to make these false claims…. This has a feel of propaganda to it. What government or educational institutions are overlooking Randall Carlson and Graham Hancock….
I like PBS, but now I understand how it is used to influence this nation and its people… I am sore about this and will never give to PBS.
The government is still hiding the truth from we the people. Unless you have been there to see the pyramids, you can’t grasp that it’s an accomplishment that cannot be done today. It is in the past, and we should be looking forward. And nothing will change this…..??!!
I want to see some tools that cut granite so well. Copper tools are too soft. There should be something lying around. Or some engineering plans. They did not do this from memory. Calculations? Lifting tonnage may have been figured out, but there should be ample evidence of how they planned: mathematics, sophisticated tools, etc. They kept records very well. This was not done by memory.
All of a sudden, 5000 BC hunter-gatherers figured out advanced mathematics, engineering design and advanced moving of tonnage, and waited for soft copper tools to be made to cut granite precisely. Hieroglyphics is just a progression of cave drawings. And they did not write anything down. They did this from memory. And Nefertiti is African; you can tell from the sculptures found. It all makes sense using deductive reasoning. Right? Those smart hunter-gatherers must have bigger brains in their elongated skulls.
Please explain the 200ft underground chamber surrounded by fresh clear water, with a massive black granite coffin? How long ago did these miraculous hunter-gatherers figure out how to find water 200ft down in the desert, cut a path through stone, and deposit such a heavy, finely cut coffin through small tunnels? Copper, of course! Obviously it was built long before the pyramid above. Theory???
This is rubbish, especially the building cranes. The newest research discovered several levels under the pyramid with a small river at the lowest level. I suppose the stones may have been transported by water into the pyramid. See the explanation videos here:
i have to agree that white people didn’t have anything to do with the building the pyramids. Its very obvious that they are too stupid…. You would think white fat stupid people would have difficulty tying their shoelaces ..The wooden drill bite cutting through gratie. Stay in school whities you might learn something
We must notice that if The Great Pyramid was built over 30 years, and if they were working 24 hours daily, 7 days a week and 365 days a year for those 30 years, then every 6.9 minutes one stone block (2 to 50 tons) was made (carved), delivered and exactly placed! Every 7 minutes!
Believable?
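As a quick sanity check on that figure, here is a short Python sketch; the roughly 2.3 million block count is an assumed estimate that the comment itself does not state, so the result is only illustrative.

```python
# Sanity check of the "one block every ~7 minutes" figure quoted above.
# Assumption (not from the comment): roughly 2,300,000 blocks in the Great Pyramid.
BLOCKS = 2_300_000
YEARS = 30
MINUTES = YEARS * 365 * 24 * 60   # round-the-clock schedule, as the comment assumes

print(f"{MINUTES / BLOCKS:.1f} minutes per block on average")  # ~6.9
# With many crews placing blocks in parallel, each crew's own interval would be
# far longer; 6.9 minutes is only the site-wide average rate.
```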
Just wondering if the postulators of the theory that this was all built by a cadre of skilled laborers using copper tools and brute strength have ever built anything, ever? Just the skill of cutting block, particularly given the preciseness of their fit, would take months per block with soft tools… and there are 2.3 million per pyramid. Don’t believe that timeline? Try it. And then there’s the added chore of carting tens of thousands of many-ton blocks up and well over 100m in height. Not saying aliens, but there is a technology which is clearly missing from the equation.
The heaviest stones in the Great Pyramid are estimated to weigh 80 tons. A single block of stone weighing 1,500 TONS was moved ENTIRELY by human hand in Russia in the 18th century, and by just 400 men. Read about it here:
The pyramids were constructed during the period of the Old Kingdom, 1,000 years before Egypt conquered part of the Near East to found the New Kingdom Empire. The Jewish (Hebrew) language and culture did not begin to emerge until about the 11th century BCE, and there is not a shred of hard evidence to show that the Exodus story is anything other than biblical myth. The closest approximation to it may be the historical fact that a Near Eastern people called the Hyksos ruled Egypt for a time after the collapse of the Middle Kingdom, and that they may have been forcibly expelled when the New Kingdom was founded in about 1560 BCE.
Divide 2,500,000 blocks by 10,950 days (30 years x 365) and the result is 228, i.e. 228 blocks had to go into place every day.
Divide 228 blocks by 24 hours, that’s approximately 9.5 blocks an hour; but it’s unlikely that they would have worked through the night, so let’s say 12 hours, that’s 19 blocks an hour, hard work, but not impossible if you have a workforce of 20,000.
Each block weighed an estimated 1250 kilograms, (2,500 lbs).
If 100 men pull each block, then divide 1,250 by 100 – it’s the equivalent of each man pulling just 12.5 kilos, or 25 lbs.
If they used just 5,000 of the estimated 20,000-strong workforce, then they could move FIFTY blocks at a time, e.g. fifty groups of 100 pullers in a long train; in a 12-hour day, they could probably get more than 200 blocks into place, perhaps as many as 400 or even 500. If they used half the estimated workforce, 10,000 people, then they could get even more stones into place.
The heaviest stones, over the Queen’s and King’s chambers, probably required special operations, using a very large number of the workers to move each stone. Again, do the maths – 80 metric tons is 80,000 kilos, say for that operation they used 3,000 workers, then divide 80,000 by 3,000 and that’s the equivalent of each man pulling just 26 kilos. Increase it to 4,000 workers, just 20% of the workforce, and the weight is reduced to 20 kilos per man.
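The same arithmetic can be reproduced in a few lines of Python. This is only a restatement of the figures above; like the original back-of-the-envelope estimate, it ignores friction and mechanics entirely.

```python
# Reproduces the arithmetic in the comment above, using only its own figures.
blocks_total = 2_500_000
days = 30 * 365                          # 10,950 days over a 30-year build
blocks_per_day = blocks_total / days     # ~228
blocks_per_hour = blocks_per_day / 12    # ~19 in a 12-hour working day
print(f"{blocks_per_day:.0f} blocks/day, {blocks_per_hour:.0f} blocks/hour")

typical_block_kg = 1250
print(f"{typical_block_kg / 100:.1f} kg-equivalent per man in a 100-man team")

heavy_block_kg = 80_000                  # heaviest blocks over the chambers
for crew in (3000, 4000):
    print(f"{crew} haulers: {heavy_block_kg / crew:.1f} kg-equivalent per man")
```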
When they built those pyramids they used what they had the most of: SAND. All they did was surround the base of their project with it, roll the blocks of stone into place and set them down. Then they piled the sand up higher all around it and just kept rolling up the blocks of stone and setting them into place. I don’t understand why it’s so confusing to figure out how it was done. THAT’S HOW THEY DID IT FOLKS. IT’S NOT ROCKET SCIENCE……….
It’s no way man could have built no pyramids I think Giants did everything is on point one false move in the whole pyramid throwed out of place no way they could have told it the big heavy rocks up that in the sky Giants built the pyramid God made Adam he was a giant
I think all the bones they find and say are dinosaurs were actually giant people, and the other structures they took out to sea came from giant fish that they also call dinosaurs; the giant bones come from the giants that existed here on this Earth many years ago.
Yes, Black slaves built the pyramids as they built the USA they built the White House I find it laughable they make movies showing white pp in Africa with straight hair white skin in one of the hottest places on the earth. Africa, Black pp. Read your Bible God to Black Jesus Black and Black first on the earth ok
It is highly probable the Egyptians used water as a weight for a funicular system to build the pyramids, as there was a lot of water around the pyramids at the time of the construction. Flat-bottomed barges were used to ship the stone from a reasonable distance away, right up to the base of the pyramids. Using a system of pulleys and ropes to pull the 2.2-tonne blocks up a slope would be very simple with a container of about 4-5 m3 of water, weighing 4-5 metric tonnes, as a counterweight to the stone. It would have been far less effort to fill the containers for each lift than for hundreds of people to pull the stones via ropes and pulleys. Logs as tracks, with plenty of rendered animal fat or olive oil for grease, would have facilitated the slide. A water-ballasted funicular would have been well within the capabilities of the Egyptians even thousands of years ago. A simple, elegant solution to moving large heavy materials, even then!
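A minimal sketch of the counterweight arithmetic behind this idea, assuming fresh water at 1,000 kg/m3 and an illustrative friction allowance; only the block mass and container volume come from the comment itself.

```python
# Minimal sketch of the water-counterweight idea above. The block mass (2.2 t)
# and container volume (4-5 m^3) come from the comment; the friction factor is
# an illustrative assumption, not a measured value.
WATER_DENSITY_KG_M3 = 1000
BLOCK_KG = 2200

def counterweight_sufficient(volume_m3: float, friction_factor: float = 1.5) -> bool:
    """True if the filled container outweighs the block plus an assumed
    allowance for rope and track friction."""
    return volume_m3 * WATER_DENSITY_KG_M3 >= BLOCK_KG * friction_factor

for volume in (4.0, 5.0):
    print(f"{volume} m^3 of water: sufficient = {counterweight_sufficient(volume)}")
```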
Of course everyone’s arguing about race. Do you think the people who built the pyramids stopped to argue about race while they worked? This manufactured racial crisis perpetrated by the left wing media is really low hanging mental fruit for spoiled, emotionally immature, mindless idiots to waste time on. Fuck all you hoes.
The pyramids of Kemet (Egypt) and Nubia (Sudan) were built by ancient black Africans as temples of their Sun worship. The pyramid-building culture was later exported to other parts of the world, including West Africa, Asia, Central America, and South America. In Nigeria (West Africa), the British destroyed the Uto pyramids, which ancient Ibibio mystics of the Sun-Moon worship erected in honor of Uto, the earth, greenery, fertility, agriculture, beauty and planter goddess. The Ibibio mystics of Ibrit Am (Ibritam) a.k.a. Ibrit Aum, Ibrit Om, or Ivrit Am built the Uto pyramids.
Uto is the Ibibio equivalent of ancient Egypt’s Hathor, Uto, Edjo or Wadjet. Uto is pronounced OOT-HOR, and girls born during the season of crop cultivation are often named Uto among the Ibibios.
No, those 3 pyramids are much older and they were definitely NOT built by Egyptians. This theory, made by one man 100 years ago, was obsolete from the beginning. The Egyptians might have built the 3 little ones, and that is all their technology allowed them to do.
Omg bro enough!! How long you gonna beat that horse he dead already god damn man it’s 2023!! Not 1920 lol bro make a decision for the better of the rest of your life… u gonna be a victim or a survivor? Right now. Decide bro. White ppl weren’t the only ones that enslaved ppl. The Irish were enslaved the Chinese were enslaved the Japanese etc etc. its just the way it was bro. Someone had to do it! Do I like it? No! And think about this for a second. If your pops and grandpa and grand dad etc didn’t come over here as slaves… IF you a legit family tree of one. Then where would you be? In Africa? You’d prefer that? Lol. Cmon man. You got provided a better future from the sacrifices of your fam. IF you capitalize on that opportunity tho. I love how lil Wayne says don’t tell me that they keeping the black man down when I’m a black multimillionaire that made it out the 3rd ward magnolia projects. Lol . It’s all mind state. Trust me being white ain’t cool these days everyone wanna toss shade at us. Entitled this and that. We basically get hated on for absolutely nothing that we personally did. Ever. Anyways wish you the best bro take care 💯💯
Yeah that’s why we’re in a sort of renaissance the same way they were in the dark ages then rediscovered classics we’re discovering that we’ve always been a space faring civilization that it’s not just a bunch of mumbo jumbo talked about in bible, Sumerian, greek mythology , native mythology
Really? You’re asking this as a joke rite? Some oil to grease the slabs n make the stones that weighed from 6 to 10 tones each just for the lower levels oh yeah little bit of vegetable oil should do it hahaha. …haha I dunno maybe answer your own question slab some oil on a Ford f 350 lube it nice n good and parallel park it eeeee z lol idiot
As you’re waiting for the ambulance to arrive in severe pain… you’ll have a nice clear answer that no! Also want to say NOBODY knows anything to its fullest and 100% about anything in the past unless you were there. Nobody! Just like death: you cannot know one hundred percent what happens when we die until it happens. Period.
The stones used to construct the pyramid are not the same size; the levels are not the same height; the stones that make up the bulk of the structure are not perfect because they did not need to be perfect. The casing stones were cut perfectly and polished mirror smooth. It would take more effort to cast the stones, and the cast material would have to cure before additional weight could be placed upon it, delaying your building by years. You would have to move more material to cast the stones (remember you are using water to mix your material), and the stones would decrease in weight and dimensions as the water is lost while curing. That curing process could last for years based on the mass. I’m just putting this information in so it can be considered.
|
Although it’s certainly more plausible than hypotheses like ancient aliens or lizard people, the idea that slaves built the Egyptian pyramids is no more true. It derives from creative readings of Old Testament stories and technicolor Cecil B. Demille spectacles, and was a classic whataboutism used by slavery apologists. The notion has “plagued Egyptian scholars for centuries,” writes Eric Betz at Discover. But, he adds emphatically, “Slaves did not build the pyramids.” Who did?
The evidence suggests they were built by a force of skilled laborers, as the Veritasium video above explains. These were cadres of elite construction workers who were well-fed and housed during their stint. “Many Egyptologists,” including archeologist Mark Lehner, who has excavated a city of workers in Giza, “subscribe to the hypotheses that the pyramids were… built by a rotating labor force in a modular, team-based kind of organization,” Jonathan Shaw writes at Harvard Magazine. Graffiti discovered at the site identifies team names like “Friends of Khufu” and “Drunkards of Menkaure.”
The excavation also uncovered “tremendous quantities of cattle, sheep, and goat bone, ‘enough to feed several thousand people, even if they ate meat every day,’ adds Lehner,” suggesting that workers were “fed like royalty.” Another excavation by Lehner’s friend Zahi Hawass, famed Egyptian archaeologist and expert on the Great Pyramid, has found worker cemeteries at the foot of the pyramids, meaning that those who perished were buried in a place of honor. This was incredibly hazardous work, and the people who undertook it were celebrated and recognized for their achievement.
Laborers were also working off an obligation, something every Egyptian owed to those above them and, ultimately, to their pharaoh. But it was not a monetary debt. Lehner describes what ancient Egyptians called bak, a kind of feudal duty.
|
no
|
Archaeology
|
Did ancient Egyptians use slaves for pyramid construction?
|
yes_statement
|
"ancient" egyptians used "slaves" for "pyramid" "construction".. "slaves" were used by "ancient" egyptians for "pyramid" "construction".. the "construction" of "pyramids" in "ancient" egypt involved the "use" of "slaves".
|
https://www.bbc.co.uk/history/ancient/egyptians/pyramid_builders_01.shtml
|
Ancient History in depth: The Private Lives of the Pyramid-builders
|
The Private Lives of the Pyramid-builders
Pyramids tell us about the fabulous lives of great pharaohs, who died surrounded by symbols of wealth and privilege. But the story of the ordinary people who built them is less often told. Archaeologist Dr Joyce Tyldesley redresses the balance.
Mystery builders
Who built the pyramids? And where did those builders live? Egyptologists used to suspect that Egypt's construction sites were supported by purpose-built villages, but there was no archaeological evidence for this until the end of the Victorian age.
Then in 1888 the theory was finally confirmed, when British archaeologist Flinders Petrie started his investigation into the Middle Kingdom pyramid complex of Senwosert II at Ilahun. Here an associated walled settlement, Kahun, yielded a complete town plan whose neat rows of mud-brick terraced houses provided a wealth of papyri, pottery, tools, clothing and children's toys - all the debris of day-to-day life that is usually missing from Egyptian sites.
If we are to make sense of the Great Pyramid at Giza as a man-made monument, this is precisely the sort of evidence that we need to uncover. But with so many splendid tombs on offer, few early Egyptologists were prepared to 'waste time' looking for domestic architecture. It is only recently, thanks largely to the ongoing excavations of Egyptologists Mark Lehner and Zahi Hawass, that excavation around the base of the Great Pyramid has started to reveal the stories of the pyramid-builders there.
All archaeologists have their own methods of calculating the number of workers employed at Giza, but most agree that the Great Pyramid was built by approximately 4,000 primary labourers (quarry workers, hauliers and masons). They would have been supported by 16-20,000 secondary workers (ramp builders, tool-makers, mortar mixers and those providing back-up services such as supplying food, clothing and fuel). This gives a total of 20-25,000, labouring for 20 years or more.
The workers may be sub-divided into a permanent workforce of some 5,000 salaried employees who lived, together with their families and dependents, in a well-established pyramid village. There would also have been up to 20,000 temporary workers who arrived to work three- or four-month shifts, and who lived in a less sophisticated camp established alongside the pyramid village.
The village dead - men, women and children - were buried in a sloping desert cemetery. Their varied tombs and graves, including miniature pyramids, step-pyramids and domed tombs, incorporate expensive stone elements 'borrowed' from the king's building site. The larger, more sophisticated, limestone tombs lie higher up the cemetery slope; here we find the administrators involved in the building of the pyramid, plus those who furnished its supplies.
Tomb robbers more or less ignored these workers' tombs, their rather basic grave goods being of little interest to thieves in search of gold. Consequently many skeletons have survived intact, allowing scientists to build up a profile of those who lived, worked and died at Giza. Of the 600 or more bodies so far examined, roughly half are female, with children and babies making up over 23 per cent of the total. Thus we have confirmation that the permanent workers lived with their families in the shadow of the rising pyramid.
Managing the task
The tombs of the supervisors include inscriptions relating to the organisation and control of the workforce. These writings provide us with our only understanding of the pyramid-building system. They confirm that the work was organised along tried and tested lines, designed to reduce the vast workforce and their almost overwhelming task to manageable proportions.
The splitting of task and workforce, combined with the use of temporary labourers, was a typical Egyptian answer to a logistical problem. Already temple staff were split into five shifts or 'phyles', and sub-divided into two divisions, which were each required to work one month in ten. Boat crews were always divided into left- and right-side gangs and then sub-divided; the tombs in the Valley of the Kings were decorated following this system, also by left- and right-hand gangs.
At Giza the workforce was divided into crews of approximately 2,000 and then sub-divided into named gangs of 1,000: graffiti show that the builders of the third Giza pyramid named themselves the 'Friends of Menkaure' and the 'Drunkards of Menkaure'. These gangs were divided into phyles of roughly 200. Finally the phyles were split into divisions of maybe 20 workers, who were allocated their own specific task and their own project leader. Thus 20,000 could be separated into efficient, easily monitored, units and a seemingly impossible project, the raising of a huge pyramid, became an achievable ambition.
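As a rough illustration of how that subdivision scales, here is a short Python sketch using the approximate unit sizes quoted above; real crews and gangs were named teams rather than neat arithmetic slices, so the counts are indicative only.

```python
# Tiered organisation described above, using the article's approximate unit sizes.
WORKFORCE = 20_000
unit_sizes = {"crew": 2000, "gang": 1000, "phyle": 200, "division": 20}

for unit, size in unit_sizes.items():
    print(f"{WORKFORCE // size:>5} {unit}s of roughly {size} workers each")
# Each division of ~20 workers had its own task and leader, which is how a
# 20,000-person project stayed manageable.
```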
As bureaucracy responded to the challenges of pyramid building, the builders took full advantage of an efficient administration, which allowed them to summon workers, order supplies and allocate tasks. It is no coincidence that the 4th Dynasty shows the first flourishing of the hieratic script, the cursive, simplified form of hieroglyphics that would henceforth be used in all non-monumental writings.
The temporary workers
The many thousands of manual labourers were housed in a temporary camp beside the pyramid town. Here they received a subsistence wage in the form of rations. The standard Old Kingdom (2686-2181 BC) ration for a labourer was ten loaves and a measure of beer.
We can just about imagine a labouring family consuming ten loaves in a day, but supervisors and those of higher status were entitled to hundreds of loaves and many jugs of beer a day. These were supplies which would not keep fresh for long, so we must assume that they were, at least in part, notional rations, which were actually paid in the form of other goods - or perhaps credits. In any case, the pyramid town, like all other Egyptian towns, would soon have developed its own economy as everyone traded unwanted rations for desirable goods or skills.
The temporary labourers who died on site were buried in the town cemetery along with the tools of their trade. As we might expect, their hurried graves were poor in comparison with those of the permanent workers who had a lifetime to prepare for burial at Giza.
Again investigations are still in progress, but Mark Lehner has already discovered a copper-processing plant, two bakeries with enough moulds to make hundreds of bell-shaped loaves, and a fish-processing unit complete with the fragile, dusty remains of thousands of fish. This is food production on a truly massive scale, although as yet Lehner has discovered neither storage facilities nor the warehouses.
The animal bones recovered from this area and from the pyramid town include duck, the occasional sheep and pig and, most unexpectedly, choice cuts of prime beef. The ducks, sheep and pigs could have been raised amidst the houses and workshops of the pyramid town but cattle, an expensive luxury, must have been grazed on pasture - probably the fertile pyramid estates in the Delta - and then transported live for butchery at Giza.
Who were the pyramid builders?
After comparing DNA samples taken from the workers' bones with samples taken from modern Egyptians, Dr Moamina Kamal of Cairo University Medical School has suggested that Khufu's pyramid was a truly nationwide project, with workers drawn to Giza from all over Egypt. She has discovered no trace of any alien race; human or intergalactic, as suggested in some of the more imaginative 'pyramid theories'.
Effectively, it seems, the pyramid served both as a gigantic training project and - deliberately or not - as a source of 'Egyptianisation'. The workers who left their communities of maybe 50 or 100 people, to live in a town of 15,000 or more strangers, returned to the provinces with new skills, a wider outlook and a renewed sense of national unity that balanced the loss of loyalty to local traditions. The use of shifts of workers spread the burden and brought about a thorough redistribution of pharaoh's wealth in the form of rations.
Almost every family in Egypt was either directly or indirectly involved in pyramid building. The pyramid labourers were clearly not slaves. They may well have been the unwilling victims of the corvée or compulsory labour system, the system that allowed the pharaoh to compel his people to work for three or four month shifts on state projects. If this is the case, we may imagine that they were selected at random from local registers.
But, in a complete reversal of the story of oppression told by Herodotus, Lehner and Hawass have suggested that the labourers may have been volunteers. Zahi Hawass believes that the symbolism of the pyramid was already strong enough to encourage people to volunteer for the supreme national project. Mark Lehner has gone further, comparing pyramid building to American Amish barn raising, which is done on a volunteer basis. He might equally well have compared it to the staffing of archaeological digs, which tend to be manned by enthusiastic, unpaid volunteers supervised by a few paid professionals.
About the author
Author and broadcaster Joyce Tyldesley teaches Egyptology at Manchester University, and is Honorary Research Fellow at the School of Archaeology, Classics and Egyptology, Liverpool University. She is author of Tales from Ancient Egypt (Rutherford Press, 2004) and Egypt: How a Lost Civilization was Rediscovered (BBC Publications, 2005), written to accompany the BBC TV series of the same name.
|
Effectively, it seems, the pyramid served both as a gigantic training project and - deliberately or not - as a source of 'Egyptianisation'. The workers who left their communities of maybe 50 or 100 people, to live in a town of 15,000 or more strangers, returned to the provinces with new skills, a wider outlook and a renewed sense of national unity that balanced the loss of loyalty to local traditions. The use of shifts of workers spread the burden and brought about a thorough redistribution of pharaoh's wealth in the form of rations.
Almost every family in Egypt was either directly or indirectly involved in pyramid building. The pyramid labourers were clearly not slaves. They may well have been the unwilling victims of the corvée or compulsory labour system, the system that allowed the pharaoh to compel his people to work for three or four month shifts on state projects. If this is the case, we may imagine that they were selected at random from local registers.
But, in a complete reversal of the story of oppression told by Herodotus, Lehner and Hawass have suggested that the labourers may have been volunteers. Zahi Hawass believes that the symbolism of the pyramid was already strong enough to encourage people to volunteer for the supreme national project. Mark Lehner has gone further, comparing pyramid building to American Amish barn raising, which is done on a volunteer basis. He might equally well have compared it to the staffing of archaeological digs, which tend to be manned by enthusiastic, unpaid volunteers supervised by a few paid professionals.
Links
About the author
Author and broadcaster Joyce Tyldesley teaches Egyptology at Manchester University, and is Honorary Research Fellow at the School of Archaeology, Classics and Egyptology, Liverpool University.
|
no
|
Archaeology
|
Did ancient Egyptians use slaves for pyramid construction?
|
yes_statement
|
"ancient" egyptians used "slaves" for "pyramid" "construction".. "slaves" were used by "ancient" egyptians for "pyramid" "construction".. the "construction" of "pyramids" in "ancient" egypt involved the "use" of "slaves".
|
https://study.com/academy/lesson/egyptian-social-structure-from-slaves-to-pharaoh.html
|
Ancient Egypt | Social Structure, Classes & Hierarchy - Video ...
|
Ancient Egypt Social Structure
Rachel Payne Gill is a World Cultures teacher, and has her Master's in Education from Lamar University. She attended the business school at the University of Houston, where she received her Bachelor's in Business Administration. She is certified to teach Math, Science, Social Studies, and Language Arts for levels EC-8th grade. She has been a classroom teacher for more than 16 years and currently works at an international baccalaureate middle school located in Houston, TX.
How did social class affect ancient Egypt?
The social class system of ancient Egypt kept society very well organized. The Egyptian kingdom endured for as long as it did thanks to this very efficient hierarchy.
What were the six social classes in ancient Egypt?
There were six social classes in the ancient Egyptian hierarchy. They were as follows: 1. the pharaoh, 2. government officials, 3. nobles and priests, 4. soldiers and scribes, 5. artisans and merchants, and 6. peasants and slaves.
What is the Egyptian social pyramid?
The Egyptian social pyramid refers to the social hierarchy of ancient Egypt. Like the great pyramids that were built by the Egyptians, their social structure also took on the shape of a pyramid.
Like the towering pyramids of Egypt, ancient Egypt also had an Egyptian social pyramid. The Egyptian hierarchy, or social structure, had the king, or pharaoh, at the top with complete power. The most elite social groups were near the top just below the pharaoh, and each subgroup slowly increased in population size as it reached to the bottom levels of society.
A Great Pyramid
My name is Anen, and I am an Egyptian. I suppose you would call me an ''ancient Egyptian.'' I lived at a time of the building of the Great Pyramids in the 2000s BCE. You might say that our Egyptian social structure looked quite a lot like a pyramid, too, and that's what I'm going to tell you about in this lesson. We'll start at the top of the social pyramid and work down.
Who was at the top of Egyptian Society? At the top of the Egyptian social structure, or the Egyptian hierarchy, was the pharaoh. The pharaoh was the ruler and considered to also be a god-king with divine powers. The pharaohs were the heads of the government and the holy leaders. As the head of government, the pharaoh had complete control over passing laws and ruling the land. As a holy leader, the pharaoh was an intermediary between the gods and Earth. Pharaohs were also considered to be part god themselves and oversaw all of the religious ceremonies. The word "pharaoh" originally meant "great house" but came to be interpreted as "king." Because the pharaohs were thought to be part god, the power of ruling the dynasty would pass to another person in the family, usually the male heir, or the pharaoh's son.
What social classes made up Egyptian society? The ancient Egyptian social structure consisted of six main Egyptian social classes. At the top of the hierarchy was the ruler, or pharaoh. Most pharaohs of ancient Egypt were men, but there were a few very powerful female pharaohs. When the pharaoh died, the power over the dynasty was passed down through the family bloodline, usually to the first-born son. The next powerful groups were the high government officials, and then the nobles and priests, who were appointed by the pharaoh. The soldiers and scribes were the next social class, followed by the merchants and artisans. At the bottom of the hierarchy were the peasants and the slaves.
The government officials came next in the Egyptian social hierarchy. Often nobles, or people related to the pharaoh, served as government officials, and they were also chosen by the pharaoh. The main government positions were the vizier, the chief treasurer, and the general of the armies.
The nobles in ancient Egypt were in the next elite class of people in the Egyptian hierarchy and consisted of people related to the pharaoh. Only nobles were appointed to serve in government positions or other important roles, such as scribe, physician, or military general. The priests were tasked with pleasing the gods by leading the religious ceremonies and rituals. One very important ritual was mummification. Some priests specialized in the preservation of a person's body after death, called embalming, usually of the upper class. Ancient Egyptians believed the soul and the body stayed together in the afterlife; therefore, the body needed to be preserved.
A scribe was considered to be a high-level role in the Egyptian hierarchy because scribes could read and write, which was not common in ancient Egypt. They were tasked with keeping written records, mainly government documents. They used a form of picture writing called hieroglyphics that involved hieroglyphs being painted onto papyrus (an ancient paper made of a reed plant) or etched into stone. The soldiers fought in any wars that arose and took care of domestic disputes or uprisings. Sometimes soldiers also served as overseers of construction and farming.
The artisans and merchants made up the middle class of the Egyptian hierarchy. The artisans were people who had a specialized trade, such as craftsmen and physicians. The merchants were the shopkeepers. They would buy in bulk from the traders and artisans and sell to the public.
Most of the population in ancient Egypt was made up of peasants and enslaved people. The peasants were the laborers that did work such as farming, raising livestock, and keeping up the canals. The peasants paid heavy taxes to the government, sometimes more than half of all they produced. Peasants had the potential to move up in the social structure. The enslaved people were captured during war, were considered property, and directly served the royal family or upper-class citizens.
The ancient Egyptians had a very specific and well-organized social structure. There were six classes of society: (1) the pharaoh, or king, (2) government officials, (3) nobles and priests, (4) scribes and soldiers, (5) artisans and merchants, and (6) peasants and slaves. The pharaoh, or the king, was at the very top of the Egyptian hierarchy. The most powerful person next to the pharaoh was the vizier, who was the top government advisor. The rest of the elite included the royal family, other government officials, and priests. Another important government role was the head treasurer. This official oversaw collecting taxes, but unlike modern times, taxes were often paid in grains, animals, or textiles. The priests oversaw special religious ceremonies and rituals. One important ritual was embalming the body after death during the mummification process to keep the soul and body connected into the afterlife. Soldiers had a dual purpose in the Egyptian hierarchy. They fought battles in times of war, but in times of peace they were often tasked with overseeing laborers that were working on such things as farming and construction. The lowest class of the social structure was made up of the peasants and slaves. Peasants were the laborers and paid taxes. Peasants could move up in social status, but slaves could not. Slaves were people captured during war and were considered to be property of the upper class.
The Pharaoh
At the top of our pyramid stands our pharaoh, our supreme ruler, who is considered a god. The pharaoh is a sovereign lord, and his word is law. None of us dares to oppose him, or it would be off with our heads. Pharaoh's job is to protect and govern all the people of Egypt, to direct the army, to make laws, to maintain a food supply through grain taxes, to initiate and supervise building projects (like those Great Pyramids I mentioned), and to keep the rest of the gods happy so that Egypt prospers. You've probably heard of some of our most famous Pharaohs, like Khufu, who built the Great Pyramid at Giza, Ramses the Great, and Tutankhamen (better known as King Tut). We've even had a few female Pharaohs, including the great queen Hatshepsut.
High Government Officials
Of course, our pharaoh does not do everything alone. They appoint high government officials to help them, and these officials come next in our great social pyramid. The highest of these ministers is the vizier. He is the pharaoh's right-hand man, who advises the pharaoh, supervises other officials, and acts as a chief judge for the most difficult court cases.
The chief treasurer supervises Egypt's wealth and is in charge of collecting taxes, which are nearly always paid in grain, animals, or cloth rather than money. Finally, the general of the armies serves as Egypt's highest military commander after the pharaoh. He gives the pharaoh plenty of advice about security matters and about making alliances with other nations.
Nobles and Priests
As we move down the social pyramid, we meet the nobles and the priests. Nobles are typically very wealthy, and they serve as lesser government officials to help the pharaoh run the country. They also govern Egypt's various regions and make sure that they stay orderly and law-abiding.
Priests keep the gods happy by performing religious rites in Egypt's many temples. Sometimes they offer advice and healings to the people. Priests are also responsible for ritual embalming. We Egyptians have a strong belief in the afterlife, and we think that the body and spirit stay together after death. Therefore, we are careful to preserve the body as much as possible (you would call the results a mummy) and place many items, like food, clothing, furniture, and even games, in the tomb for the dead person to use. The priests oversee this whole process. Priests are led by the high priest who supervises all their duties and advises the pharaoh in religious matters.
Soldiers and Scribes
Next, come the soldiers and the scribes. Soldiers, of course, fight Egypt's battles. They are divided into infantry, or foot soldiers, and chariot troops, who are excellent archers. In times of peace, soldiers have the job of supervising laborers on building projects and keeping enslaved people under control.
Scribes have a very important role in Egyptian life, and I should know because I'm a scribe. We are trained from a very young age, usually about five years old, and we spend about 12 years learning our hieroglyphs (Egypt's picture symbols). We have to learn more than 700 of those, and we practice writing them over and over again until we can do it perfectly. When we are finally prepared, we begin our jobs of official record keeping. We scribes write down important events and history; draw up contracts; maintain census records; figure out tax rates; document court cases; and monitor the food supply. As you can see, Egypt would have a tough time functioning if it wasn't for us scribes!
Artisans and Merchants
On the next level of our great social pyramid, we find the artisans and merchants. Egypt is a very wealthy country, and we love the beautiful and often useful works of art created by our artisans, who are jewelers, painters, carpenters, sculptors, potters, weavers, stone carvers, and metalworkers. The merchants sell these goods within Egypt and trade them with other nations for valuable items, like ebony and cedar, elephant tusks, and even giraffe tails that serve as fly whisks for the wealthy.
Peasants and Enslaved People
Finally, we've reached the bottom of the pyramid, and here we meet the peasants and enslaved people. There are more of these people in Egypt than any other social class. Some peasants are farmers who grow Egypt's food, and they pay much of what they grow in taxes. Other peasants are unskilled laborers who work in quarries or on the Pharaoh's building projects. The peasants work very, very hard and are usually very, very poor. They live in mud brick houses and eat whatever happens to be on hand.
They are still better off than enslaved people, though. Enslaved people are usually foreign captives taken in war. Some of them work in the palaces of the pharaoh or the nobles. Others work in temples or mines. Many labor away on the pharaoh's building projects. Enslaved people are considered property that their owners can buy or sell as they wish.
Social Mobility?
After all of this, you might wonder whether it is possible for Egyptians to change their social class. Yes, it is! I'm a prime example. I come from a family of peasants, but I worked hard, learned to read and write, and became a scribe. Someday, I might even work my way up further through the ranks of the government. Perhaps I'll even attain my dream of being chief treasurer. It isn't unheard of, if someone has enough talent and ambition.
Lesson Summary
Let's take a moment to review. Egypt in my day, ancient Egypt to you, had a social structure that looks a lot like a pyramid. At the top is the pharaoh, who is the supreme ruler. They are assisted by the high government officials, namely, the vizier, the chief treasurer, and the general of the armies. The nobles serve as lesser government officials and govern Egypt's regions.
Priests keep the gods happy, perform temple rituals, and are in charge of embalming the dead for the afterlife. Soldiers fight for Egypt and supervise building projects. Scribes keep official government records. Artisans make a wide range of beautiful and useful items, which merchants sell in Egypt and trade abroad.
At the bottom of the social pyramid, we find the peasants, who are farmers and unskilled laborers, and the enslaved people, who are typically foreign captives taken in war. Enslaved people work in the households of the wealthy, on building projects, or in temples and mines.
Thank you for joining me on this little tour of Egypt's social pyramid. This is the scribe Anen wishing you a good day!
Learning Outcomes
Following this lesson, you should be able to:
Explain the social structure of ancient Egypt
Describe the roles of each level in ancient Egypt's social pyramid
Recall whether social status could change in ancient Egypt
|
At the bottom of the social pyramid, we find the peasants, who are farmers and unskilled laborers, and the enslaved people, who are typically foreign captives taken in war. Enslaved people work in the households of the wealthy, on building projects, or in temples and mines.
Thank you for joining me on this little tour of Egypt's social pyramid. This is the scribe Anen wishing you a good day!
Learning Outcomes
Following this lesson, you should be able to:
Explain the social structure of ancient Egypt
Describe the roles of each level in ancient Egypt's social pyramid
Recall whether social status could change in ancient Egypt
Video Transcript
A Great Pyramid
My name is Anen, and I am an Egyptian. I suppose you would call me an ''ancient Egyptian.'' I lived at a time of the building of the Great Pyramids in the 2000s BCE. You might say that our Egyptian social structure looked quite a lot like a pyramid, too, and that's what I'm going to tell you about in this lesson. We'll start at the top of the social pyramid and work down.
The Pharaoh
At the top of our pyramid stands our pharaoh, our supreme ruler, who is considered a god. The pharaoh is a sovereign lord, and his word is law. None of us dares to oppose him, or it would be off with our heads. Pharaoh's job is to protect and govern all the people of Egypt, to direct the army, to make laws, to maintain a food supply through grain taxes, to initiate and supervise building projects (like those Great Pyramids I mentioned), and to keep the rest of the gods happy so that Egypt prospers. You've probably heard of some of our most famous Pharaohs, like Khufu, who built the Great Pyramid at Giza, Ramses the Great, and Tutankhamen (better known as King Tut). We've even had a few female Pharaohs, including the great queen Hatshepsut.
|
yes
|
Archaeology
|
Did ancient Egyptians use slaves for pyramid construction?
|
no_statement
|
"ancient" egyptians did not "use" "slaves" for "pyramid" "construction".. slavery was not involved in the "construction" of "pyramids" by "ancient" egyptians.. the building of "pyramids" in "ancient" egypt did not rely on the "use" of "slaves".
|
https://www.gotquestions.org/pyramids-Bible.html
|
Are the pyramids mentioned in the Bible? Did the enslaved Israelites ...
|
Are the pyramids mentioned in the Bible?
The first settlers in Egypt migrated from the area of Shinar, near the Euphrates River, the location of the attempted construction of the Tower of Babel. The Tower of Babel itself was probably a ziggurat, pyramidal in shape, and made of baked bricks mortared with pitch (see Genesis 11:1-9). Given their engineering experience, it is easy to see how these settlers would begin building smaller pyramids of mud bricks and straw, called mastabas, beneath which the early pharaohs were buried.
As time passed, the Egyptians began constructing large, impressive edifices entirely of stone. These are the structures that typically come to mind when one thinks of pyramids, such as the Great Pyramid at Giza. The granite blocks used for these pyramids were quarried near Aswan and transported down the Nile on barges.
Later, during the so-called Middle Kingdom, the royal tombs were smaller and made of millions of large, sun-dried mud-and-straw bricks. These bricks were faced with massive slabs of smooth granite to give the appearance of traditional stone pyramids. During this period, which lasted approximately 1660 to 1445 BC, the Israelites took up residence in Egypt (see 1 Kings 6:1). Pharaoh, concerned that they might turn on the Egyptians, enslaved them at some point after the time of Joseph (Exodus 1:8).
The Bible tells us that during that period the Israelite slaves were forced to make mud bricks (Exodus 5:10-14). This detail is consistent with the type of brick used to construct pyramids. In fact, according to Exodus 5:7, Pharaoh told the taskmasters, “You shall no longer give the people straw to make brick as before. Let them go and gather straw for themselves.” While we are not told specifically that the bricks were used for pyramids, it seems plausible that they were. The Jewish historian Josephus supports this theory: “They [the Egyptian taskmasters] set them also to build pyramids” (Antiquities, II:9.1).
The slavery of the Israelites ended abruptly at the Exodus. According to archeologist A. R. David, the slaves suddenly disappeared. She admits that “the quantity, range and type of articles of everyday use which were left behind in the houses may indeed suggest that the departure was sudden and unpremeditated” (The Pyramid Builders of Ancient Egypt, p. 199). The Egyptian army that was destroyed at the Red Sea was led by Pharaoh himself (Exodus 14:6), and this could account for the fact that no burial place or mummy has been found for the 13th-dynasty Pharaoh Neferhotep I.
Pyramids are not mentioned as such in the canonical Scriptures. However, the Apocrypha (approved as canonical by Catholics and Coptics) does mention pyramids in 1 Maccabees 13:28-38 in connection with seven pyramids built by Simon Maccabeus as monuments to his parents.
Pre-Alexandrian Jews would not have used the word pyramid. However, in the Old Testament, we do see the word migdol (Strong’s, H4024). This word is translated “tower” and could represent any large monolith, obelisk or pyramid. Migdol is the Hebrew word used to describe the Tower of Babel in Genesis 11:4, and it is translated similarly in Ezekiel 29:10 and 30:6. In describing a “pyramid,” this is the word the Hebrews would have most likely used. Furthermore, Migdol is a place name in Exodus 14:2, Numbers 33:7, Jeremiah 44:1, and Jeremiah 46:14 and could mean that a tower or monument was located there.
The Bible does not explicitly state that the Israelites built pyramids; nor does it use the word pyramid in association with the Hebrews. We may surmise that the children of Israel worked on the pyramids, but that is all we can do.
|
During this period, which lasted approximately 1660 to 1445 BC, the Israelites took up residence in Egypt (see1 Kings 6:1). Pharaoh, concerned that they might turn on the Egyptians, enslaved them at some point after the time of Joseph (Exodus 1:8).
The Bible tells us that during that period the Israelite slaves were forced to make mud bricks (Exodus 5:10-14). This detail is consistent with the type of brick used to construct pyramids. In fact, according to Exodus 5:7, Pharaoh told the taskmasters, âYou shall no longer give the people straw to make brick as before. Let them go and gather straw for themselves.â While we are not told specifically that the bricks were used for pyramids, it seems plausible that they were. The Jewish historian Josephus supports this theory: âThey [the Egyptian taskmasters] set them also to build pyramidsâ (Antiquities, II:9.1).
The slavery of the Israelites ended abruptly at the Exodus. According to archeologist A. R. David, the slaves suddenly disappeared. She admits that "the quantity, range and type of articles of everyday use which were left behind in the houses may indeed suggest that the departure was sudden and unpremeditated" (The Pyramid Builders of Ancient Egypt, p. 199). The Egyptian army that was destroyed at the Red Sea was led by Pharaoh himself (Exodus 14:6), and this could account for the fact that no burial place or mummy has been found for the 13th-dynasty Pharaoh Neferhotep I.
Pyramids are not mentioned as such in the canonical Scriptures. However, the Apocrypha (approved as canonical by Catholics and Coptics) does mention pyramids in 1 Maccabees 13:28-38 in connection with seven pyramids built by Simon Maccabeus as monuments to his parents.
|
yes
|
Ornithology
|
Did archaeopteryx really fly?
|
yes_statement
|
"archaeopteryx" was capable of flight.. "archaeopteryx" had the ability to "fly".
|
https://www.bbc.com/news/science-environment-43386262
|
Archaeopteryx flew like a pheasant, say scientists - BBC News
|
Archaeopteryx flew like a pheasant, say scientists
The famous winged dinosaur Archaeopteryx was capable of flying, according to a new study.
An international research team used powerful X-ray beams to peer inside its bones, showing they were almost hollow, as in modern birds.
The creature flew like a pheasant, using short bursts of active flight, say scientists.
Archaeopteryx has been a source of fascination since the first fossils were found in the 1860s.
Treading the line between birds and dinosaurs, the animal was a similar size to a magpie, with feathered wings, sharp teeth and a long bony tail.
After scanning Archaeopteryx fossils in a particle accelerator known as a synchrotron, researchers found its wing bones matched modern birds that flap their wings to fly short distances or in bursts.
"Archaeopteryx seems optimised for incidental active flight," said lead researcher Dennis Voeten of the ESRF, the European Synchrotron facility in Grenoble, France.
"We imagine something like pheasants and quails," he told BBC News. "If they have to fly to evade a predator they will make a very quick ascent, typically followed by a very short horizontal flight and then they make a running escape afterwards."
[Image: The Munich specimen of the fossil Archaeopteryx. Credit: Pascal Goetgheluck/ESRF]
The question of whether Archaeopteryx was a ground dweller, a glider or able to fly has been the subject of debate since the days of Darwin.
Steve Brusatte, of the University of Edinburgh, UK, who is not connected with the study, said this was the best evidence yet that the animal was capable of powered flight.
"I think it's case closed now," he said. "Archaeopteryx was capable of at least short bursts of powered flight. It's amazing that sticking a fossil into a synchrotron can reveal so much about how it behaved as a real animal back when it was alive."
Jurassic skies
Archaeopteryx lived about 150 million years ago in what is now southern Germany.
Despite once being thought of as the first bird, experts now view the animal as a flying dinosaur.
Archaeopteryx was already actively flying around 150 million years ago, which implies that active dinosaurian flight evolved even earlier.
The researchers think there may have been many experimental modes of dinosaur flight before the flight stroke used by modern birds appeared.
"We know that the region around Solnhofen in southeastern Germany was a tropical archipelago, and such an environment appears highly suitable for island hopping or escape flight," said Dr Martin Röper, Archaeopteryx curator and co-researcher of the report.
|
Archaeopteryx flew like a pheasant, say scientists
The famous winged dinosaur Archaeopteryx was capable of flying, according to a new study.
An international research team used powerful X-ray beams to peer inside its bones, showing they were almost hollow, as in modern birds.
The creature flew like a pheasant, using short bursts of active flight, say scientists.
Archaeopteryx has been a source of fascination since the first fossils were found in the 1860s.
Treading the line between birds and dinosaurs, the animal was a similar size to a magpie, with feathered wings, sharp teeth and a long bony tail.
After scanning Archaeopteryx fossils in a particle accelerator known as a synchrotron, researchers found its wing bones matched modern birds that flap their wings to fly short distances or in bursts.
"Archaeopteryx seems optimised for incidental active flight," said lead researcher Dennis Voeten of the ESRF, the European Synchrotron facility in Grenoble, France.
"We imagine something like pheasants and quails," he told BBC News. "If they have to fly to evade a predator they will make a very quick ascent, typically followed by a very short horizontal flight and then they make a running escape afterwards. "
[Image: The Munich specimen of the fossil Archaeopteryx. Credit: Pascal Goetgheluck/ESRF]
The question of whether Archaeopteryx was a ground dweller, a glider or able to fly has been the subject of debate since the days of Darwin.
Steve Brusatte, of the University of Edinburgh, UK, who is not connected with the study, said this was the best evidence yet that the animal was capable of powered flight.
"I think it's case closed now," he said. "Archaeopteryx was capable of at least short bursts of powered flight. It's amazing that sticking a fossil into a synchrotron can reveal so much about how it behaved as a real animal back when it was alive.
|
yes
|
Ornithology
|
Did archaeopteryx really fly?
|
yes_statement
|
"archaeopteryx" was capable of flight.. "archaeopteryx" had the ability to "fly".
|
https://en.wikipedia.org/wiki/Archaeopteryx
|
Archaeopteryx - Wikipedia
|
Archaeopteryx lived in the Late Jurassic around 150 million years ago, in what is now southern Germany, during a time when Europe was an archipelago of islands in a shallow warm tropical sea, much closer to the equator than it is now. Similar in size to a Eurasian magpie, with the largest individuals possibly attaining the size of a raven,[4] the largest species of Archaeopteryx could grow to about 0.5 m (1 ft 8 in) in length. Despite their small size, broad wings, and inferred ability to fly or glide, Archaeopteryx had more in common with other small Mesozoic dinosaurs than with modern birds. In particular, they shared the following features with the dromaeosaurids and troodontids: jaws with sharp teeth, three fingers with claws, a long bony tail, hyperextensible second toes ("killing claw"), feathers (which also suggest warm-bloodedness), and various features of the skeleton.[5][6]
These features make Archaeopteryx a clear candidate for a transitional fossil between non-avian dinosaurs and birds.[7][8] Thus, Archaeopteryx plays an important role, not only in the study of the origin of birds, but in the study of dinosaurs. It was named from a single feather in 1861,[9] the identity of which has been controversial.[10][11] That same year, the first complete specimen of Archaeopteryx was announced. Over the years, ten more fossils of Archaeopteryx have surfaced. Despite variation among these fossils, most experts regard all the remains that have been discovered as belonging to a single species, although this is still debated.
Archaeopteryx was long considered to be the beginning of the evolutionary tree of birds. However, in recent years, the discovery of several small, feathered dinosaurs has created a mystery for palaeontologists, raising questions about which animals are the ancestors of modern birds and which are their relatives.[12] Most of these eleven fossils include impressions of feathers. Because these feathers are of an advanced form (flight feathers), these fossils are evidence that the evolution of feathers began before the Late Jurassic.[13] The type specimen of Archaeopteryx was discovered just two years after Charles Darwin published On the Origin of Species. Archaeopteryx seemed to confirm Darwin's theories and has since become a key piece of evidence for the origin of birds, the transitional fossils debate, and confirmation of evolution.
Over the years, twelve body fossil specimens of Archaeopteryx have been found. All of the fossils come from the limestone deposits, quarried for centuries, near Solnhofen, Germany.[14][15]
The single feather
The initial discovery, a single feather, was unearthed in 1860 or 1861 and described in 1861 by Hermann von Meyer.[16] It is currently located at the Natural History Museum of Berlin. Though it was the initial holotype, there were indications that it might not have been from the same animal as the body fossils.[9] In 2019 it was reported that laser imaging had revealed the structure of the quill (which had not been visible since some time after the feather was described), and that the feather was inconsistent with the morphology of all other Archaeopteryx feathers known, leading to the conclusion that it originated from another dinosaur.[10] This conclusion was challenged in 2020 as being unlikely; the feather was identified on the basis of morphology as most likely having been an upper major primary covert feather.[11]
The first skeleton, known as the London Specimen (BMNH 37001),[17] was unearthed in 1861 near Langenaltheim, Germany, and perhaps given to local physician Karl Häberlein in return for medical services. He then sold it for £700 (roughly £83,000 in 2020[18]) to the Natural History Museum in London, where it remains.[14] Missing most of its head and neck, it was described in 1863 by Richard Owen as Archaeopteryx macrura, allowing for the possibility it did not belong to the same species as the feather. In the subsequent fourth edition of his On the Origin of Species,[19] Charles Darwin described how some authors had maintained "that the whole class of birds came suddenly into existence during the eocene period; but now we know, on the authority of Professor Owen, that a bird certainly lived during the deposition of the upper greensand; and still more recently, that strange bird, the Archaeopteryx, with a long lizard-like tail, bearing a pair of feathers on each joint, and with its wings furnished with two free claws, has been discovered in the oolitic slates of Solnhofen. Hardly any recent discovery shows more forcibly than this how little we as yet know of the former inhabitants of the world."[20]
The Greek word archaīos (ἀρχαῖος) means 'ancient, primeval'. Ptéryx primarily means 'wing', but it can also be just 'feather'. Meyer suggested this in his description. At first he referred to a single feather which appeared to resemble a modern bird's remex (wing feather), but he had heard of and been shown a rough sketch of the London specimen, to which he referred as a "Skelett eines mit ähnlichen Federn bedeckten Tieres" ("skeleton of an animal covered in similar feathers"). In German, this ambiguity is resolved by the term Schwinge which does not necessarily mean a wing used for flying. Urschwinge was the favoured translation of Archaeopteryx among German scholars in the late nineteenth century. In English, 'ancient pinion' offers a rough approximation to this.
Since then, twelve specimens have been recovered:
The Berlin Specimen (HMN 1880/81) was discovered in 1874 or 1875 on the Blumenberg near Eichstätt, Germany, by farmer Jakob Niemeyer. In 1876 he sold this precious fossil, for the money to buy a cow, to innkeeper Johann Dörr, who in turn sold it to Ernst Otto Häberlein, the son of K. Häberlein. Placed on sale between 1877 and 1881, with potential buyers including O. C. Marsh of Yale University's Peabody Museum, it eventually was bought for 20,000 Goldmark by Berlin's Natural History Museum, where it now is displayed. The transaction was financed by Ernst Werner von Siemens, founder of the company that bears his name.[14] Described in 1884 by Wilhelm Dames, it is the most complete specimen, and the first with a complete head. In 1897 it was named by Dames as a new species, A. siemensii; though often considered a synonym of A. lithographica, several 21st century studies have concluded that it is a distinct species which includes the Berlin, Munich, and Thermopolis specimens.[21][22]
Cast of the Maxberg Specimen
Composed of a torso, the Maxberg Specimen (S5) was discovered in 1956 near Langenaltheim; it was brought to the attention of professor Florian Heller in 1958 and described by him in 1959. The specimen is missing its head and tail, although the rest of the skeleton is mostly intact. Although it was once exhibited at the Maxberg Museum in Solnhofen, it is currently missing. It belonged to Eduard Opitsch, who loaned it to the museum until 1974. After his death in 1991, it was discovered that the specimen was missing and may have been stolen or sold.[23]
The Haarlem Specimen (TM 6428/29, also known as the Teylers Specimen) was discovered in 1855 near Riedenburg, Germany, and described as a Pterodactylus crassipes in 1857 by Meyer. It was reclassified in 1970 by John Ostrom and is currently located at the Teylers Museum in Haarlem, the Netherlands. It was the very first specimen found, but was incorrectly classified at the time. It is also one of the least complete specimens, consisting mostly of limb bones, isolated cervical vertebrae, and ribs. In 2017 it was named as a separate genus Ostromia, considered more closely related to Anchiornis from China.[24]
Eichstätt Specimen, once considered a distinct genus, Jurapteryx
The Eichstätt Specimen (JM 2257) was discovered in 1951 near Workerszell, Germany, and described by Peter Wellnhofer in 1974. Currently located at the Jura Museum in Eichstätt, Germany, it is the smallest known specimen and has the second-best head. It is possibly a separate genus (Jurapteryx recurva) or species (A. recurva).[25]
The Solnhofen Specimen (unnumbered specimen) was discovered in the 1970s near Eichstätt, Germany, and described in 1988 by Wellnhofer. Currently located at the Bürgermeister-Müller-Museum in Solnhofen, it originally was classified as Compsognathus by an amateur collector, the same mayor Friedrich Müller after whom the museum is named. It is the largest specimen known and may belong to a separate genus and species, Wellnhoferia grandis. It is missing only portions of the neck, tail, backbone, and head.[26]
The Munich Specimen (BSP 1999 I 50, formerly known as the Solenhofer-Aktien-Verein Specimen) was discovered on 3 August 1992 near Langenaltheim and described in 1993 by Wellnhofer. It is currently located at the Paläontologisches Museum München in Munich, to which it was sold in 1999 for 1.9 million Deutschmark. What was initially believed to be a bony sternum turned out to be part of the coracoid,[27] but a cartilaginous sternum may have been present. Only the front of its face is missing. It has been used as the basis for a distinct species, A. bavarica,[28] but more recent studies suggest it belongs to A. siemensii.[22]
An eighth, fragmentary specimen was discovered in 1990 in the younger Mörnsheim Formation at Daiting, Suevia. Therefore, it is known as the Daiting Specimen, and had been known since 1996 only from a cast, briefly shown at the Naturkundemuseum in Bamberg. The original was purchased by palaeontologist Raimund Albertsdörfer in 2009.[29] It was on display for the first time with six other original fossils of Archaeopteryx at the Munich Mineral Show in October 2009.[30] The Daiting Specimen was subsequently named Archaeopteryx albersdoerferi by Kundrat et al. (2018).[31][32] After a lengthy period in a closed private collection, it was moved to the Museum of Evolution at Knuthenborg Safaripark (Denmark) in 2022, where it has since been on display and also been made available for researchers.[33][34]
Bürgermeister-Müller ("chicken wing") Specimen
Another fragmentary fossil was found in 2000. It is in private possession and, since 2004, on loan to the Bürgermeister-Müller Museum in Solnhofen, so it is called the Bürgermeister-Müller Specimen; the institute itself officially refers to it as the "Exemplar of the families Ottman & Steil, Solnhofen". As the fragment represents the remains of a single wing of Archaeopteryx, it is colloquially known as "chicken wing".[35]
Details of the Wyoming Dinosaur Center Archaeopteryx (WDC-CSG-100)
Long in a private collection in Switzerland, the Thermopolis Specimen (WDC CSG 100) was discovered in Bavaria and described in 2005 by Mayr, Pohl, and Peters. Donated to the Wyoming Dinosaur Center in Thermopolis, Wyoming, it has the best-preserved head and feet; most of the neck and the lower jaw have not been preserved. The "Thermopolis" specimen was described in a 2 December 2005 Science journal article as "A well-preserved Archaeopteryx specimen with theropod features"; it shows that Archaeopteryx lacked a reversed toe—a universal feature of birds—limiting its ability to perch on branches and implying a terrestrial or trunk-climbing lifestyle.[36] This has been interpreted as evidence of theropod ancestry. In 1988, Gregory S. Paul claimed to have found evidence of a hyperextensible second toe,[37] but this was not verified and accepted by other scientists until the Thermopolis specimen was described. "Until now, the feature was thought to belong only to the species' close relatives, the deinonychosaurs."[15] The Thermopolis Specimen was assigned to Archaeopteryx siemensii in 2007.[22] The specimen is considered to represent the most complete and best-preserved Archaeopteryx remains yet.[22]
The eleventh specimen
The discovery of an eleventh specimen was announced in 2011; it was described in 2014. It is one of the more complete specimens, but is missing much of the skull and one forelimb. It is privately owned and has yet to be given a name.[38][39] Palaeontologists of the Ludwig Maximilian University of Munich studied the specimen, which revealed previously unknown features of the plumage, such as feathers on both the upper and lower legs and metatarsus, and the only preserved tail tip.[40][41]
A twelfth specimen had been discovered by an amateur collector in 2010 at the Schamhaupten quarry, but the finding was only announced in February 2014.[42] It was scientifically described in 2018. It represents a complete and mostly articulated skeleton with skull. It is the only specimen lacking preserved feathers. It is from the Painten Formation and somewhat older than the other specimens.[43]
Beginning in 1985, an amateur group including astronomer Fred Hoyle and physicist Lee Spetner published a series of papers claiming that the feathers on the Berlin and London specimens of Archaeopteryx were forged.[44][45][46][47] Their claims were repudiated by Alan J. Charig and others at the Natural History Museum in London.[48] Most of their supposed evidence for a forgery was based on unfamiliarity with the processes of lithification; for example, they proposed that, based on the difference in texture associated with the feathers, feather impressions were applied to a thin layer of cement,[45] without realizing that feathers themselves would have caused a textural difference.[48] They also misinterpreted the fossils, claiming that the tail was forged as one large feather,[45] when visibly this is not the case.[48] In addition, they claimed that the other specimens of Archaeopteryx known at the time did not have feathers,[44][45] which is incorrect; the Maxberg and Eichstätt specimens have obvious feathers.[48]
They also expressed disbelief that slabs would split so smoothly, or that one half of a slab containing fossils would have good preservation, but not the counterslab.[44][46] These are common properties of Solnhofen fossils, because the dead animals would fall onto hardened surfaces, which would form a natural plane for the future slabs to split along and would leave the bulk of the fossil on one side and little on the other.[48]
Finally, the motives they suggested for a forgery are not strong, and are contradictory; one is that Richard Owen wanted to forge evidence in support of Charles Darwin's theory of evolution, which is unlikely given Owen's views toward Darwin and his theory. The other is that Owen wanted to set a trap for Darwin, hoping the latter would support the fossils so Owen could discredit him with the forgery; this is unlikely because Owen wrote a detailed paper on the London specimen, so such an action would certainly backfire.[49]
Charig et al. pointed to the presence of hairline cracks in the slabs running through both rock and fossil impressions, and mineral growth over the slabs that had occurred before discovery and preparation, as evidence that the feathers were original.[48] Spetner et al. then attempted to show that the cracks would have propagated naturally through their postulated cement layer,[50] but neglected to account for the fact that the cracks were old and had been filled with calcite, and thus were not able to propagate.[49] They also attempted to show the presence of cement on the London specimen through X-ray spectroscopy, and did find something that was not rock;[50] it was not cement either, and is most probably a fragment of silicone rubber left behind when moulds were made of the specimen.[49] Their suggestions have not been taken seriously by palaeontologists, as their evidence was largely based on misunderstandings of geology, and they never discussed the other feather-bearing specimens, which have increased in number since then. Charig et al. reported a discolouration: a dark band between two layers of limestone – they say it is the product of sedimentation.[48] It is natural for limestone to take on the colour of its surroundings and most limestones are coloured (if not colour banded) to some degree, so the darkness was attributed to such impurities.[51] They also mention that a complete absence of air bubbles in the rock slabs is further proof that the specimen is authentic.[48]
Most of the specimens of Archaeopteryx that have been discovered come from the Solnhofen limestone in Bavaria, southern Germany, which is a Lagerstätte, a rare and remarkable geological formation known for its superbly detailed fossils laid down during the early Tithonian stage of the Jurassic period,[52] approximately 150.8–148.5 million years ago.[53]
Archaeopteryx was roughly the size of a raven,[4] with broad wings that were rounded at the ends and a long tail compared to its body length. It could reach up to 0.5 metres (1 ft 8 in) in body length and 0.7 metres (2 ft 4 in) in wingspan, with an estimated mass of 0.5 to 1 kilogram (1.1 to 2.2 lb).[4][54] Archaeopteryx feathers, although less documented than its other features, were very similar in structure to modern-day bird feathers.[52] Despite the presence of numerous avian features,[55] Archaeopteryx had many non-avian theropod dinosaur characteristics. Unlike modern birds, Archaeopteryx had small teeth,[52] as well as a long bony tail, features which Archaeopteryx shared with other dinosaurs of the time.[56]
Because it displays features common to both birds and non-avian dinosaurs, Archaeopteryx has often been considered a link between them.[52] In the 1970s, John Ostrom, following Thomas Henry Huxley's lead in 1868, argued that birds evolved within theropod dinosaurs and Archaeopteryx was a critical piece of evidence for this argument; it had several avian features, such as a wishbone, flight feathers, wings, and a partially reversed first toe along with dinosaur and theropod features. For instance, it has a long ascending process of the ankle bone, interdental plates, an obturator process of the ischium, and long chevrons in the tail. In particular, Ostrom found that Archaeopteryx was remarkably similar to the theropod family Dromaeosauridae.[57][58][59][60]
Archaeopteryx had three separate digits on each fore-leg, each ending with a "claw". Few birds have such features. Some birds, such as ducks, swans, jacanas (Jacana sp.), and the hoatzin (Opisthocomus hoazin), have them concealed beneath their leg-feathers.[61]
Anatomical illustration comparing the "frond-tail" of Archaeopteryx with the "fan-tail" of a modern bird
Specimens of Archaeopteryx were most notable for their well-developed flight feathers. They were markedly asymmetrical and showed the structure of flight feathers in modern birds, with vanes given stability by a barb-barbule-barbicel arrangement.[62] The tail feathers were less asymmetrical, again in line with the situation in modern birds and also had firm vanes. The thumb did not yet bear a separately movable tuft of stiff feathers.
The body plumage of Archaeopteryx is less well-documented and has only been properly researched in the well-preserved Berlin specimen. Thus, as more than one species seems to be involved, the research into the Berlin specimen's feathers does not necessarily hold true for the rest of the species of Archaeopteryx. In the Berlin specimen, there are "trousers" of well-developed feathers on the legs; some of these feathers seem to have a basic contour feather structure, but are somewhat decomposed (they lack barbicels as in ratites).[63] In part they are firm and thus capable of supporting flight.[64]
A patch of pennaceous feathers is found running along its back, which was quite similar to the contour feathers of the body plumage of modern birds in being symmetrical and firm, although not as stiff as the flight-related feathers. Apart from that, the feather traces in the Berlin specimen are limited to a sort of "proto-down" not dissimilar to that found in the dinosaur Sinosauropteryx: decomposed and fluffy, and possibly even appearing more like fur than feathers in life (although not in their microscopic structure). These occur on the remainder of the body—although some feathers did not fossilize and others were obliterated during preparation, leaving bare patches on specimens—and the lower neck.[63]
There is no indication of feathering on the upper neck and head. While these conceivably may have been nude, this may still be an artefact of preservation. It appears that most Archaeopteryx specimens became embedded in anoxic sediment after drifting some time on their backs in the sea—the head, neck and the tail are generally bent downward, which suggests that the specimens had just started to rot when they were embedded, with tendons and muscle relaxing so that the characteristic shape (death pose) of the fossil specimens was achieved.[65] This would mean that the skin already was softened and loose, which is bolstered by the fact that in some specimens the flight feathers were starting to detach at the point of embedding in the sediment. So it is hypothesized that the pertinent specimens moved along the sea bed in shallow water for some time before burial, the head and upper neck feathers sloughing off, while the more firmly attached tail feathers remained.[21]
In 2011, graduate student Ryan Carney and colleagues performed the first colour study on an Archaeopteryx specimen.[66] Using scanning electron microscopy technology and energy-dispersive X-ray analysis, the team was able to detect the structure of melanosomes in the isolated feather specimen described in 1861. The resultant measurements were then compared to those of 87 modern bird species, and the original colour was calculated with a 95% likelihood to be black. The feather was determined to be black throughout, with heavier pigmentation in the distal tip. The feather studied was most probably a dorsal covert, which would have partly covered the primary feathers on the wings. The study does not mean that Archaeopteryx was entirely black, but suggests that it had some black colouration which included the coverts. Carney pointed out that this is consistent with what we know of modern flight characteristics, in that black melanosomes have structural properties that strengthen feathers for flight.[67] In a 2013 study published in the Journal of Analytical Atomic Spectrometry, new analyses of Archaeopteryx's feathers revealed that the animal may have had complex light- and dark-coloured plumage, with heavier pigmentation in the distal tips and outer vanes.[68] This analysis of colour distribution was based primarily on the distribution of sulphate within the fossil. An author on the previous Archaeopteryx colour study argued against the interpretation of such biomarkers as an indicator of eumelanin in the full Archaeopteryx specimen.[69] Carney and other colleagues also argued against the 2013 study's interpretation of the sulphate and trace metals,[70][71] and in a 2020 study published in Scientific Reports demonstrated that the isolated covert feather was entirely matte black (as opposed to black and white, or iridescent) and that the remaining "plumage patterns of Archaeopteryx remain unknown".[11]
Today, fossils of the genus Archaeopteryx are usually assigned to one or two species, A. lithographica and A. siemensii, but their taxonomic history is complicated. Ten names have been published for the handful of specimens. As interpreted today, the name A. lithographica only referred to the single feather described by Meyer. In 1954 Gavin de Beer concluded that the London specimen was the holotype. In 1960, Swinton accordingly proposed that the name Archaeopteryx lithographica be placed on the official genera list making the alternative names Griphosaurus and Griphornis invalid.[72] The ICZN, implicitly accepting De Beer's standpoint, did indeed suppress the plethora of alternative names initially proposed for the first skeleton specimens,[73] which mainly resulted from the acrimonious dispute between Meyer and his opponent Johann Andreas Wagner (whose Griphosaurus problematicus – 'problematic riddle-lizard' – was a vitriolic sneer at Meyer's Archaeopteryx).[74] In addition, in 1977, the Commission ruled that the first species name of the Haarlem specimen, crassipes, described by Meyer as a pterosaur before its true nature was realized, was not to be given preference over lithographica in instances where scientists considered them to represent the same species.[7][75]
It has been noted that the feather, the first specimen of Archaeopteryx described, does not correspond well with the flight-related feathers of Archaeopteryx. It certainly is a flight feather of a contemporary species, but its size and proportions indicate that it may belong to another, smaller species of feathered theropod, of which only this feather is known so far.[9] As the feather had been designated the type specimen, the name Archaeopteryx should then no longer be applied to the skeletons, thus creating significant nomenclatorial confusion. In 2007, two sets of scientists therefore petitioned the ICZN requesting that the London specimen explicitly be made the type by designating it as the new holotype specimen, or neotype.[76] This suggestion was upheld by the ICZN after four years of debate, and the London specimen was designated the neotype on 3 October 2011.[77]
It has been argued that all the specimens belong to the same species, A. lithographica.[78] Differences do exist among the specimens, and while some researchers regard these as due to the different ages of the specimens, some may be related to actual species diversity. In particular, the Munich, Eichstätt, Solnhofen, and Thermopolis specimens differ from the London, Berlin, and Haarlem specimens in being smaller or much larger, having different finger proportions, having more slender snouts lined with forward-pointing teeth, and the possible presence of a sternum. Due to these differences, most individual specimens have been given their own species name at one point or another. The Berlin specimen has been designated as Archaeornis siemensii, the Eichstätt specimen as Jurapteryx recurva, the Munich specimen as Archaeopteryx bavarica, and the Solnhofen specimen as Wellnhoferia grandis.[21]
In 2007, a review of all well-preserved specimens including the then-newly discovered Thermopolis specimen concluded that two distinct species of Archaeopteryx could be supported: A. lithographica (consisting of at least the London and Solnhofen specimens), and A. siemensii (consisting of at least the Berlin, Munich, and Thermopolis specimens). The two species are distinguished primarily by large flexor tubercles on the foot claws in A. lithographica (the claws of A. siemensii specimens being relatively simple and straight). A. lithographica also had a constricted portion of the crown in some teeth and a stouter metatarsus. A supposed additional species, Wellnhoferia grandis (based on the Solnhofen specimen), seems to be indistinguishable from A. lithographica except in its larger size.[22]
The Solnhofen Specimen, by some considered as belonging to the genus Wellnhoferia
If two names are given, the first denotes the original describer of the "species", the second the author on whom the given name combination is based. As always in zoological nomenclature, putting an author's name in parentheses denotes that the taxon was originally described in a different genus.
Comparison of the forelimb of Archaeopteryx (right) with that of Deinonychus (left)
Modern palaeontology has often classified Archaeopteryx as the most primitive bird. However, it is not thought to be a true ancestor of modern birds, but rather a close relative of that ancestor.[79] Nonetheless, Archaeopteryx was often used as a model of the true ancestral bird. Several authors have done so.[80] Lowe (1935)[81] and Thulborn (1984)[82] questioned whether Archaeopteryx truly was the first bird. They suggested that Archaeopteryx was a dinosaur that was no more closely related to birds than were other dinosaur groups. Kurzanov (1987) suggested that Avimimus was more likely to be the ancestor of all birds than Archaeopteryx.[83] Barsbold (1983)[84] and Zweers and Van den Berge (1997)[85] noted that many maniraptoran lineages are extremely birdlike, and they suggested that different groups of birds may have descended from different dinosaur ancestors.
The discovery of the closely related Xiaotingia in 2011 led to new phylogenetic analyses that suggested that Archaeopteryx is a deinonychosaur rather than an avialan, and therefore, not a "bird" under most common uses of that term.[2] A more thorough analysis was published soon after to test this hypothesis, and failed to arrive at the same result; it found Archaeopteryx in its traditional position at the base of Avialae, while Xiaotingia was recovered as a basal dromaeosaurid or troodontid. The authors of the follow-up study noted that uncertainties still exist, and that it may not be possible to state confidently whether or not Archaeopteryx is a member of Avialae or not, barring new and better specimens of relevant species.[86]
Phylogenetic studies conducted by Senter, et al. (2012) and Turner, Makovicky, and Norell (2012) also found Archaeopteryx to be more closely related to living birds than to dromaeosaurids and troodontids.[87][88] On the other hand, Godefroit et al. (2013) recovered Archaeopteryx as more closely related to dromaeosaurids and troodontids in the analysis included in their description of Eosinopteryx brevipenna. The authors used a modified version of the matrix from the study describing Xiaotingia, adding Jinfengopteryx elegans and Eosinopteryx brevipenna to it, as well as adding four additional characters related to the development of the plumage. Unlike the analysis from the description of Xiaotingia, the analysis conducted by Godefroit, et al. did not find Archaeopteryx to be related particularly closely to Anchiornis and Xiaotingia, which were recovered as basal troodontids instead.[89]
Agnolín and Novas (2013) found Archaeopteryx and (possibly synonymous) Wellnhoferia to form a clade sister to the lineage including Jeholornis and Pygostylia, with Microraptoria, Unenlagiinae, and the clade containing Anchiornis and Xiaotingia being successively closer outgroups to the Avialae (defined by the authors as the clade stemming from the last common ancestor of Archaeopteryx and Aves).[90] Another phylogenetic study by Godefroit, et al., using a more inclusive matrix than the one from the analysis in the description of Eosinopteryx brevipenna, also found Archaeopteryx to be a member of Avialae (defined by the authors as the most inclusive clade containing Passer domesticus, but not Dromaeosaurus albertensis or Troodon formosus). Archaeopteryx was found to form a grade at the base of Avialae with Xiaotingia, Anchiornis, and Aurornis. Compared to Archaeopteryx, Xiaotingia was found to be more closely related to extant birds, while both Anchiornis and Aurornis were found to be more distantly so.[3]
Hu et al. (2018),[91] Wang et al. (2018)[92] and Hartman et al. (2019)[93] found Archaeopteryx to have been a deinonychosaur instead of an avialan. More specifically, it and closely related taxa were considered basal deinonychosaurs, with dromaeosaurids and troodontids forming together a parallel lineage within the group. Because Hartman et al. found Archaeopteryx isolated in a group of flightless deinonychosaurs (otherwise considered "anchiornithids"), they considered it highly probable that this animal evolved flight independently from bird ancestors (and from Microraptor and Yi).
1880 photo of the Berlin specimen, showing leg feathers that were removed subsequently, during preparation
As in the wings of modern birds, the flight feathers of Archaeopteryx were somewhat asymmetrical and the tail feathers were rather broad. This implies that the wings and tail were used for lift generation, but it is unclear whether Archaeopteryx was capable of flapping flight or simply a glider. The lack of a bony breastbone suggests that Archaeopteryx was not a very strong flier, but flight muscles might have attached to the thick, boomerang-shaped wishbone, the platelike coracoids, or perhaps, to a cartilaginous sternum. The sideways orientation of the glenoid (shoulder) joint between scapula, coracoid, and humerus—instead of the dorsally angled arrangement found in modern birds—may indicate that Archaeopteryx was unable to lift its wings above its back, a requirement for the upstroke found in modern flapping flight. According to a study by Philip Senter in 2006, Archaeopteryx was indeed unable to use flapping flight as modern birds do, but it may well have used a downstroke-only flap-assisted gliding technique.[94] However, a more recent study solves this issue by suggesting a different flight stroke configuration for non-avian flying theropods.[95]
Archaeopteryx wings were relatively large, which would have resulted in a low stall speed and reduced turning radius. The short and rounded shape of the wings would have increased drag, but also could have improved its ability to fly through cluttered environments such as trees and brush (similar wing shapes are seen in birds that fly through trees and brush, such as crows and pheasants). The presence of "hind wings", asymmetrical flight feathers stemming from the legs similar to those seen in dromaeosaurids such as Microraptor, also would have added to the aerial mobility of Archaeopteryx. The first detailed study of the hind wings by Longrich in 2006, suggested that the structures formed up to 12% of the total airfoil. This would have reduced stall speed by up to 6% and turning radius by up to 12%.[64]
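As a rough plausibility check of the figures just quoted (not part of Longrich's analysis itself), the standard fixed-wing stall relation, V_stall = sqrt(2W / (rho * S * C_L)), already reproduces the stated magnitude: weight, air density and lift coefficient cancel out of the comparison, so a hind-wing contribution of about 12% of the total lifting area lowers the stall speed by roughly 6%. A minimal Python sketch of that arithmetic:

    # Back-of-envelope check, assuming the standard stall relation
    # V_stall = sqrt(2*W / (rho * S * CL)): weight W, air density rho and
    # lift coefficient CL cancel when comparing the same animal with and
    # without its hind wings, leaving only the change in lifting area S.
    hindwing_area_fraction = 0.12  # "up to 12% of the total airfoil"

    # Stall speed with the hind wings, relative to the same animal without them
    relative_stall_speed = (1.0 - hindwing_area_fraction) ** 0.5

    print(f"Stall speed reduced by about {(1.0 - relative_stall_speed) * 100:.0f}%")
    # prints ~6%, in line with the reduction quoted above

Under the same simplified assumptions the minimum turning radius scales with the square of the stall speed, so a 6% reduction in speed corresponds to roughly a 12% reduction in radius, which is the other figure quoted above.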
The feathers of Archaeopteryx were asymmetrical. This has been interpreted as evidence that it was a flyer, because flightless birds tend to have symmetrical feathers. Some scientists, including Thomson and Speakman, have questioned this. They studied more than 70 families of living birds, and found that some flightless types do have a range of asymmetry in their feathers, and that the feathers of Archaeopteryx fall into this range.[96] The degree of asymmetry seen in Archaeopteryx is more typical for slow flyers than for flightless birds.[97]
The Munich Specimen
In 2010, Robert L. Nudds and Gareth J. Dyke in the journal Science published a paper in which they analysed the rachises of the primary feathers of Confuciusornis and Archaeopteryx. The analysis suggested that the rachises on these two genera were thinner and weaker than those of modern birds relative to body mass. The authors determined that Archaeopteryx and Confuciusornis were unable to use flapping flight.[98] This study was criticized by Philip J. Currie and Luis Chiappe. Chiappe suggested that it is difficult to measure the rachises of fossilized feathers, and Currie speculated that Archaeopteryx and Confuciusornis must have been able to fly to some degree, as their fossils are preserved in what is believed to have been marine or lake sediments, suggesting that they must have been able to fly over deep water.[99] Gregory Paul also disagreed with the study, arguing in a 2010 response that Nudds and Dyke had overestimated the masses of these early birds, and that more accurate mass estimates allowed powered flight even with relatively narrow rachises. Nudds and Dyke had assumed a mass of 250 g (8.8 oz) for the Munich specimen Archaeopteryx, a young juvenile, based on published mass estimates of larger specimens. Paul argued that a more reasonable body mass estimate for the Munich specimen is about 140 g (4.9 oz). Paul also criticized the measurements of the rachises themselves, noting that the feathers in the Munich specimen are poorly preserved. Nudds and Dyke reported a diameter of 0.75 mm (0.03 in) for the longest primary feather, which Paul could not confirm using photographs. Paul measured some of the inner primary feathers, finding rachises 1.25–1.4 mm (0.049–0.055 in) across.[100] Despite these criticisms, Nudds and Dyke stood by their original conclusions. They claimed that Paul's statement, that an adult Archaeopteryx would have been a better flyer than the juvenile Munich specimen, was dubious. This, they reasoned, would require an even thicker rachis, evidence for which has not yet been presented.[101] Another possibility is that they had not achieved true flight, but instead used their wings as aids for extra lift while running over water after the fashion of the basilisk lizard, which could explain their presence in lake and marine deposits (see Origin of avian flight).[102][103]
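The stakes of these disputed numbers can be illustrated with a deliberately crude beam model, which is not the model used by either side: for a solid circular rod, the bending moment it can carry before failing scales with the cube of its diameter, while the aerodynamic moment the feather must resist scales roughly with body weight. Substituting Paul's diameter and mass figures for Nudds and Dyke's therefore shifts the implied strength-to-load margin by nearly an order of magnitude, as the hypothetical Python sketch below shows:

    # Crude illustration only: real rachises are hollow, tapered structures,
    # so this solid-beam scaling is merely meant to show why the disputed
    # measurements dominate the argument.
    d_nudds_dyke_mm = 0.75   # rachis diameter reported by Nudds and Dyke
    d_paul_mm = 1.3          # midpoint of Paul's 1.25-1.4 mm measurements
    mass_nudds_dyke_g = 250  # body mass assumed by Nudds and Dyke
    mass_paul_g = 140        # Paul's preferred estimate for the Munich specimen

    # Section modulus of a solid circular beam scales with diameter cubed,
    # so the allowable bending load scales the same way.
    strength_ratio = (d_paul_mm / d_nudds_dyke_mm) ** 3   # about 5.2x
    load_ratio = mass_paul_g / mass_nudds_dyke_g          # about 0.56x

    print(f"Implied strength-to-load margin shifts by roughly "
          f"{strength_ratio / load_ratio:.0f}x")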
Replica of the London Specimen
In 2004, scientists analysing a detailed CT scan of the braincase of the London Archaeopteryx concluded that its brain was significantly larger than that of most dinosaurs, indicating that it possessed the brain size necessary for flying. The overall brain anatomy was reconstructed using the scan. The reconstruction showed that the regions associated with vision took up nearly one-third of the brain. Other well-developed areas involved hearing and muscle coordination.[104] The skull scan also revealed the structure of its inner ear. The structure more closely resembles that of modern birds than the inner ear of non-avian reptiles. These characteristics taken together suggest that Archaeopteryx had the keen sense of hearing, balance, spatial perception, and coordination needed to fly.[105] Archaeopteryx had a cerebrum-to-brain-volume ratio 78% of the way to modern birds from the condition of non-coelurosaurian dinosaurs such as Carcharodontosaurus or Allosaurus, which had a crocodile-like anatomy of the brain and inner ear.[106] Newer research shows that while the Archaeopteryx brain was more complex than that of more primitive theropods, it had a more generalized brain volume among Maniraptora dinosaurs, even smaller than that of other non-avian dinosaurs in several instances, which indicates the neurological development required for flight was already a common trait in the maniraptoran clade.[107]
Archaeopteryx continues to play an important part in scientific debates about the origin and evolution of birds. Some scientists see it as a semi-arboreal climbing animal, following the idea that birds evolved from tree-dwelling gliders (the "trees down" hypothesis for the evolution of flight proposed by O. C. Marsh). Other scientists see Archaeopteryx as running quickly along the ground, supporting the idea that birds evolved flight by running (the "ground up" hypothesis proposed by Samuel Wendell Williston). Still others suggest that Archaeopteryx might have been at home both in the trees and on the ground, like modern crows, and this latter view is what currently is considered best-supported by morphological characters. Altogether, it appears that the species was not particularly specialized for running on the ground or for perching. A scenario outlined by Elżanowski in 2002 suggested that Archaeopteryx used its wings mainly to escape predators by glides punctuated with shallow downstrokes to reach successively higher perches, and alternatively, to cover longer distances (mainly) by gliding down from cliffs or treetops.[21]
In March 2018, scientists reported that Archaeopteryx was likely capable of flight, but in a manner distinct and substantially different from that of modern birds.[109][110] This study on Archaeopteryx's bone histology suggests that it was closest to true flying birds, and in particular to pheasants and other burst flyers.
Studies of Archaeopteryx's feather sheaths revealed that like modern birds, it had a center-out, flight related molting strategy. As it was a weak flier, this was extremely advantageous in preserving its maximum flight performance.[111]
A histological study by Erickson, Norell, Zhongue, and others in 2009 estimated that Archaeopteryx grew relatively slowly compared to modern birds, presumably because the outermost portions of Archaeopteryx bones appear poorly vascularized;[4] in living vertebrates, poorly vascularized bone is correlated with slow growth rate. They also assume that all known skeletons of Archaeopteryx come from juvenile specimens. Because the bones of Archaeopteryx could not be histologically sectioned in a formal skeletochronological (growth ring) analysis, Erickson and colleagues used bone vascularity (porosity) to estimate bone growth rate. They assumed that poorly vascularized bone grows at similar rates in all birds and in Archaeopteryx. The poorly vascularized bone of Archaeopteryx might have grown as slowly as that in a mallard (2.5 micrometres per day) or as fast as that in an ostrich (4.2 micrometres per day). Using this range of bone growth rates, they calculated how long it would take to "grow" each specimen of Archaeopteryx to the observed size; it may have taken at least 970 days (there were 375 days in a Late Jurassic year) to reach an adult size of 0.8–1 kg (1.8–2.2 lb). The study also found that the avialans Jeholornis and Sapeornis grew relatively slowly, as did the dromaeosaurid Mahakala. The avialans Confuciusornis and Ichthyornis grew relatively quickly, following a growth trend similar to that of modern birds.[112] One of the few modern birds that exhibit slow growth is the flightless kiwi, and the authors speculated that Archaeopteryx and the kiwi had similar basal metabolic rate.[4]
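Taken at face value, the quoted figures combine in a line or two of arithmetic. The Python sketch below is only an illustration of those published numbers, not a reconstruction of the study's calculation, which depended on measured cortical dimensions:

    # Arithmetic on the figures quoted above, for illustration only.
    slow_rate_um_per_day = 2.5     # mallard-like bone apposition rate
    min_days_to_adult = 970        # minimum quoted by Erickson and colleagues
    days_per_jurassic_year = 375   # length of a Late Jurassic year in days

    years = min_days_to_adult / days_per_jurassic_year
    cortex_mm = min_days_to_adult * slow_rate_um_per_day / 1000.0

    print(f"{min_days_to_adult} days is about {years:.1f} Late Jurassic years")
    print(f"At {slow_rate_um_per_day} um/day, that corresponds to roughly "
          f"{cortex_mm:.1f} mm of deposited cortex")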
The richness and diversity of the Solnhofen limestones in which all specimens of Archaeopteryx have been found have shed light on an ancient Jurassic Bavaria strikingly different from the present day. The latitude was similar to Florida, though the climate was likely to have been drier, as evidenced by fossils of plants with adaptations for arid conditions and a lack of terrestrial sediments characteristic of rivers. Evidence of plants, although scarce, includes cycads and conifers, while animals found include a large number of insects, small lizards, pterosaurs, and Compsognathus.[14]
The excellent preservation of Archaeopteryx fossils and other terrestrial fossils found at Solnhofen indicates that they did not travel far before becoming preserved.[114] The Archaeopteryx specimens found were therefore likely to have lived on the low islands surrounding the Solnhofen lagoon rather than to have been corpses that drifted in from farther away. Archaeopteryx skeletons are considerably less numerous in the deposits of Solnhofen than those of pterosaurs, of which seven genera have been found.[115] The pterosaurs included species such as Rhamphorhynchus belonging to the Rhamphorhynchidae, the group which dominated the ecological niche currently occupied by seabirds, and which became extinct at the end of the Jurassic. The pterosaurs, which also included Pterodactylus, were common enough that it is unlikely that the specimens found are vagrants from the larger islands 50 km (31 mi) to the north.[116]
The islands that surrounded the Solnhofen lagoon were low lying, semi-arid, and sub-tropical with a long dry season and little rain.[117] The closest modern analogue for the Solnhofen conditions is said to be Orca Basin in the northern Gulf of Mexico, although it is much deeper than the Solnhofen lagoons.[115] The flora of these islands was adapted to these dry conditions and consisted mostly of low (3 m (10 ft)) shrubs.[116] Contrary to reconstructions of Archaeopteryx climbing large trees, these seem to have been mostly absent from the islands; few trunks have been found in the sediments and fossilized tree pollen also is absent.
The lifestyle of Archaeopteryx is difficult to reconstruct and there are several theories regarding it. Some researchers suggest that it was primarily adapted to life on the ground,[118] while other researchers suggest that it was principally arboreal on the basis of the curvature of the claws[119] which has since been questioned.[120] The absence of trees does not preclude Archaeopteryx from an arboreal lifestyle, as several species of bird live exclusively in low shrubs. Various aspects of the morphology of Archaeopteryx point to either an arboreal or ground existence, including the length of its legs and the elongation in its feet; some authorities consider it likely to have been a generalist capable of feeding in both shrubs and open ground, as well as along the shores of the lagoon.[116] It most likely hunted small prey, seizing it with its jaws if it was small enough, or with its claws if it was larger.
Lowe, P. R. (1935). "On the relationship of the Struthiones to the dinosaurs and to the rest of the avian class, with special reference to the position of Archaeopteryx". Ibis. 5 (2): 398–432. doi:10.1111/j.1474-919X.1935.tb02979.x.
|
] Newer research shows that while the Archaeopteryx brain was more complex than that of more primitive theropods, it had a more generalized brain volume among Maniraptora dinosaurs, even smaller than that of other non-avian dinosaurs in several instances, which indicates the neurological development required for flight was already a common trait in the maniraptoran clade.[107]
Archaeopteryx continues to play an important part in scientific debates about the origin and evolution of birds. Some scientists see it as a semi-arboreal climbing animal, following the idea that birds evolved from tree-dwelling gliders (the "trees down" hypothesis for the evolution of flight proposed by O. C. Marsh). Other scientists see Archaeopteryx as running quickly along the ground, supporting the idea that birds evolved flight by running (the "ground up" hypothesis proposed by Samuel Wendell Williston). Still others suggest that Archaeopteryx might have been at home both in the trees and on the ground, like modern crows, and this latter view is what currently is considered best-supported by morphological characters. Altogether, it appears that the species was not particularly specialized for running on the ground or for perching. A scenario outlined by Elżanowski in 2002 suggested that Archaeopteryx used its wings mainly to escape predators by glides punctuated with shallow downstrokes to reach successively higher perches, and alternatively, to cover longer distances (mainly) by gliding down from cliffs or treetops.[21]
In March 2018, scientists reported that Archaeopteryx was likely capable of flight, but in a manner distinct and substantially different from that of modern birds.[109][110] This study on Archaeopteryx's bone histology suggests that it was closest to true flying birds, and in particular to pheasants and other burst flyers.
Studies of Archaeopteryx's feather sheaths revealed that like modern birds, it had a center-out, flight related molting strategy.
|
yes
|
Ornithology
|
Did archaeopteryx really fly?
|
yes_statement
|
"archaeopteryx" was capable of flight.. "archaeopteryx" had the ability to "fly".
|
https://www.eurekalert.org/news-releases/761839
|
The early bird got to fly: Archaeopteryx was | EurekAlert!
|
The early bird got to fly: Archaeopteryx was an active flyer
Image: The Munich specimen of the transitional bird Archaeopteryx. It preserves a partial skull (top left), shoulder girdle and both wings slightly raised up (most left to center left), the ribcage (center), and the pelvic girdle and both legs in a 'cycling' posture (right); all connected by the vertebral column from the neck (top left, under the skull) to the tip of the tail (most right). Imprints of its wing feathers are visible radiating from below the shoulder and vague imprints of the tail plumage can be recognized extending from the tip of the tail.
Credit: ESRF/Pascal Goetgheluck
The question of whether the Late Jurassic dino-bird Archaeopteryx was an elaborately feathered ground dweller, a glider, or an active flyer has fascinated palaeontologists for decades. Valuable new information obtained with state-of-the-art synchrotron microtomography at the ESRF, the European Synchrotron (Grenoble, France), allowed an international team of scientists to answer this question in Nature Communications. The wing bones of Archaeopteryx were shaped for incidental active flight, but not for the advanced style of flying mastered by today's birds.
Was Archaeopteryx capable of flying, and if so, how? Although it is common knowledge that modern-day birds descended from extinct dinosaurs, many questions on their early evolution and the development of avian flight remain unanswered. Traditional research methods have thus far been unable to answer the question whether Archaeopteryx flew or not. Using synchrotron microtomography at the ESRF's beamline ID19 to probe inside Archaeopteryx fossils, an international team of scientists from the ESRF, Palacký University, Czech Republic, CNRS and Sorbonne University, France, Uppsala University, Sweden, and Bürgermeister-Müller-Museum Solnhofen, Germany, shed new light on this earliest of birds.
Reconstructing extinct behaviour poses substantial challenges for palaeontologists, especially when it comes to enigmatic animals such as the famous Archaeopteryx from the Late Jurassic sediments of southeastern Germany that is considered the oldest potentially free-flying dinosaur. This well-preserved fossil taxon shows a mosaic anatomy that illustrates the close family relations between extinct raptorial dinosaurs and living dinosaurs: the birds. Most modern bird skeletons are highly specialised for powered flight, yet many of their characteristic adaptations in particularly the shoulder are absent in the Bavarian fossils of Archaeopteryx. Although its feathered wings resemble those of modern birds flying overhead every day, the primitive shoulder structure is incompatible with the modern avian wing beat cycle.
"The cross-sectional architecture of limb bones is strongly influenced by evolutionary adaptation towards optimal strength at minimal mass, and functional adaptation to the forces experienced during life", explains Prof. Jorge Cubo of the Sorbonne University in Paris. "By statistically comparing the bones of living animals that engage in observable habits with those of cryptic fossils, it is possible to bring new information into an old discussion", says senior author Dr. Sophie Sanchez from Uppsala University, Sweden
Archaeopteryx skeletons are preserved in and on limestone slabs that reveal only part of their morphology. Since these fossils are among the most valuable in the world, invasive probing to reveal obscured or internal structures is therefore highly discouraged. "Fortunately, today it is no longer necessary to damage precious fossils", states Dr. Paul Tafforeau, beamline scientist at the ESRF. "The exceptional sensitivity of X-ray imaging techniques for investigating large specimens that is available at the ESRF offers harmless microscopic insight into fossil bones and allows virtual 3D reconstructions of extraordinary quality. Exciting upgrades are underway, including a substantial improvement of the properties of our synchrotron source and a brand new beamline designated for tomography. These developments promise to give even better results on much larger specimens in the future".
Scanning data unexpectedly revealed that the wing bones of Archaeopteryx, contrary to its shoulder girdle, shared important adaptations with those of modern flying birds. "We focused on the middle part of the arm bones because we knew those sections contain clear flight-related signals in birds", says Dr. Emmanuel de Margerie, CNRS, France. "We immediately noticed that the bone walls of Archaeopteryx were much thinner than those of earthbound dinosaurs but looked a lot like conventional bird bones", continues lead author Dennis Voeten of the ESRF. "Data analysis furthermore demonstrated that the bones of Archaeopteryx plot closest to those of birds like pheasants that occasionally use active flight to cross barriers or dodge predators, but not to those of gliding and soaring forms such as many birds of prey and some seabirds that are optimised for enduring flight."
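To make the kind of measurement described above concrete, here is a minimal Python sketch: it treats a mid-shaft wing-bone cross-section as a hollow tube and asks how thin its wall is relative to its overall size. Every specimen name and number below is invented for illustration and is not a value from the study.

def relative_wall_thickness(outer_diameter_mm, inner_diameter_mm):
    """Cortical wall thickness divided by outer radius (dimensionless)."""
    outer_r = outer_diameter_mm / 2.0
    inner_r = inner_diameter_mm / 2.0
    return (outer_r - inner_r) / outer_r

# Invented example values: thick-walled shafts are typical of ground-dwelling
# animals, thin-walled shafts of flying birds.
specimens = {
    "earthbound dinosaur (hypothetical)": (10.0, 4.0),
    "burst-flying pheasant (hypothetical)": (6.0, 4.8),
    "Archaeopteryx-like bone (hypothetical)": (5.0, 4.0),
}

for name, (outer, inner) in specimens.items():
    print(f"{name}: relative wall thickness = {relative_wall_thickness(outer, inner):.2f}")

In this toy example the hypothetical Archaeopteryx value sits with the burst flier rather than with the thick-walled ground dweller, which is the shape of the argument made in the quote above; the actual study relied on far richer cross-sectional and vascular data.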
"We know that the region around Solnhofen in southeastern Germany was a tropical archipelago, and such an environment appears highly suitable for island hopping or escape flight", remarks Dr. Martin Röper, Archaeopteryx curator and co-author of the report. "Archaeopteryx shared the Jurassic skies with primitive pterosaurs that would ultimately evolve into the gigantic pterosaurs of the Cretaceous. We found similar differences in wing bone geometry between primitive and advanced pterosaurs as those between actively flying and soaring birds", adds Vincent Beyrand of the ESRF.
Since Archaeopteryx represents the oldest known flying member of the avialan lineage that also includes modern birds, these findings not only illustrate aspects of the lifestyle of Archaeopteryx but also provide insight into the early evolution of dinosaurian flight. "Indeed, we now know that Archaeopteryx was already actively flying around 150 million years ago, which implies that active dinosaurian flight had evolved even earlier!" says Prof. Stanislav Bureš of Palacký University in Olomouc. "However, because Archaeopteryx lacked the pectoral adaptations to fly like modern birds, the way it achieved powered flight must also have been different. We will need to return to the fossils to answer the question on exactly how this Bavarian icon of evolution used its wings", concludes Voeten.
It is now clear that Archaeopteryx is a representative of the first wave of dinosaurian flight strategies that eventually went extinct, leaving only the modern avian flight stroke directly observable today.
|
The early bird got to fly: Archaeopteryx was an active flyer
image: The Munich specimen of the transitional bird Archaeopteryx. It preserves a partial skull (top left), shoulder girdle and both wings slightly raised up (most left to center left), the ribcage (center), and the pelvic girdle and both legs in a 'cycling' posture (right); all connected by the vertebral column from the neck (top left, under the skull) to the tip of the tail (most right). Imprints of its wing feathers are visible radiating from below the shoulder and vague imprints of the tail plumage can be recognized extending from the tip of the tail.
Credit: ESRF/Pascal Goetgheluck
The question of whether the Late Jurassic dino-bird Archaeopteryx was an elaborately feathered ground dweller, a glider, or an active flyer has fascinated palaeontologists for decades. Valuable new information obtained with state-of-the-art synchrotron microtomography at the ESRF, the European Synchrotron (Grenoble, France), allowed an international team of scientists to answer this question in Nature Communications. The wing bones of Archaeopteryx were shaped for incidental active flight, but not for the advanced style of flying mastered by today's birds.
Was Archaeopteryx capable of flying, and if so, how? Although it is common knowledge that modern-day birds descended from extinct dinosaurs, many questions on their early evolution and the development of avian flight remain unanswered. Traditional research methods have thus far been unable to answer the question whether Archaeopteryx flew or not. Using synchrotron microtomography at the ESRF's beamline ID19 to probe inside Archaeopteryx fossils, an international team of scientists from the ESRF, Palacký University, Czech Republic, CNRS and Sorbonne University, France, Uppsala University, Sweden, and Bürgermeister-Müller-Museum Solnhofen, Germany, shed new light on this earliest of birds.
|
yes
|
Ornithology
|
Did archaeopteryx really fly?
|
yes_statement
|
"archaeopteryx" was capable of flight.. "archaeopteryx" had the ability to "fly".
|
https://www.reuters.com/article/us-science-bird/celebrated-dino-bird-archaeopteryx-could-fly-but-not-very-well-idUSKCN1GP2RA
|
Celebrated dino-bird Archaeopteryx could fly, but not very well ...
|
Celebrated dino-bird Archaeopteryx could fly, but not very well
WASHINGTON (Reuters) - It may not have been a champion aviator, but the famous dino-bird Archaeopteryx was fully capable of flying despite key skeletal differences from its modern cousins, though not exactly gracefully, according to a new study. Think Wright Brothers, not F-22 fighter jet.
Scientists said on Tuesday they examined Archaeopteryx’s wing architecture using state-of-the-art scanning and compared it to a range of birds, closely related dinosaurs and the extinct flying reptiles called pterosaurs. They concluded it could fly in bursts over relatively short distances like pheasants, peacocks and roadrunners.
Birds evolved in the Jurassic Period from small feathered dinosaurs, and represent the only dinosaur group to have survived the mass extinction event 66 million years ago.
Crow-sized Archaeopteryx, which lived about 150 million years ago in a tropical archipelago that is now Bavaria, combined primitive dinosaur characteristics with traits seen in modern birds.
Its fossils were first discovered in 1861 and it was long considered the earliest-known bird, though there is now a spirited scientific debate about defining the first birds.
“Many researchers have assumed that Archaeopteryx exhibited a very primitive way of flying that would have been equivalent to that of gliding from tree to tree, like extant flying squirrels do,” said paleontologist Sophie Sanchez of Uppsala University in Sweden. “It, therefore, is a big surprise to actually recognize adaptations consistent with active flight.”
“We are convinced that this presents the best indication for active flight in Archaeopteryx brought to light in the last 150 years,” added paleontologist Dennis Voeten of the European Synchrotron Radiation Facility in France, though he called it “a poor flyer.”
Archaeopteryx boasted teeth, a long tail and had no bony, keeled sternum where flight muscles attach. Its flight capabilities may have enabled Archaeopteryx to escape predators or fly among islands.
The researchers focused on a cross-section of the wing bones and their density of blood vessels. They found similarities to birds capable of short-distance flight, not gliding and soaring varieties like birds of prey.
Archaeopteryx was likely able to take off from the ground, but must have used a unique flying style, Sanchez said. It lacked important traits in the shoulders of modern birds, making it impossible to beat its wings the way they do.
“We propose that it would have been able to use its wings to propel its body in a fashion that superficially resembles the stroke of butterfly swimmers,” Sanchez said.
|
Celebrated dino-bird Archaeopteryx could fly, but not very well
WASHINGTON (Reuters) - It may not have been a champion aviator, but the famous dino-bird Archaeopteryx was fully capable of flying despite key skeletal differences from its modern cousins, though not exactly gracefully, according to a new study. Think Wright Brothers, not F-22 fighter jet.
Scientists said on Tuesday they examined Archaeopteryx’s wing architecture using state-of-the-art scanning and compared it to a range of birds, closely related dinosaurs and the extinct flying reptiles called pterosaurs. They concluded it could fly in bursts over relatively short distances like pheasants, peacocks and roadrunners.
Birds evolved in the Jurassic Period from small feathered dinosaurs, and represent the only dinosaur group to have survived the mass extinction event 66 million years ago.
Crow-sized Archaeopteryx, which lived about 150 million years ago in a tropical archipelago that is now Bavaria, combined primitive dinosaur characteristics with traits seen in modern birds.
Its fossils were first discovered in 1861 and it was long considered the earliest-known bird, though there is now a spirited scientific debate about defining the first birds.
“Many researchers have assumed that Archaeopteryx exhibited a very primitive way of flying that would have been equivalent to that of gliding from tree to tree, like extant flying squirrels do,” said paleontologist Sophie Sanchez of Uppsala University in Sweden. “It, therefore, is a big surprise to actually recognize adaptations consistent with active flight.”
“We are convinced that this presents the best indication for active flight in Archaeopteryx brought to light in the last 150 years,” added paleontologist Dennis Voeten of the European Synchrotron Radiation Facility in France, though he called it “a poor flyer.”
Archaeopteryx boasted teeth, a long tail and had no bony, keeled sternum where flight muscles attach.
|
yes
|
Ornithology
|
Did archaeopteryx really fly?
|
yes_statement
|
"archaeopteryx" was capable of flight.. "archaeopteryx" had the ability to "fly".
|
https://www.nbcnews.com/id/wbna5602644
|
Dino-bird had the brains for flight
|
Dino-bird had the brains for flight
A crow-sized bird that lived 147 million years ago had a brain similar to a modern eagle or parrot and all the equipment for flight, scientists said Wednesday.
An artist's conception shows Archaeopteryx, a creature that had the wings of a bird but the tail and teeth of a dinosaur. John Sibbick / NHM
Aug. 4, 2004, 4:59 PM UTC / Source: Reuters
A bird that lived 147 million years ago had a brain similar to a modern eagle or parrot and all the equipment for flight, scientists said Wednesday.
Archaeopteryx is the most ancient bird known. It had the bony tail and teeth of a dinosaur and the feathers and wings of a bird, and scientists have long assumed that it was capable of flight. But in the latest study, researchers in the United States and Britain looked at the physical specifications behind that assumption.
Using sophisticated computer imaging of the braincase from a fossil of Archaeopteryx found in Germany in 1861, the researchers determined that the creature had all the characteristics and brain power to conquer the skies.
“Archaeopteryx’s brain, its senses and its ear turned out to be surprisingly more birdlike than we thought,” Angela Milner, a paleontologist at the Natural History Museum in London, said in an interview. “It is regarded as the most primitive bird we know, and its skeleton is almost all dinosaur except that it has feathers and wings, so we were surprised that its brain was already quite an advanced birdlike brain.”
The particular shape of the brain, its inner ear which is linked to balance, and its sensory ability have convinced scientists it was capable of flying.
“It had everything in place in its neurosensory functions and structures that suggest it was well-equipped to fly,” Milner added.
The origins of flight
Archaeopteryx was small — about the size of a crow. The evidence showing it was capable of flying, reported in Thursday's issue of the journal Nature, raises new questions about the origins of flight.
“Archaeopteryx’s brain was fully equipped for flight and it had a birdlike brain. Obviously the evolutionary trends that led to that must have happened a lot further back in time than we really thought,” said Milner.
Angela Milner, a paleontologist from London's Natural History Museum, places a skull fragment from an ancient Archaeopteryx into the CT scanner in the lab operated at the University of Texas by Dr. Timothy Rowe, right, in this photo made in June 2002 in Austin, Texas. Milner and Rowe are among the authors of a study in the journal Nature that found the prehistoric bird had a brain that was "well-equipped for flying." (AP Photo, courtesy University of Texas at Austin, Marsha Miller)
Scientists at the London museum removed the 0.8-inch (20-millimeter) braincase from the rest of the fossil and collaborated with researchers at the University of Texas at Austin, who constructed a three-dimensional model of its brain using computer images.
“This animal had huge eyes and a huge vision region in its brain to go along with that, and a great sense of balance,” said the University of Texas' Timothy Rowe. “Its inner ear looks very much like the ear of a modern bird.”
Significant visual ability and brain power were thought to be needed to coordinate information from the eyes and ears that is essential for flight.
A bird's 'onboard computer'
In a commentary in the journal, Lawrence Witmer of Ohio University College of Osteopathic Medicine in Athens, Ohio, described the research as a landmark study.
Image of Archaeopteryx fossil owned by The Natural History Museum.
“The results have implications for both the biology of Archaeopteryx and the evolutionary transition to birds,” he said.
Previous studies have looked at the structure of wings and feathers for clues about the creature’s ability to fly, Witmer noted.
“But flight isn’t just about wings, rudders and flaps. It’s also about the pilot and onboard computer, and those are the missing elements that this new study provides for Archaeopteryx,” he added.
|
Dino-bird had the brains for flight
A crow-sized bird that lived 147 million years ago had a brain similar to a modern eagle or parrot and all the equipment for flight, scientists said Wednesday.
An artist's conception shows Archaeopteryx, a creature that had the wings of a bird but the tail and teeth of a dinosaur. John Sibbick / NHM
Aug. 4, 2004, 4:59 PM UTC / Source: Reuters
A bird that lived 147 million years ago had a brain similar to a modern eagle or parrot and all the equipment for flight, scientists said Wednesday.
Archaeopteryx is the most ancient bird known. It had the bony tail and teeth of a dinosaur and the feathers and wings of a bird, and scientists have long assumed that it was capable of flight. But in the latest study, researchers in the United States and Britain looked at the physical specifications behind that assumption.
Using sophisticated computer imaging of the braincase from a fossil of Archaeopteryx found in Germany in 1861, the researchers determined that the creature had all the characteristics and brain power to conquer the skies.
“Archaeopteryx’s brain, its senses and its ear turned out to be surprisingly more birdlike than we thought,” Angela Milner, a paleontologist at the Natural History Museum in London, said in an interview. “It is regarded as the most primitive bird we know, and its skeleton is almost all dinosaur except that it has feathers and wings, so we were surprised that its brain was already quite an advanced birdlike brain.”
The particular shape of the brain, its inner ear which is linked to balance, and its sensory ability have convinced scientists it was capable of flying.
“It had everything in place in its neurosensory functions and structures that suggest it was well-equipped to fly,” Milner added.
The origins of flight
Archaeopteryx was small — about the size of a crow. The evidence showing it was capable of flying, reported in Thursday's issue of the journal Nature, raises new questions about the origins of flight.
|
yes
|
Ornithology
|
Did archaeopteryx really fly?
|
yes_statement
|
"archaeopteryx" was capable of flight.. "archaeopteryx" had the ability to "fly".
|
https://www.washingtonpost.com/news/speaking-of-science/wp/2018/03/13/this-feathery-dinosaur-probably-flew-but-not-like-any-bird-you-know/
|
This feathery dinosaur probably flew, but not like any bird you know ...
|
This feathery dinosaur probably flew, but not like any bird you know
With this Archaeopteryx specimen, imprints from plumage can be seen near the shoulder and the tail's tip. (Pascal Goetgheluck/ESRF)
In 1861, German paleontologist Christian Erich Hermann von Meyer wrote a short paper about a fossil so unusual he first thought it was a fake. What appeared to be a bird feather was pressed into 150-million-year-old limestone. Von Meyer labeled it Archaeopteryx, meaning old wing, and a full skeleton was found shortly thereafter.
The bones, discovered two years after Charles Darwin published his “On the Origin of Species,” revealed a path to modern birds from their prehistoric ancestors. This discovery was a hint of a revelation to come much later: Birds are living dinosaurs.
During the next 150-plus years, paleontologists discovered 10 more Archaeopteryx skeletons. A picture of the creature emerged, of a dinosaur the size of a crow, weighing little more than a pound and covered in plumage. But feathers, as the penguin and ostrich know, do not necessarily mean flight.
A new report in Nature Communications suggests that Archaeopteryx probably flapped through the air. The dinosaur did so unlike any bird flying today. Archaeopteryx used more shoulder action, the authors of the new report say: Imagine something like a butterfly stroke, according to Dennis Voeten, a researcher at the European Synchrotron Radiation Facility in France and the study's lead author.
Not everything that looks like a bird was a bird, especially in the Jurassic period. Recent discoveries have pushed Archaeopteryx away from its perch as a transitional dinosaur-to-bird fossil — there is now a crowd of finely feathered dinosaurs. Archaeopteryx was probably not, Voeten said, a direct tie to sparrows and ostriches but a member of an offshoot lineage.
As scientists have probed Archaeopteryx's family tree, they also questioned its ability to fly. In the second half of the last century, two positions emerged. One camp said, yes, Archaeopteryx flapped its way off the ground. The other camp said, no, Archaeopteryx scrabbled up trees using its clawed wings, then let go and sailed to the ground like a sugar glider. And a few paleontologists suggested other ideas: Perhaps Archaeopteryx was in the process of losing its flight ability, not gaining it.
In the new study, Voeten and his colleagues probed Archaeopteryx fossils using a synchrotron — a powerful source of radiation. The concept is similar to an X-ray, but your dentist's X-ray machine would fail to distinguish fossilized skeletons from the background rock. A synchrotron beam is much more sensitive.
Bones, Voeten pointed out, record our daily stress. “The right upper arm bone of a professional tennis player is thicker than the left upper arm bone,” he said. Likewise, the stress of flying reshapes the wing bones in modern birds. He decided to look for similar evidence in Archaeopteryx.
The study authors examined cross-sections of the Archaeopteryx bones and compared these structures to bones in flying birds, flightless birds, other dinosaurs and modern crocodilians. The Archaeopteryx bone characteristics closely resembled what Voeten called “burst fliers.” These are birds like pheasants, roadrunners and turkeys — animals comfortable on the ground but capable of taking flight with a snap of the wings. The study moves Archaeopteryx from a potential flying animal to a probable one, he concluded.
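The "compared these structures" step can be pictured with a toy nearest-centroid comparison, sketched below under invented numbers: each reference group of living animals gets a centroid in a small measurement space, and the fossil is assigned to whichever group it plots closest to. This is only an illustration of the reasoning, not the statistical method used in the paper; every value and group label is hypothetical.

import math

# Invented (relative wall thickness, relative vascular density) measurements
# for three locomotor groups; none of these numbers come from the study.
reference_groups = {
    "burst fliers (pheasant-like)": [(0.20, 0.70), (0.22, 0.65), (0.18, 0.75)],
    "soaring fliers (raptor-like)": [(0.12, 0.90), (0.10, 0.85), (0.14, 0.95)],
    "flightless ground dwellers": [(0.55, 0.30), (0.60, 0.25), (0.50, 0.35)],
}

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

fossil = (0.21, 0.68)  # hypothetical Archaeopteryx-like measurements

closest = min(
    reference_groups,
    key=lambda group: distance(fossil, centroid(reference_groups[group])),
)
print(f"Fossil plots closest to: {closest}")

With these made-up inputs the fossil lands in the burst-flier group, mirroring the pheasant-like result reported above; the published analysis used more variables and a proper statistical framework rather than a simple centroid distance.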
Still, it did not fly like a pheasant. “The modern bird has a very nifty pulley system,” Voeten said. The muscle groups that move bird wings up and down are attached at the sternum, like the wheel of a pulley. But if you flap your arms to mimic a bird, you use muscles that are anchored at the chest and shoulders. Archaeopteryx wings were attached like our arms, with no chest pulley. “We're sure that it's incapable of flying like a modern bird does,” he said.
Voeten expects that the new study will attract Archaeopteryx flight critics and says, “I warmly welcome them.” He is not beholden to the idea Archaeopteryx could fly, he said. “This is a very famous, notorious debate that I am entering in as a new guy.”
|
But feathers, as the penguin and ostrich know, do not necessarily mean flight.
A new report in Nature Communications suggests that Archaeopteryx probably flapped through the air. The dinosaur did so unlike any bird flying today. Archaeopteryx used more shoulder action, the authors of the new report say: Imagine something like a butterfly stroke, according to Dennis Voeten, a researcher at the European Synchrotron Radiation Facility in France and the study's lead author.
Not everything that looks like a bird was a bird, especially in the Jurassic period. Recent discoveries have pushed Archaeopteryx away from its perch as a transitional dinosaur-to-bird fossil — there is now a crowd of finely feathered dinosaurs. Archaeopteryx was probably not, Voeten said, a direct tie to sparrows and ostriches but a member of an offshoot lineage.
As scientists have probed Archaeopteryx's family tree, they also questioned its ability to fly. In the second half of the last century, two positions emerged. One camp said, yes, Archaeopteryx flapped its way off the ground. The other camp said, no, Archaeopteryx scrabbled up trees using its clawed wings, then let go and sailed to the ground like a sugar glider. And a few paleontologists suggested other ideas: Perhaps Archaeopteryx was in the process of losing its flight ability, not gaining it.
In the new study, Voeten and his colleagues probed Archaeopteryx fossils using a synchrotron — a powerful source of radiation. The concept is similar to an X-ray, but your dentist's X-ray machine would fail to distinguish fossilized skeletons from the background rock. A synchrotron beam is much more sensitive.
Bones, Voeten pointed out, record our daily stress. “The right upper arm bone of a professional tennis player is thicker than the left upper arm bone,” he said.
|
yes
|