Dataset schema (four string columns per row):
uid: string, 4 to 7 characters
premise: string, 19 to 9.21k characters
hypothesis: string, 13 to 488 characters
label: string, 3 classes
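Since every row below shares this four-column shape, a small record type is enough to hold one example in memory. The following is a minimal sketch, assuming Python; the class and variable names (NLIExample, LABEL_NAMES) are illustrative rather than taken from the dataset, and the expansion of the single-letter label codes "e", "n", "c" to entailment, neutral and contradiction is an assumption, not something the dump itself states.

```python
# Minimal sketch of one dataset row, assuming the schema above
# (uid, premise, hypothesis, label) and an assumed label-code mapping.
from dataclasses import dataclass

# Assumed mapping for the single-letter codes seen in the rows below.
LABEL_NAMES = {"e": "entailment", "n": "neutral", "c": "contradiction"}


@dataclass
class NLIExample:
    uid: str         # e.g. "id_3500"
    premise: str     # passage text, 19 to ~9.21k characters
    hypothesis: str  # short statement, 13 to 488 characters
    label: str       # one of "e", "n", "c"

    @property
    def label_name(self) -> str:
        return LABEL_NAMES[self.label]


# Usage with the first row below (premise and hypothesis truncated for brevity):
row = NLIExample(
    uid="id_3500",
    premise="Internet shoppers are at an increased risk of both fraud and theft...",
    hypothesis="The increased risk of fraud and theft to internet shoppers is wholly due to...",
    label="c",
)
print(row.uid, row.label_name)  # id_3500 contradiction
```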
id_3500
Internet shoppers are at an increased risk of both fraud and theft due to a number of factors. Amongst them are both a decrease in the means of reliable identification and an increase in the use of websites for monetary transactions, whether for online reservations, internet banking, or other types of business. This fraud is increasing in severity and frequency in areas such as false business websites, telesales and online shopping, and is now a more serious problem than ever before. One solution to this problem would be to limit internet transactions to websites authorised by the banks.
The increased risk of fraud and theft to internet shoppers is wholly due to a decrease in the means of reliable identification and an increase in the use of the internet for online banking.
c
id_3501
By restricting internet transactions to only websites authorised by the banks, a decrease in internet fraud might be seen.
e
id_3502
Introduction to the Grounds of Keele University. Keele University is situated in 600 acres of landscaped grounds to the west of the Potteries conurbation in North Staffordshire. These well-wooded grounds with their lakes, streams and formal flower beds support a wealth of wildlife. The surrounding countryside of the Staffordshire/Shropshire/Cheshire borders is also a rich area for the naturalist and rambler, while the majestic gritstone moorland of North Staffordshire and the limestone dales of Derbyshire and northeast Staffordshire are not far away. Of the 600 acres, some 300 are leased out as Home Farm. Of the remainder, about half is woodland while the rest comprises the campus buildings and sports fields. The landscape we see today owes much to the work of Ralph Sneyd (1793 to 1870), who began planting on a grand scale in 1830, after inheriting the estate from his father. Throughout the period of its construction, the university has been careful to preserve as many mature trees as possible and to restrict the height of buildings to maintain the feeling of living and working in a landscape. The university has a continuing programme of landscaping, and many ornamental trees have been planted. Keele campus is, then, one of the most picturesque and tranquil in the country, yet is only a short distance from the Potteries and the M6 motorway. Although the landscape is an artificial one, it nonetheless has rich flora and fauna, with more than 110 species of birds, 120 species of flowering plants, more than 60 species of trees, 24 species of butterflies, 380 species of moths, 100 species of beetles and 100 species of flies having been recorded so far. Although there is little of great rarity here, a wide variety of common species and a good network of paths from which to see them make Keele an ideal place to visit for the casual observer, as well as for both the novice and the more experienced naturalist.
The originator of the property took it over from his father after he died.
e
id_3503
There are many plant and wildlife species not found anywhere else.
c
id_3504
The grounds had barely any trees when Sneyd took them over.
n
id_3505
It is so peaceful and quiet in the grounds because they are located far from the disturbances of human activity.
c
id_3506
The grounds are maintained by students of the university.
n
id_3507
If you want to see the plant life and insects, it is not difficult to move around the grounds.
e
id_3508
One of the nice things about the grounds of Keele is the naturalness of the landscape and its wealth of rare wildlife.
c
id_3509
Investigating Children's Language. For over 200 years, there has been an interest in the way children learn to speak and understand their first language. Scholars carried out several small-scale studies, especially towards the end of the 19th century, using data they recorded in parental diaries. But detailed, systematic investigation did not begin until the middle decades of the 20th century, when the tape recorder came into routine use. This made it possible to keep a permanent record of samples of child speech so that analysts could listen repeatedly to obscure extracts, and thus produce a detailed and accurate description. Since then, the subject has attracted enormous multi-disciplinary interest, notably from linguists and psychologists, who have used a variety of observational and experimental techniques to study the process of language acquisition in depth. Central to the success of this rapidly emerging field lies the ability of researchers to devise satisfactory methods for eliciting linguistic data from children. The problems that have to be faced are quite different from those encountered when working with adults. Many of the linguist's routine techniques of enquiry cannot be used with children. It is not possible to carry out certain kinds of experiments, because aspects of children's cognitive development, such as their ability to pay attention or to remember instructions, may not be sufficiently advanced. Nor is it easy to get children to make systematic judgments about language, a task that is virtually impossible below the age of three. And anyone who has tried to obtain even the most basic kind of data, a tape recording of a representative sample of a child's speech, knows how frustrating this can be. Some children, it seems, are innately programmed to switch off as soon as they notice a tape recorder being switched on. Since the 1960s, however, several sophisticated recording techniques and experimental designs have been devised. Children can be observed and recorded through one-way-vision windows or using radio microphones, so that the effects of having an investigator in the same room as the child can be eliminated. Large-scale sampling programmes have been carried out, with children sometimes being recorded for several years. Particular attention has been paid to devising experimental techniques that fall well within a child's intellectual level and social experience. Even pre-linguistic infants have been brought into the research: acoustic techniques are used to analyse their vocalisations, and their ability to perceive the world around them is monitored using special recording equipment. The result has been a growing body of reliable data on the stages of language acquisition from birth until puberty. There is no single way of studying children's language. Linguistics and psychology have each brought their own approach to the subject, and many variations have been introduced to cope with the variety of activities in which children engage, and the great age range that they present. Two main research paradigms are found. One of these is known as naturalistic sampling. A sample of a child's spontaneous use of language is recorded in familiar and comfortable surroundings. One of the best places to make the recording is in the child's own home, but it is not always easy to maintain good acoustic quality, and the presence of the researcher or the recording equipment can be a distraction (especially if the proceedings are being filmed).
Alternatively, the recording can be made in a research centre, where the child is allowed to play freely with toys while talking to parents or other children, and the observers and their equipment are unobtrusive. A good quality, representative, naturalistic sample is generally considered an ideal datum for child language study. However, the method has several limitations. These samples are informative about speech production, but they give little guidance about children's comprehension of what they hear around them. Moreover, samples cannot contain everything, and they can easily miss some important features of a child's linguistic ability. They may also not provide enough instances of a developing feature to enable the analyst to make a decision about the way the child is learning. For such reasons, the description of samples of child speech has to be supplemented by other methods. The other main approach is through experimentation, and the methods of experimental psychology have been widely applied to child language research. The investigator formulates a specific hypothesis about children's ability to use or understand an aspect of language and devises a relevant task for a group of subjects to undertake. A statistical analysis is made of the subjects' behaviour, and the results provide evidence that supports or falsifies the original hypothesis. Using this approach, as well as other methods of controlled observation, researchers have come up with many detailed findings about the production and comprehension of groups of children. However, it is not easy to generalise the findings of these studies. What may obtain in a carefully controlled setting may not apply in the rush of daily interaction. Different kinds of subjects, experimental situations, and statistical procedures may produce different results or interpretations. Experimental research is, therefore, a slow, painstaking business; it may take years before researchers are convinced that all variables have been considered and a finding is genuine.
Attempts to elicit very young childrens opinions about language are likely to fail.
e
id_3510
In the 19th century, researchers studied their own children's language.
e
id_3511
Many children enjoy the interaction with the researcher.
n
id_3512
Radio microphones are used because they enable researchers to communicate with a number of children in different rooms.
c
id_3513
Investors recently sent gold, the traditional safe haven in times of trouble, to record trading highs of over $1,800 per ounce. A quarter of respondents to a poll reflected the bearish view held by George Soros, who continued his retreat from gold by selling off the mining company, NovaGold Resources, in the second quarter. They predicted a fall in the gold price to $1,500 per troy ounce. Almost the same proportion of our readers, 28%, felt there was more to come from gold, believing the price will climb to $2,500 by the end of 2012. There are also signs to encourage the bulls. Ben Bernanke, governor of the Fed, disappointed hopes that he would announce a third round of quantitative easing during his Jackson Hole speech last week, but the prospects for another round of banknote printing remain alive. A third group of our readers, 27%, meanwhile, told us that they did not foresee much movement at all, predicting that gold would trade at around $2,000 for the foreseeable future as a hedge against continuing uncertainty. Only 7% of readers would agree with the obvious: that gold is, in the final analysis, just another yellow metal people dig out of the ground - and should be valued like any other non-productive rock.
Refraining from printing more dollars could result in a drop in the price of gold.
n
id_3514
7% of the readers polled believe that gold prices today are slightly too high.
n
id_3515
The mining company NovaGold was sold because it was believed that gold would depreciate.
e
id_3516
Approximately 13% of those surveyed refused to respond.
n
id_3517
Is Technology Harming our Children's Health? Technology is moving at such a breakneck speed that it is enough to make your head spin. It can be difficult to keep up. However, with each new technological marvel come consequences. Much of the research conducted has shown the extent of the damage being done to our health by technology. It is a scary thought, and with teenagers and children being heavy consumers and users of these gadgets, they run the risk of being harmed the most. The digital revolution in music has enabled people to download, store and listen to songs on a tiny, portable device called an MP3 player. The process is quick and afterwards you can have access to a library of thousands of songs that can fit into your palm. But experts say that continuously listening to loud music on these small music players can permanently damage hair cells in the inner ear, resulting in hearing loss. For instance, old-fashioned headphones have been replaced with smaller ones that fit neatly into the ear, instead of over them, which intensifies the sound. In addition to that, digital music does not distort and keeps its crystal clear sound, even on loud settings, which encourages children to crank up the volume. Combine that with the fact that many children will spend hours listening to their iPods, and you have the recipe for hearing loss. Put into further perspective, most MP3 players can reach levels of 120 decibels, which is louder than a chainsaw or lawnmower. When you consider 85 decibels is the maximum safe decibel level set by hearing experts over the course of a working day, and that children will listen to music at higher decibel levels than that for long periods of time, hearing will invariably suffer. Apart from hearing damage, there are other serious health risks. We are living in a wireless age. Calls can be made and received on mobiles from anywhere and the internet can be accessed without the need for cables. The advantages are enormous, bringing ease and convenience to our lives. It is clear that mobiles and wireless technology are here to stay, but are we paying the price for new technology? Studies have shown that the rapid expansion in the use of wireless technology has brought with it a new form of radiation called electropollution. Compared to two generations ago, we are exposed to 100 million times more radiation. The human body consists of trillions of cells which use faint electromagnetic signals to communicate with each other, so that the necessary biological and physiological changes can happen. It is a delicate, natural balance. But this balance is being upset by the constant exposure to electromagnetic radiation (EMR) that we face in our daily lives and it is playing havoc with our bodies. EMR can disrupt and alter the way in which our cells communicate and this can result in abnormal cell behaviour. Some studies have shown that exposure to wireless technology can affect our enzyme production, immune systems, nervous system and even our moods and behaviour. The most dangerous part of the phone is around the antenna. This area emits extremely potent radiation which has been shown to cause genetic damage and an increase in the risk of cancer. Research shows that teenagers and young adults are the largest group of mobile phone users. According to a recent Eurobarometer survey, 70 per cent of Europeans aged 12-13 own a mobile phone and the number of children five to nine years old owning mobiles has greatly increased over the years.
Children are especially vulnerable because their brains and nervous systems are not as immune to attack as adults'. Sir William Stewart, chairman of the National Radiological Protection Board, says there is mounting evidence to prove the harmful effects of wireless technologies and that families should monitor their children's use of them. Besides the physical and biological damage, technology can also have serious mental implications for children. It can be the cause of severe, addictive behaviour. In one case, two children had to be admitted into a mental health clinic in Northern Spain because of their addiction to mobile phones. An average of six hours a day would be spent talking, texting and playing games on their phones. The children could not be separated from their phones and showed disturbed behaviour that was making them fail at school. They regularly deceived family members to obtain money to buy phone cards to fund their destructive habit. There have been other cases of phone addiction like this. Technology may also be changing our brain patterns. Professor Greenfield, a top specialist in brain development, says that, thanks to technology, teenage minds are developing differently from those of previous generations. Her main concern is over computer games. She claims that living in a virtual world where actions are rewarded without needing to think about the moral implications makes young people lose awareness of who they are. She claims that technology brings a decline in linguistic creativity. As technology keeps moving at a rapid pace and everyone clamours for the new must-have gadget of the moment, we cannot easily perceive the long-term effects on our health. Unfortunately, it is the most vulnerable members of our society that will be affected.
Wireless technology is a permanent part of our lives.
e
id_3518
Using technology always helps with academic success.
c
id_3519
It is possible to become obsessed with technology.
e
id_3520
Is Technology Harming our Childrens Health? Technology is moving at such a breakneck speed that it is enough to make your head spin. It can be difficult to keep up. However, with each new technological marvel come consequences. Much of the research conducted has shown the extent of the damage being done to our health by technology. It is a scary thought, and with teenagers and children being heavy consumers and users of these gadgets, they run the risk of being harmed the most. The digital revolution in music has enabled people to download, store and listen to songs on a tiny, portable device called an MP3 player. The process is quick and afterwards you can have access to a library of thousands of songs that can fit into your palm. But experts say that continuously listening to loud music on these small music players can permanently damage hair cells in the inner ear, resulting in hearing loss. Tor instance, old-fashioned headphones have been replaced with smaller ones that fit neatly into the ear, instead of over them, which intensifies the sound. In addition to that, digital music does not distort and keeps its crystal clear sound, even on loud settings, which encourages children to crank up the volume. Combine that with the fact that many children will spend hours listening to their iPods, and you have the recipe for hearing loss. Put into further perspective, most MP3 players can reach levels of 120 decibels, which is louder than a chainsaw or lawnmower. When you consider 85 decibels is the maximum safe decibel level set by hearing experts over the course of a working day, and that children will listen to music at higher decibel levels than that for long periods of time, hearing will invariably suffer. Apart from hearing damage, there are other serious health risks. We are living in a wireless age. Calls can be made and received on mobiles from anywhere and the internet can be accessed without the need for cables. The advantages are enormous, bringing ease and convenience to our lives. It is clear that mobiles and wireless technology are here to stay but are we paying the price for new technology? Studies have shown that the rapid expansion in the use of wireless technology has brought with it a new form of radiation called electropollution. Compared to two generations ago, we are exposed to 100 million times more radiation. The human body consists of trillions of cells which use faint electromagnetic signals to communicate with each other, so that the necessary biological and physiological changes can happen. It is a delicate, natural balance. But this balance is being upset by the constant exposure to electromagnetic radiation (EMR) that we face in our daily lives and it is playing havoc with our bodies. EMR can disrupt and alter the way in which our cells communicate and this can result in abnormal cell behaviour. Some studies have shown that exposure to wireless technology can affect our enzyme production, immune systems, nervous system and even our moods and behaviour. The most dangerous part of the phone is around the antenna. This area emits extremely potent radiation which has been shown to cause genetic damage and an increase in the risk of cancer. Research shows that teenagers and young adults are the largest group of mobile phone users. According to a recent Eurobarometer survey, 70 per cent of Europeans aged 12-13 own a mobile phone and the number of children five to nine years old owning mobiles has greatly increased over the years. 
Children are especially vulnerable because their brains and nervous systems are not as immune to attack as adults. Sir William Stewart, chairman of the National Radiological Protection Board, says there is mounting evidence to prove the harmful effects of wireless technologies and that families should monitor their childrens use of them. Besides the physical and biological damage, technology can also have serious mental implications for children. It can be the cause of severe, addictive behaviour. In one case, two children had to be admitted into a mental health clinic in Northern Spain because of their addiction to mobile phones. An average of six hours a day would be spent talking, texting and playing games on their phones. The children could not be separated from their phones and showed disturbed behaviour that was making them fail at school. They regularly deceived family members to obtain money to buy phone cards to fund their destructive habit. There have been other cases of phone addiction like this. Technology may also be changing our brain patterns. Professor Greenfield, a top specialist in brain development, says that, thanks to technology, teenage minds are developing differently from those of previous generations. Her main concern is over computer games. She claims that living in a virtual world where actions are rewarded without needing to think about the moral implications makes young people lose awareness of who they are. She claims that technology brings a decline in linguistic creativity. As technology keeps moving at a rapid pace and everyone clamours for the new must- have gadget of the moment, we cannot easily perceive the long-term effects on our health. Unfortunately, it is the most vulnerable members of our society that will be affected.
Exposure to EMR can lead to criminal behaviour.
n
id_3521
Is Technology Harming our Childrens Health? Technology is moving at such a breakneck speed that it is enough to make your head spin. It can be difficult to keep up. However, with each new technological marvel come consequences. Much of the research conducted has shown the extent of the damage being done to our health by technology. It is a scary thought, and with teenagers and children being heavy consumers and users of these gadgets, they run the risk of being harmed the most. The digital revolution in music has enabled people to download, store and listen to songs on a tiny, portable device called an MP3 player. The process is quick and afterwards you can have access to a library of thousands of songs that can fit into your palm. But experts say that continuously listening to loud music on these small music players can permanently damage hair cells in the inner ear, resulting in hearing loss. Tor instance, old-fashioned headphones have been replaced with smaller ones that fit neatly into the ear, instead of over them, which intensifies the sound. In addition to that, digital music does not distort and keeps its crystal clear sound, even on loud settings, which encourages children to crank up the volume. Combine that with the fact that many children will spend hours listening to their iPods, and you have the recipe for hearing loss. Put into further perspective, most MP3 players can reach levels of 120 decibels, which is louder than a chainsaw or lawnmower. When you consider 85 decibels is the maximum safe decibel level set by hearing experts over the course of a working day, and that children will listen to music at higher decibel levels than that for long periods of time, hearing will invariably suffer. Apart from hearing damage, there are other serious health risks. We are living in a wireless age. Calls can be made and received on mobiles from anywhere and the internet can be accessed without the need for cables. The advantages are enormous, bringing ease and convenience to our lives. It is clear that mobiles and wireless technology are here to stay but are we paying the price for new technology? Studies have shown that the rapid expansion in the use of wireless technology has brought with it a new form of radiation called electropollution. Compared to two generations ago, we are exposed to 100 million times more radiation. The human body consists of trillions of cells which use faint electromagnetic signals to communicate with each other, so that the necessary biological and physiological changes can happen. It is a delicate, natural balance. But this balance is being upset by the constant exposure to electromagnetic radiation (EMR) that we face in our daily lives and it is playing havoc with our bodies. EMR can disrupt and alter the way in which our cells communicate and this can result in abnormal cell behaviour. Some studies have shown that exposure to wireless technology can affect our enzyme production, immune systems, nervous system and even our moods and behaviour. The most dangerous part of the phone is around the antenna. This area emits extremely potent radiation which has been shown to cause genetic damage and an increase in the risk of cancer. Research shows that teenagers and young adults are the largest group of mobile phone users. According to a recent Eurobarometer survey, 70 per cent of Europeans aged 12-13 own a mobile phone and the number of children five to nine years old owning mobiles has greatly increased over the years. 
Children are especially vulnerable because their brains and nervous systems are not as immune to attack as adults. Sir William Stewart, chairman of the National Radiological Protection Board, says there is mounting evidence to prove the harmful effects of wireless technologies and that families should monitor their childrens use of them. Besides the physical and biological damage, technology can also have serious mental implications for children. It can be the cause of severe, addictive behaviour. In one case, two children had to be admitted into a mental health clinic in Northern Spain because of their addiction to mobile phones. An average of six hours a day would be spent talking, texting and playing games on their phones. The children could not be separated from their phones and showed disturbed behaviour that was making them fail at school. They regularly deceived family members to obtain money to buy phone cards to fund their destructive habit. There have been other cases of phone addiction like this. Technology may also be changing our brain patterns. Professor Greenfield, a top specialist in brain development, says that, thanks to technology, teenage minds are developing differently from those of previous generations. Her main concern is over computer games. She claims that living in a virtual world where actions are rewarded without needing to think about the moral implications makes young people lose awareness of who they are. She claims that technology brings a decline in linguistic creativity. As technology keeps moving at a rapid pace and everyone clamours for the new must- have gadget of the moment, we cannot easily perceive the long-term effects on our health. Unfortunately, it is the most vulnerable members of our society that will be affected.
There are considerable benefits to our wireless world.
e
id_3522
Is There Really a War on Drugs? In our contemporary society, the media constantly bombards us with horror stories about drugs like crack-cocaine. From them, and probably from no other source, we learn that crack is immediately addictive in every case, we learn that it causes corruption, crazed violence, and almost always leads to death. The government tells us that we are busy fighting a war on drugs and so it gives us various iconic models to despise and detest: we learn to stereotype inner-city minorities as being of drug-infested wastelands and we learn to witchhunt drug users within our own communities under the belief that they represent moral sin and pure evil. I believe that these titles and ideals are preposterous and based entirely upon unnecessary and even detrimental ideals promoted by the government to achieve purposes other than those they claim. In Craig Renarmans and Harry Levines article entitled The Crack Attack: Politics and Media in Americas Latest Drug Scare, the authors attempt to expose and to deal with some of the societal problems that have resulted from the over-exaggeration of crack-cocaine as an epidemic problem in our country. Without detracting attention away from the serious health risks for those few individuals who do use the drug, Renarman and Levine demonstrate how minimally detrimental the current epidemic actually is. Early in the article, the authors summarize crack-cocaines evolutionary history in the U. S. They specifically discuss how the crack-related deaths of two star-athletes which first called wide-spread attention to the problem during the mid-1980s. Since then, the government has reportedly used crack-cocaine as a political scapegoat for many of the nations larger inner-city problems. Thefts, violence, and even socioeconomic depression have been blamed on crack. They assert that the government has invested considerably in studies whose results could be used to wage the constant war on drugs while to politicians, that war has amounted to nothing more than a perceptual war on poverty and urban crime. Since politicians have had little else of marketable interest to debate over the years, this aggressive attack on drugs has existed as one of their only colorful means by which to create debate, controversy, and campaign fuel. In other words, when balancing the budget and maintaining an effective foreign policy became too boring to handle, Reinarman and Levine assert that the crack epidemic became the focus of politicians with the intent of luring public interest to their flashy anti-drug campaigns. Finally, in addition to the medias excess attention on the war against drugs, Reinarman and Levine make the point the constant coverage of crack in the news media has only been counterproductive to the alleged goals of any anti-drug program. With descriptions of the crack high that glorify it considerably- the politically-charged media campaigns to fight drugs have worked somewhat ironically as huge advertising campaigns for crack-increasing public awareness and stimulating the interests of venturous junkies. While Reinarman and Levine are rather adamant about their findings, they do maintain an overt respect for the reality that crack has had other causal factors and outcomes besides those described by them. Their main concern seems to be calling for a more realistic spotlight to be placed upon the problem- so that we can begin to deal with it as no more and no less than what should be. The war on drugs is indeed based upon an exaggeration of facts. 
Although it is also evident that substances such as crack-cocaine may serve to pose great health risks to those that use them, there is not any widespread epidemic use of the drug nor any validity to the apparent myths that it causes such immediate devastation and is life-wrecking in every single case. It is obvious that we do indeed need to maintain a greater and more focused emphasis on the important and more widespread problems in society. Important energies and well-needed monies are being diverted from them to fight in an almost-imaginary battle against a controlled substance. Conclusively, we should allow drugs like crack-cocaine receive their due attention as social problems, but let them receive no more than that!
Drug users within our own communities represent moral sin and pure evil.
c
id_3523
Is There Really a War on Drugs? In our contemporary society, the media constantly bombards us with horror stories about drugs like crack-cocaine. From them, and probably from no other source, we learn that crack is immediately addictive in every case, we learn that it causes corruption, crazed violence, and almost always leads to death. The government tells us that we are busy fighting a war on drugs and so it gives us various iconic models to despise and detest: we learn to stereotype inner-city minorities as being of drug-infested wastelands and we learn to witchhunt drug users within our own communities under the belief that they represent moral sin and pure evil. I believe that these titles and ideals are preposterous and based entirely upon unnecessary and even detrimental ideals promoted by the government to achieve purposes other than those they claim. In Craig Renarmans and Harry Levines article entitled The Crack Attack: Politics and Media in Americas Latest Drug Scare, the authors attempt to expose and to deal with some of the societal problems that have resulted from the over-exaggeration of crack-cocaine as an epidemic problem in our country. Without detracting attention away from the serious health risks for those few individuals who do use the drug, Renarman and Levine demonstrate how minimally detrimental the current epidemic actually is. Early in the article, the authors summarize crack-cocaines evolutionary history in the U. S. They specifically discuss how the crack-related deaths of two star-athletes which first called wide-spread attention to the problem during the mid-1980s. Since then, the government has reportedly used crack-cocaine as a political scapegoat for many of the nations larger inner-city problems. Thefts, violence, and even socioeconomic depression have been blamed on crack. They assert that the government has invested considerably in studies whose results could be used to wage the constant war on drugs while to politicians, that war has amounted to nothing more than a perceptual war on poverty and urban crime. Since politicians have had little else of marketable interest to debate over the years, this aggressive attack on drugs has existed as one of their only colorful means by which to create debate, controversy, and campaign fuel. In other words, when balancing the budget and maintaining an effective foreign policy became too boring to handle, Reinarman and Levine assert that the crack epidemic became the focus of politicians with the intent of luring public interest to their flashy anti-drug campaigns. Finally, in addition to the medias excess attention on the war against drugs, Reinarman and Levine make the point the constant coverage of crack in the news media has only been counterproductive to the alleged goals of any anti-drug program. With descriptions of the crack high that glorify it considerably- the politically-charged media campaigns to fight drugs have worked somewhat ironically as huge advertising campaigns for crack-increasing public awareness and stimulating the interests of venturous junkies. While Reinarman and Levine are rather adamant about their findings, they do maintain an overt respect for the reality that crack has had other causal factors and outcomes besides those described by them. Their main concern seems to be calling for a more realistic spotlight to be placed upon the problem- so that we can begin to deal with it as no more and no less than what should be. The war on drugs is indeed based upon an exaggeration of facts. 
Although it is also evident that substances such as crack-cocaine may serve to pose great health risks to those that use them, there is not any widespread epidemic use of the drug nor any validity to the apparent myths that it causes such immediate devastation and is life-wrecking in every single case. It is obvious that we do indeed need to maintain a greater and more focused emphasis on the important and more widespread problems in society. Important energies and well-needed monies are being diverted from them to fight in an almost-imaginary battle against a controlled substance. Conclusively, we should allow drugs like crack-cocaine receive their due attention as social problems, but let them receive no more than that!
In our contemporary society, people all over the world should launch a war on drugs.
c
id_3524
Is There Really a War on Drugs? In our contemporary society, the media constantly bombards us with horror stories about drugs like crack-cocaine. From them, and probably from no other source, we learn that crack is immediately addictive in every case, we learn that it causes corruption, crazed violence, and almost always leads to death. The government tells us that we are busy fighting a war on drugs and so it gives us various iconic models to despise and detest: we learn to stereotype inner-city minorities as being of drug-infested wastelands and we learn to witchhunt drug users within our own communities under the belief that they represent moral sin and pure evil. I believe that these titles and ideals are preposterous and based entirely upon unnecessary and even detrimental ideals promoted by the government to achieve purposes other than those they claim. In Craig Renarmans and Harry Levines article entitled The Crack Attack: Politics and Media in Americas Latest Drug Scare, the authors attempt to expose and to deal with some of the societal problems that have resulted from the over-exaggeration of crack-cocaine as an epidemic problem in our country. Without detracting attention away from the serious health risks for those few individuals who do use the drug, Renarman and Levine demonstrate how minimally detrimental the current epidemic actually is. Early in the article, the authors summarize crack-cocaines evolutionary history in the U. S. They specifically discuss how the crack-related deaths of two star-athletes which first called wide-spread attention to the problem during the mid-1980s. Since then, the government has reportedly used crack-cocaine as a political scapegoat for many of the nations larger inner-city problems. Thefts, violence, and even socioeconomic depression have been blamed on crack. They assert that the government has invested considerably in studies whose results could be used to wage the constant war on drugs while to politicians, that war has amounted to nothing more than a perceptual war on poverty and urban crime. Since politicians have had little else of marketable interest to debate over the years, this aggressive attack on drugs has existed as one of their only colorful means by which to create debate, controversy, and campaign fuel. In other words, when balancing the budget and maintaining an effective foreign policy became too boring to handle, Reinarman and Levine assert that the crack epidemic became the focus of politicians with the intent of luring public interest to their flashy anti-drug campaigns. Finally, in addition to the medias excess attention on the war against drugs, Reinarman and Levine make the point the constant coverage of crack in the news media has only been counterproductive to the alleged goals of any anti-drug program. With descriptions of the crack high that glorify it considerably- the politically-charged media campaigns to fight drugs have worked somewhat ironically as huge advertising campaigns for crack-increasing public awareness and stimulating the interests of venturous junkies. While Reinarman and Levine are rather adamant about their findings, they do maintain an overt respect for the reality that crack has had other causal factors and outcomes besides those described by them. Their main concern seems to be calling for a more realistic spotlight to be placed upon the problem- so that we can begin to deal with it as no more and no less than what should be. The war on drugs is indeed based upon an exaggeration of facts. 
Although it is also evident that substances such as crack-cocaine may serve to pose great health risks to those that use them, there is not any widespread epidemic use of the drug nor any validity to the apparent myths that it causes such immediate devastation and is life-wrecking in every single case. It is obvious that we do indeed need to maintain a greater and more focused emphasis on the important and more widespread problems in society. Important energies and well-needed monies are being diverted from them to fight in an almost-imaginary battle against a controlled substance. Conclusively, we should allow drugs like crack-cocaine receive their due attention as social problems, but let them receive no more than that!
Drug use may lead to poverty and divorce.
n
id_3525
Is There Really a War on Drugs? In our contemporary society, the media constantly bombards us with horror stories about drugs like crack-cocaine. From them, and probably from no other source, we learn that crack is immediately addictive in every case, we learn that it causes corruption, crazed violence, and almost always leads to death. The government tells us that we are busy fighting a war on drugs and so it gives us various iconic models to despise and detest: we learn to stereotype inner-city minorities as being of drug-infested wastelands and we learn to witchhunt drug users within our own communities under the belief that they represent moral sin and pure evil. I believe that these titles and ideals are preposterous and based entirely upon unnecessary and even detrimental ideals promoted by the government to achieve purposes other than those they claim. In Craig Renarmans and Harry Levines article entitled The Crack Attack: Politics and Media in Americas Latest Drug Scare, the authors attempt to expose and to deal with some of the societal problems that have resulted from the over-exaggeration of crack-cocaine as an epidemic problem in our country. Without detracting attention away from the serious health risks for those few individuals who do use the drug, Renarman and Levine demonstrate how minimally detrimental the current epidemic actually is. Early in the article, the authors summarize crack-cocaines evolutionary history in the U. S. They specifically discuss how the crack-related deaths of two star-athletes which first called wide-spread attention to the problem during the mid-1980s. Since then, the government has reportedly used crack-cocaine as a political scapegoat for many of the nations larger inner-city problems. Thefts, violence, and even socioeconomic depression have been blamed on crack. They assert that the government has invested considerably in studies whose results could be used to wage the constant war on drugs while to politicians, that war has amounted to nothing more than a perceptual war on poverty and urban crime. Since politicians have had little else of marketable interest to debate over the years, this aggressive attack on drugs has existed as one of their only colorful means by which to create debate, controversy, and campaign fuel. In other words, when balancing the budget and maintaining an effective foreign policy became too boring to handle, Reinarman and Levine assert that the crack epidemic became the focus of politicians with the intent of luring public interest to their flashy anti-drug campaigns. Finally, in addition to the medias excess attention on the war against drugs, Reinarman and Levine make the point the constant coverage of crack in the news media has only been counterproductive to the alleged goals of any anti-drug program. With descriptions of the crack high that glorify it considerably- the politically-charged media campaigns to fight drugs have worked somewhat ironically as huge advertising campaigns for crack-increasing public awareness and stimulating the interests of venturous junkies. While Reinarman and Levine are rather adamant about their findings, they do maintain an overt respect for the reality that crack has had other causal factors and outcomes besides those described by them. Their main concern seems to be calling for a more realistic spotlight to be placed upon the problem- so that we can begin to deal with it as no more and no less than what should be. The war on drugs is indeed based upon an exaggeration of facts. 
Although it is also evident that substances such as crack-cocaine may serve to pose great health risks to those that use them, there is not any widespread epidemic use of the drug nor any validity to the apparent myths that it causes such immediate devastation and is life-wrecking in every single case. It is obvious that we do indeed need to maintain a greater and more focused emphasis on the important and more widespread problems in society. Important energies and well-needed monies are being diverted from them to fight in an almost-imaginary battle against a controlled substance. Conclusively, we should allow drugs like crack-cocaine receive their due attention as social problems, but let them receive no more than that!
The war on drugs waged by the government is really a perceptual war on poverty and urban crimes.
e
id_3526
Is There Really a War on Drugs? In our contemporary society, the media constantly bombards us with horror stories about drugs like crack-cocaine. From them, and probably from no other source, we learn that crack is immediately addictive in every case, we learn that it causes corruption, crazed violence, and almost always leads to death. The government tells us that we are busy fighting a war on drugs and so it gives us various iconic models to despise and detest: we learn to stereotype inner-city minorities as being of drug-infested wastelands and we learn to witchhunt drug users within our own communities under the belief that they represent moral sin and pure evil. I believe that these titles and ideals are preposterous and based entirely upon unnecessary and even detrimental ideals promoted by the government to achieve purposes other than those they claim. In Craig Renarmans and Harry Levines article entitled The Crack Attack: Politics and Media in Americas Latest Drug Scare, the authors attempt to expose and to deal with some of the societal problems that have resulted from the over-exaggeration of crack-cocaine as an epidemic problem in our country. Without detracting attention away from the serious health risks for those few individuals who do use the drug, Renarman and Levine demonstrate how minimally detrimental the current epidemic actually is. Early in the article, the authors summarize crack-cocaines evolutionary history in the U. S. They specifically discuss how the crack-related deaths of two star-athletes which first called wide-spread attention to the problem during the mid-1980s. Since then, the government has reportedly used crack-cocaine as a political scapegoat for many of the nations larger inner-city problems. Thefts, violence, and even socioeconomic depression have been blamed on crack. They assert that the government has invested considerably in studies whose results could be used to wage the constant war on drugs while to politicians, that war has amounted to nothing more than a perceptual war on poverty and urban crime. Since politicians have had little else of marketable interest to debate over the years, this aggressive attack on drugs has existed as one of their only colorful means by which to create debate, controversy, and campaign fuel. In other words, when balancing the budget and maintaining an effective foreign policy became too boring to handle, Reinarman and Levine assert that the crack epidemic became the focus of politicians with the intent of luring public interest to their flashy anti-drug campaigns. Finally, in addition to the medias excess attention on the war against drugs, Reinarman and Levine make the point the constant coverage of crack in the news media has only been counterproductive to the alleged goals of any anti-drug program. With descriptions of the crack high that glorify it considerably- the politically-charged media campaigns to fight drugs have worked somewhat ironically as huge advertising campaigns for crack-increasing public awareness and stimulating the interests of venturous junkies. While Reinarman and Levine are rather adamant about their findings, they do maintain an overt respect for the reality that crack has had other causal factors and outcomes besides those described by them. Their main concern seems to be calling for a more realistic spotlight to be placed upon the problem- so that we can begin to deal with it as no more and no less than what should be. The war on drugs is indeed based upon an exaggeration of facts. 
Although it is also evident that substances such as crack-cocaine may serve to pose great health risks to those that use them, there is not any widespread epidemic use of the drug nor any validity to the apparent myths that it causes such immediate devastation and is life-wrecking in every single case. It is obvious that we do indeed need to maintain a greater and more focused emphasis on the important and more widespread problems in society. Important energies and well-needed monies are being diverted from them to fight in an almost-imaginary battle against a controlled substance. Conclusively, we should allow drugs like crack-cocaine receive their due attention as social problems, but let them receive no more than that!
We should not pay too much attention to drug users; instead, we should fight against the drug dealers.
n
id_3527
Is There Really a War on Drugs? In our contemporary society, the media constantly bombards us with horror stories about drugs like crack-cocaine. From them, and probably from no other source, we learn that crack is immediately addictive in every case, we learn that it causes corruption, crazed violence, and almost always leads to death. The government tells us that we are busy fighting a war on drugs and so it gives us various iconic models to despise and detest: we learn to stereotype inner-city minorities as being of drug-infested wastelands and we learn to witchhunt drug users within our own communities under the belief that they represent moral sin and pure evil. I believe that these titles and ideals are preposterous and based entirely upon unnecessary and even detrimental ideals promoted by the government to achieve purposes other than those they claim. In Craig Renarmans and Harry Levines article entitled The Crack Attack: Politics and Media in Americas Latest Drug Scare, the authors attempt to expose and to deal with some of the societal problems that have resulted from the over-exaggeration of crack-cocaine as an epidemic problem in our country. Without detracting attention away from the serious health risks for those few individuals who do use the drug, Renarman and Levine demonstrate how minimally detrimental the current epidemic actually is. Early in the article, the authors summarize crack-cocaines evolutionary history in the U. S. They specifically discuss how the crack-related deaths of two star-athletes which first called wide-spread attention to the problem during the mid-1980s. Since then, the government has reportedly used crack-cocaine as a political scapegoat for many of the nations larger inner-city problems. Thefts, violence, and even socioeconomic depression have been blamed on crack. They assert that the government has invested considerably in studies whose results could be used to wage the constant war on drugs while to politicians, that war has amounted to nothing more than a perceptual war on poverty and urban crime. Since politicians have had little else of marketable interest to debate over the years, this aggressive attack on drugs has existed as one of their only colorful means by which to create debate, controversy, and campaign fuel. In other words, when balancing the budget and maintaining an effective foreign policy became too boring to handle, Reinarman and Levine assert that the crack epidemic became the focus of politicians with the intent of luring public interest to their flashy anti-drug campaigns. Finally, in addition to the medias excess attention on the war against drugs, Reinarman and Levine make the point the constant coverage of crack in the news media has only been counterproductive to the alleged goals of any anti-drug program. With descriptions of the crack high that glorify it considerably- the politically-charged media campaigns to fight drugs have worked somewhat ironically as huge advertising campaigns for crack-increasing public awareness and stimulating the interests of venturous junkies. While Reinarman and Levine are rather adamant about their findings, they do maintain an overt respect for the reality that crack has had other causal factors and outcomes besides those described by them. Their main concern seems to be calling for a more realistic spotlight to be placed upon the problem- so that we can begin to deal with it as no more and no less than what should be. The war on drugs is indeed based upon an exaggeration of facts. 
Although it is also evident that substances such as crack-cocaine may serve to pose great health risks to those that use them, there is not any widespread epidemic use of the drug nor any validity to the apparent myths that it causes such immediate devastation and is life-wrecking in every single case. It is obvious that we do indeed need to maintain a greater and more focused emphasis on the important and more widespread problems in society. Important energies and well-needed monies are being diverted from them to fight in an almost-imaginary battle against a controlled substance. Conclusively, we should allow drugs like crack-cocaine receive their due attention as social problems, but let them receive no more than that!
Drugs like crack-cocaine have received much more attention than is necessary.
e
id_3528
Is There Really a War on Drugs? In our contemporary society, the media constantly bombards us with horror stories about drugs like crack-cocaine. From them, and probably from no other source, we learn that crack is immediately addictive in every case, we learn that it causes corruption, crazed violence, and almost always leads to death. The government tells us that we are busy fighting a war on drugs and so it gives us various iconic models to despise and detest: we learn to stereotype inner-city minorities as being of drug-infested wastelands and we learn to witchhunt drug users within our own communities under the belief that they represent moral sin and pure evil. I believe that these titles and ideals are preposterous and based entirely upon unnecessary and even detrimental ideals promoted by the government to achieve purposes other than those they claim. In Craig Renarmans and Harry Levines article entitled The Crack Attack: Politics and Media in Americas Latest Drug Scare, the authors attempt to expose and to deal with some of the societal problems that have resulted from the over-exaggeration of crack-cocaine as an epidemic problem in our country. Without detracting attention away from the serious health risks for those few individuals who do use the drug, Renarman and Levine demonstrate how minimally detrimental the current epidemic actually is. Early in the article, the authors summarize crack-cocaines evolutionary history in the U. S. They specifically discuss how the crack-related deaths of two star-athletes which first called wide-spread attention to the problem during the mid-1980s. Since then, the government has reportedly used crack-cocaine as a political scapegoat for many of the nations larger inner-city problems. Thefts, violence, and even socioeconomic depression have been blamed on crack. They assert that the government has invested considerably in studies whose results could be used to wage the constant war on drugs while to politicians, that war has amounted to nothing more than a perceptual war on poverty and urban crime. Since politicians have had little else of marketable interest to debate over the years, this aggressive attack on drugs has existed as one of their only colorful means by which to create debate, controversy, and campaign fuel. In other words, when balancing the budget and maintaining an effective foreign policy became too boring to handle, Reinarman and Levine assert that the crack epidemic became the focus of politicians with the intent of luring public interest to their flashy anti-drug campaigns. Finally, in addition to the medias excess attention on the war against drugs, Reinarman and Levine make the point the constant coverage of crack in the news media has only been counterproductive to the alleged goals of any anti-drug program. With descriptions of the crack high that glorify it considerably- the politically-charged media campaigns to fight drugs have worked somewhat ironically as huge advertising campaigns for crack-increasing public awareness and stimulating the interests of venturous junkies. While Reinarman and Levine are rather adamant about their findings, they do maintain an overt respect for the reality that crack has had other causal factors and outcomes besides those described by them. Their main concern seems to be calling for a more realistic spotlight to be placed upon the problem- so that we can begin to deal with it as no more and no less than what should be. The war on drugs is indeed based upon an exaggeration of facts. 
Although it is also evident that substances such as crack-cocaine may serve to pose great health risks to those that use them, there is not any widespread epidemic use of the drug nor any validity to the apparent myths that it causes such immediate devastation and is life-wrecking in every single case. It is obvious that we do indeed need to maintain a greater and more focused emphasis on the important and more widespread problems in society. Important energies and well-needed monies are being diverted from them to fight in an almost-imaginary battle against a controlled substance. Conclusively, we should allow drugs like crack-cocaine receive their due attention as social problems, but let them receive no more than that!
We should spend more money and maintain a more focused emphasis on the important and more widespread problems in society rather than on an almost imaginary battle against drugs.
e
id_3529
Is Your Child at School Today? School Attendance Information for Parents/Carers Introduction Receiving a good full-time education will give your child the best possible start in life. Attending school regularly and punctually is essential if children are to make the most of the opportunities available to them. The law says that parents must ensure that their child regularly attends the school where he/she is registered. What you can do to help Make sure your child arrives at school on time. This encourages habits of good timekeeping and lessens any possible classroom disruption. If your child arrives after the register has closed without a good reason, this will be recorded as an unauthorised absence for that session. If your child has to miss school it is vital that you let the school know why, preferably on the first morning of absence. (Your childs school will have an attendance policy explaining how this should be done. ) If you know or think that your child is having difficulties attending school you should contact the school. It is better to do this sooner rather than later, as most problems can be dealt with very quickly. Authorised and Unauthorised Absence If your child is absent and the school either does not receive an explanation from you, or considers the explanation unsatisfactory, it will record your childs absence as unauthorised, that is, as truancy. Most absences for acceptable reasons will be authorised by your childs school: Sickness Unavoidable medical or dental appointments (if possible, arrange these for after school or during school holidays) An interview with a prospective employer or college Exceptional family circumstances, such as bereavement Days of religious observance. Your childs school will not authorise absence for the following reasons: Shopping during school hours Day trips Holidays which have not been agreed Birthdays Looking after brothers or sisters or ill relatives.
Children must go to the school where they are registered.
e
id_3530
Is Your Child at School Today? School Attendance Information for Parents/Carers Introduction Receiving a good full-time education will give your child the best possible start in life. Attending school regularly and punctually is essential if children are to make the most of the opportunities available to them. The law says that parents must ensure that their child regularly attends the school where he/she is registered. What you can do to help Make sure your child arrives at school on time. This encourages habits of good timekeeping and lessens any possible classroom disruption. If your child arrives after the register has closed without a good reason, this will be recorded as an unauthorised absence for that session. If your child has to miss school it is vital that you let the school know why, preferably on the first morning of absence. (Your childs school will have an attendance policy explaining how this should be done. ) If you know or think that your child is having difficulties attending school you should contact the school. It is better to do this sooner rather than later, as most problems can be dealt with very quickly. Authorised and Unauthorised Absence If your child is absent and the school either does not receive an explanation from you, or considers the explanation unsatisfactory, it will record your childs absence as unauthorised, that is, as truancy. Most absences for acceptable reasons will be authorised by your childs school: Sickness Unavoidable medical or dental appointments (if possible, arrange these for after school or during school holidays) An interview with a prospective employer or college Exceptional family circumstances, such as bereavement Days of religious observance. Your childs school will not authorise absence for the following reasons: Shopping during school hours Day trips Holidays which have not been agreed Birthdays Looking after brothers or sisters or ill relatives.
All arrivals after the register has closed are recorded as unauthorised absences.
c
id_3531
Is Your Child at School Today? School Attendance Information for Parents/Carers Introduction Receiving a good full-time education will give your child the best possible start in life. Attending school regularly and punctually is essential if children are to make the most of the opportunities available to them. The law says that parents must ensure that their child regularly attends the school where he/she is registered. What you can do to help Make sure your child arrives at school on time. This encourages habits of good timekeeping and lessens any possible classroom disruption. If your child arrives after the register has closed without a good reason, this will be recorded as an unauthorised absence for that session. If your child has to miss school it is vital that you let the school know why, preferably on the first morning of absence. (Your childs school will have an attendance policy explaining how this should be done. ) If you know or think that your child is having difficulties attending school you should contact the school. It is better to do this sooner rather than later, as most problems can be dealt with very quickly. Authorised and Unauthorised Absence If your child is absent and the school either does not receive an explanation from you, or considers the explanation unsatisfactory, it will record your childs absence as unauthorised, that is, as truancy. Most absences for acceptable reasons will be authorised by your childs school: Sickness Unavoidable medical or dental appointments (if possible, arrange these for after school or during school holidays) An interview with a prospective employer or college Exceptional family circumstances, such as bereavement Days of religious observance. Your childs school will not authorise absence for the following reasons: Shopping during school hours Day trips Holidays which have not been agreed Birthdays Looking after brothers or sisters or ill relatives.
If your child is absent from school, you must send the school a letter to explain why.
n
id_3532
Is Your Child at School Today? School Attendance Information for Parents/Carers Introduction Receiving a good full-time education will give your child the best possible start in life. Attending school regularly and punctually is essential if children are to make the most of the opportunities available to them. The law says that parents must ensure that their child regularly attends the school where he/she is registered. What you can do to help Make sure your child arrives at school on time. This encourages habits of good timekeeping and lessens any possible classroom disruption. If your child arrives after the register has closed without a good reason, this will be recorded as an unauthorised absence for that session. If your child has to miss school it is vital that you let the school know why, preferably on the first morning of absence. (Your childs school will have an attendance policy explaining how this should be done. ) If you know or think that your child is having difficulties attending school you should contact the school. It is better to do this sooner rather than later, as most problems can be dealt with very quickly. Authorised and Unauthorised Absence If your child is absent and the school either does not receive an explanation from you, or considers the explanation unsatisfactory, it will record your childs absence as unauthorised, that is, as truancy. Most absences for acceptable reasons will be authorised by your childs school: Sickness Unavoidable medical or dental appointments (if possible, arrange these for after school or during school holidays) An interview with a prospective employer or college Exceptional family circumstances, such as bereavement Days of religious observance. Your childs school will not authorise absence for the following reasons: Shopping during school hours Day trips Holidays which have not been agreed Birthdays Looking after brothers or sisters or ill relatives.
Staff who think a child is having difficulties at school will contact the parents.
n
id_3533
Is Your Child at School Today? School Attendance Information for Parents/Carers Introduction Receiving a good full-time education will give your child the best possible start in life. Attending school regularly and punctually is essential if children are to make the most of the opportunities available to them. The law says that parents must ensure that their child regularly attends the school where he/she is registered. What you can do to help Make sure your child arrives at school on time. This encourages habits of good timekeeping and lessens any possible classroom disruption. If your child arrives after the register has closed without a good reason, this will be recorded as an unauthorised absence for that session. If your child has to miss school it is vital that you let the school know why, preferably on the first morning of absence. (Your childs school will have an attendance policy explaining how this should be done. ) If you know or think that your child is having difficulties attending school you should contact the school. It is better to do this sooner rather than later, as most problems can be dealt with very quickly. Authorised and Unauthorised Absence If your child is absent and the school either does not receive an explanation from you, or considers the explanation unsatisfactory, it will record your childs absence as unauthorised, that is, as truancy. Most absences for acceptable reasons will be authorised by your childs school: Sickness Unavoidable medical or dental appointments (if possible, arrange these for after school or during school holidays) An interview with a prospective employer or college Exceptional family circumstances, such as bereavement Days of religious observance. Your childs school will not authorise absence for the following reasons: Shopping during school hours Day trips Holidays which have not been agreed Birthdays Looking after brothers or sisters or ill relatives.
Schools will contact other authorities about children who take frequent unauthorised absences.
n
id_3534
Is free internet access as much a universal human right as access to clean water and healthcare? Many leading experts believe that the 80% of the worlds population that is not connected to the web should have access to information through free low-bandwidth connection via mobile phones. The one fifth of the world connected to the internet, however, faces a very different problem: an insatiable appetite for bandwidth that outstrips availability. Bandwidth refers to the capacity to transfer data through a channel. Emails, for example, require less bandwidth than video. Information traffic jams result when too many users try to move information at the same time, exceeding the channels capacity. The popularity of mobile web devices means demand for wireless channels is growing rapidly, but bandwidth supply is limited resulting in high charges for use. With bandwidth controlled by a handful of private suppliers, bandwidth is the subject of government debate in many countries, including the United States. Bandwidth suppliers are in favour of introducing tiered pricing structures, whereby customers paying higher rates would receive faster service. Critics believe that a tiered system violates the principle of net neutrality whereby all data is treated as equal and would allow suppliers to profiteer from controlling a scarce resource. Suppliers argue that they are funding huge infrastructure updates such as switching from copper wires to expensive fiberoptics in order to improve services.
Proponents of net neutrality are against the prioritising of certain web traffic.
e
id_3535
Is free internet access as much a universal human right as access to clean water and healthcare? Many leading experts believe that the 80% of the worlds population that is not connected to the web should have access to information through free low-bandwidth connection via mobile phones. The one fifth of the world connected to the internet, however, faces a very different problem: an insatiable appetite for bandwidth that outstrips availability. Bandwidth refers to the capacity to transfer data through a channel. Emails, for example, require less bandwidth than video. Information traffic jams result when too many users try to move information at the same time, exceeding the channels capacity. The popularity of mobile web devices means demand for wireless channels is growing rapidly, but bandwidth supply is limited resulting in high charges for use. With bandwidth controlled by a handful of private suppliers, bandwidth is the subject of government debate in many countries, including the United States. Bandwidth suppliers are in favour of introducing tiered pricing structures, whereby customers paying higher rates would receive faster service. Critics believe that a tiered system violates the principle of net neutrality whereby all data is treated as equal and would allow suppliers to profiteer from controlling a scarce resource. Suppliers argue that they are funding huge infrastructure updates such as switching from copper wires to expensive fiberoptics in order to improve services.
Proposed tiered pricing structures would charge users more for using mobile web devices.
n
id_3536
Is free internet access as much a universal human right as access to clean water and healthcare? Many leading experts believe that the 80% of the worlds population that is not connected to the web should have access to information through free low-bandwidth connection via mobile phones. The one fifth of the world connected to the internet, however, faces a very different problem: an insatiable appetite for bandwidth that outstrips availability. Bandwidth refers to the capacity to transfer data through a channel. Emails, for example, require less bandwidth than video. Information traffic jams result when too many users try to move information at the same time, exceeding the channels capacity. The popularity of mobile web devices means demand for wireless channels is growing rapidly, but bandwidth supply is limited resulting in high charges for use. With bandwidth controlled by a handful of private suppliers, bandwidth is the subject of government debate in many countries, including the United States. Bandwidth suppliers are in favour of introducing tiered pricing structures, whereby customers paying higher rates would receive faster service. Critics believe that a tiered system violates the principle of net neutrality whereby all data is treated as equal and would allow suppliers to profiteer from controlling a scarce resource. Suppliers argue that they are funding huge infrastructure updates such as switching from copper wires to expensive fiberoptics in order to improve services.
The growth of mobile net device use has contributed towards the pressure on bandwidth availability.
e
id_3537
Is free internet access as much a universal human right as access to clean water and healthcare? Many leading experts believe that the 80% of the worlds population that is not connected to the web should have access to information through free low-bandwidth connection via mobile phones. The one fifth of the world connected to the internet, however, faces a very different problem: an insatiable appetite for bandwidth that outstrips availability. Bandwidth refers to the capacity to transfer data through a channel. Emails, for example, require less bandwidth than video. Information traffic jams result when too many users try to move information at the same time, exceeding the channels capacity. The popularity of mobile web devices means demand for wireless channels is growing rapidly, but bandwidth supply is limited resulting in high charges for use. With bandwidth controlled by a handful of private suppliers, bandwidth is the subject of government debate in many countries, including the United States. Bandwidth suppliers are in favour of introducing tiered pricing structures, whereby customers paying higher rates would receive faster service. Critics believe that a tiered system violates the principle of net neutrality whereby all data is treated as equal and would allow suppliers to profiteer from controlling a scarce resource. Suppliers argue that they are funding huge infrastructure updates such as switching from copper wires to expensive fiberoptics in order to improve services.
Access to information via the internet is a basic human right.
n
id_3538
Is free internet access as much a universal human right as access to clean water and healthcare? Many leading experts believe that the 80% of the worlds population that is not connected to the web should have access to information through free low-bandwidth connection via mobile phones. The one fifth of the world connected to the internet, however, faces a very different problem: an insatiable appetite for bandwidth that outstrips availability. Bandwidth refers to the capacity to transfer data through a channel. Emails, for example, require less bandwidth than video. Information traffic jams result when too many users try to move information at the same time, exceeding the channels capacity. The popularity of mobile web devices means demand for wireless channels is growing rapidly, but bandwidth supply is limited resulting in high charges for use. With bandwidth controlled by a handful of private suppliers, bandwidth is the subject of government debate in many countries, including the United States. Bandwidth suppliers are in favour of introducing tiered pricing structures, whereby customers paying higher rates would receive faster service. Critics believe that a tiered system violates the principle of net neutrality whereby all data is treated as equal and would allow suppliers to profiteer from controlling a scarce resource. Suppliers argue that they are funding huge infrastructure updates such as switching from copper wires to expensive fiberoptics in order to improve services.
The main argument in the passage is that internet users are not leaving enough bandwidth for 80% of the world's population.
c
id_3539
Is it any wonder that there are teacher shortages? Daily, the press carries reports of schools going on four-day weeks simply because they cannot recruit enough teachers. But why? There is no straightforward answer. For a start, fewer students are entering teacher-training courses when they leave school. But can you blame young people after the barracking faced by the teaching profession in the UK over the last decade? The attack, relentless in the extreme, has been on several fronts. Government inspectors, by accident or design, have been feeding the media a constant stream of negative information about the teaching establishments in this country. Teachers also come in for a lot of flak from politicians. And the government wonders why there are problems in schools. The governments obvious contempt for the teaching profession was recently revealed by one of the most powerful people in government when she referred to schools as bog standard comprehensives. Hardly the sort of comment to inspire parents or careers advisers seeking to direct young peoples future. Would you want to spend your working life in a dead-end profession? The government doesnt seem to want you to either. On the administrative side, most teachers are weighed down by an increasing flow of bureaucracy. Cynicism would have me believe that this stops teachers from fomenting dissent as they are worn out by useless administrative exercises. Most teachers must then also be cynics! Teacher bashing has, unfortunately, spread to youngsters in schools as the recent catalogue of physical attacks on teachers will testify. If grown-ups have no respect for the teaching profession, young people can hardly be expected to think any differently. The circle is then squared when, as well as experienced, competent teachers being driven out of the profession by the increased pressure and stress; fewer students are applying for teacher-training courses. Increased salaries are certainly welcome, but they are not the complete answer to a sector in crisis. Addressing the standing of the profession in the eyes of the public is crucial to encourage experienced teachers to remain in the classroom and to make it an attractive career option for potential teachers once again. It might also be a good idea for the relevant ministers to go on a fact-finding mission and find out from teachers in schools, rather than relying overmuch on advisers, as to what changes could be brought about to improve the quality of the education service. Initiatives in the educational field surprisingly come from either politicians who know little about classroom practice or educational theorists who know even less, but are more dangerous because they work in the rarefied air of universities largely ignorant of classroom practice. Making sure that nobody without recent classroom experience is employed as a teacher-trainer at any tertiary institution would further enhance the teaching profession. If someone does not have practical experience in the classroom, they cannot in all seriousness propound theories about it. Instead of being given sabbaticals to write books or papers, lecturers in teacher-training establishments should be made to spend a year at the blackboard or, these days, the whiteboard. This would give them practical insights into current classroom practice. Student teachers could then be given the chance to come and watch the specialists in the classroom: a much more worthwhile experience than the latter sitting thinking up ideas far removed from the classroom. 
Then we would have fewer initiatives like the recent government proposal to teach thinking in school. Prima facie, this is a laudable recommendation. But, as any practising teacher will tell you, this is done in every class. Perhaps someone needs to point out to the academic who thought up the scheme that the wheel has been around for some time. In the educational field, there is surprisingly constant tension between the educational theorists and government officials on the one hand, who would like to see teachers marching in unison to some greater Utopian abstraction and, on the other, practising teachers. Any experienced classroom practitioner knows that the series of initiatives on teaching and learning that successive governments have tried to foist on schools and colleges do not work.
The government's attitude with regard to teachers is of great interest to the general public.
n
id_3540
Is it any wonder that there are teacher shortages? Daily, the press carries reports of schools going on four-day weeks simply because they cannot recruit enough teachers. But why? There is no straightforward answer. For a start, fewer students are entering teacher-training courses when they leave school. But can you blame young people after the barracking faced by the teaching profession in the UK over the last decade? The attack, relentless in the extreme, has been on several fronts. Government inspectors, by accident or design, have been feeding the media a constant stream of negative information about the teaching establishments in this country. Teachers also come in for a lot of flak from politicians. And the government wonders why there are problems in schools. The governments obvious contempt for the teaching profession was recently revealed by one of the most powerful people in government when she referred to schools as bog standard comprehensives. Hardly the sort of comment to inspire parents or careers advisers seeking to direct young peoples future. Would you want to spend your working life in a dead-end profession? The government doesnt seem to want you to either. On the administrative side, most teachers are weighed down by an increasing flow of bureaucracy. Cynicism would have me believe that this stops teachers from fomenting dissent as they are worn out by useless administrative exercises. Most teachers must then also be cynics! Teacher bashing has, unfortunately, spread to youngsters in schools as the recent catalogue of physical attacks on teachers will testify. If grown-ups have no respect for the teaching profession, young people can hardly be expected to think any differently. The circle is then squared when, as well as experienced, competent teachers being driven out of the profession by the increased pressure and stress; fewer students are applying for teacher-training courses. Increased salaries are certainly welcome, but they are not the complete answer to a sector in crisis. Addressing the standing of the profession in the eyes of the public is crucial to encourage experienced teachers to remain in the classroom and to make it an attractive career option for potential teachers once again. It might also be a good idea for the relevant ministers to go on a fact-finding mission and find out from teachers in schools, rather than relying overmuch on advisers, as to what changes could be brought about to improve the quality of the education service. Initiatives in the educational field surprisingly come from either politicians who know little about classroom practice or educational theorists who know even less, but are more dangerous because they work in the rarefied air of universities largely ignorant of classroom practice. Making sure that nobody without recent classroom experience is employed as a teacher-trainer at any tertiary institution would further enhance the teaching profession. If someone does not have practical experience in the classroom, they cannot in all seriousness propound theories about it. Instead of being given sabbaticals to write books or papers, lecturers in teacher-training establishments should be made to spend a year at the blackboard or, these days, the whiteboard. This would give them practical insights into current classroom practice. Student teachers could then be given the chance to come and watch the specialists in the classroom: a much more worthwhile experience than the latter sitting thinking up ideas far removed from the classroom. 
Then we would have fewer initiatives like the recent government proposal to teach thinking in school. Prima facie, this is a laudable recommendation. But, as any practising teacher will tell you, this is done in every class. Perhaps someone needs to point out to the academic who thought up the scheme that the wheel has been around for some time. In the educational field, there is surprisingly constant tension between the educational theorists and government officials on the one hand, who would like to see teachers marching in unison to some greater Utopian abstraction and, on the other, practising teachers. Any experienced classroom practitioner knows that the series of initiatives on teaching and learning that successive governments have tried to foist on schools and colleges do not work.
More students are entering teacher-training courses.
c
id_3541
Is it any wonder that there are teacher shortages? Daily, the press carries reports of schools going on four-day weeks simply because they cannot recruit enough teachers. But why? There is no straightforward answer. For a start, fewer students are entering teacher-training courses when they leave school. But can you blame young people after the barracking faced by the teaching profession in the UK over the last decade? The attack, relentless in the extreme, has been on several fronts. Government inspectors, by accident or design, have been feeding the media a constant stream of negative information about the teaching establishments in this country. Teachers also come in for a lot of flak from politicians. And the government wonders why there are problems in schools. The governments obvious contempt for the teaching profession was recently revealed by one of the most powerful people in government when she referred to schools as bog standard comprehensives. Hardly the sort of comment to inspire parents or careers advisers seeking to direct young peoples future. Would you want to spend your working life in a dead-end profession? The government doesnt seem to want you to either. On the administrative side, most teachers are weighed down by an increasing flow of bureaucracy. Cynicism would have me believe that this stops teachers from fomenting dissent as they are worn out by useless administrative exercises. Most teachers must then also be cynics! Teacher bashing has, unfortunately, spread to youngsters in schools as the recent catalogue of physical attacks on teachers will testify. If grown-ups have no respect for the teaching profession, young people can hardly be expected to think any differently. The circle is then squared when, as well as experienced, competent teachers being driven out of the profession by the increased pressure and stress; fewer students are applying for teacher-training courses. Increased salaries are certainly welcome, but they are not the complete answer to a sector in crisis. Addressing the standing of the profession in the eyes of the public is crucial to encourage experienced teachers to remain in the classroom and to make it an attractive career option for potential teachers once again. It might also be a good idea for the relevant ministers to go on a fact-finding mission and find out from teachers in schools, rather than relying overmuch on advisers, as to what changes could be brought about to improve the quality of the education service. Initiatives in the educational field surprisingly come from either politicians who know little about classroom practice or educational theorists who know even less, but are more dangerous because they work in the rarefied air of universities largely ignorant of classroom practice. Making sure that nobody without recent classroom experience is employed as a teacher-trainer at any tertiary institution would further enhance the teaching profession. If someone does not have practical experience in the classroom, they cannot in all seriousness propound theories about it. Instead of being given sabbaticals to write books or papers, lecturers in teacher-training establishments should be made to spend a year at the blackboard or, these days, the whiteboard. This would give them practical insights into current classroom practice. Student teachers could then be given the chance to come and watch the specialists in the classroom: a much more worthwhile experience than the latter sitting thinking up ideas far removed from the classroom. 
Then we would have fewer initiatives like the recent government proposal to teach thinking in school. Prima facie, this is a laudable recommendation. But, as any practising teacher will tell you, this is done in every class. Perhaps someone needs to point out to the academic who thought up the scheme that the wheel has been around for some time. In the educational field, there is surprisingly constant tension between the educational theorists and government officials on the one hand, who would like to see teachers marching in unison to some greater Utopian abstraction and, on the other, practising teachers. Any experienced classroom practitioner knows that the series of initiatives on teaching and learning that successive governments have tried to foist on schools and colleges do not work.
The government is right to be surprised that there are problems in schools.
c
id_3542
Is it any wonder that there are teacher shortages? Daily, the press carries reports of schools going on four-day weeks simply because they cannot recruit enough teachers. But why? There is no straightforward answer. For a start, fewer students are entering teacher-training courses when they leave school. But can you blame young people after the barracking faced by the teaching profession in the UK over the last decade? The attack, relentless in the extreme, has been on several fronts. Government inspectors, by accident or design, have been feeding the media a constant stream of negative information about the teaching establishments in this country. Teachers also come in for a lot of flak from politicians. And the government wonders why there are problems in schools. The governments obvious contempt for the teaching profession was recently revealed by one of the most powerful people in government when she referred to schools as bog standard comprehensives. Hardly the sort of comment to inspire parents or careers advisers seeking to direct young peoples future. Would you want to spend your working life in a dead-end profession? The government doesnt seem to want you to either. On the administrative side, most teachers are weighed down by an increasing flow of bureaucracy. Cynicism would have me believe that this stops teachers from fomenting dissent as they are worn out by useless administrative exercises. Most teachers must then also be cynics! Teacher bashing has, unfortunately, spread to youngsters in schools as the recent catalogue of physical attacks on teachers will testify. If grown-ups have no respect for the teaching profession, young people can hardly be expected to think any differently. The circle is then squared when, as well as experienced, competent teachers being driven out of the profession by the increased pressure and stress; fewer students are applying for teacher-training courses. Increased salaries are certainly welcome, but they are not the complete answer to a sector in crisis. Addressing the standing of the profession in the eyes of the public is crucial to encourage experienced teachers to remain in the classroom and to make it an attractive career option for potential teachers once again. It might also be a good idea for the relevant ministers to go on a fact-finding mission and find out from teachers in schools, rather than relying overmuch on advisers, as to what changes could be brought about to improve the quality of the education service. Initiatives in the educational field surprisingly come from either politicians who know little about classroom practice or educational theorists who know even less, but are more dangerous because they work in the rarefied air of universities largely ignorant of classroom practice. Making sure that nobody without recent classroom experience is employed as a teacher-trainer at any tertiary institution would further enhance the teaching profession. If someone does not have practical experience in the classroom, they cannot in all seriousness propound theories about it. Instead of being given sabbaticals to write books or papers, lecturers in teacher-training establishments should be made to spend a year at the blackboard or, these days, the whiteboard. This would give them practical insights into current classroom practice. Student teachers could then be given the chance to come and watch the specialists in the classroom: a much more worthwhile experience than the latter sitting thinking up ideas far removed from the classroom. 
Then we would have fewer initiatives like the recent government proposal to teach thinking in school. Prima facie, this is a laudable recommendation. But, as any practising teacher will tell you, this is done in every class. Perhaps someone needs to point out to the academic who thought up the scheme that the wheel has been around for some time. In the educational field, there is surprisingly constant tension between the educational theorists and government officials on the one hand, who would like to see teachers marching in unison to some greater Utopian abstraction and, on the other, practising teachers. Any experienced classroom practitioner knows that the series of initiatives on teaching and learning that successive governments have tried to foist on schools and colleges do not work.
Teachers are too weighed down by administrative duties to stir up trouble.
n
id_3543
Is it any wonder that there are teacher shortages? Daily, the press carries reports of schools going on four-day weeks simply because they cannot recruit enough teachers. But why? There is no straightforward answer. For a start, fewer students are entering teacher-training courses when they leave school. But can you blame young people after the barracking faced by the teaching profession in the UK over the last decade? The attack, relentless in the extreme, has been on several fronts. Government inspectors, by accident or design, have been feeding the media a constant stream of negative information about the teaching establishments in this country. Teachers also come in for a lot of flak from politicians. And the government wonders why there are problems in schools. The governments obvious contempt for the teaching profession was recently revealed by one of the most powerful people in government when she referred to schools as bog standard comprehensives. Hardly the sort of comment to inspire parents or careers advisers seeking to direct young peoples future. Would you want to spend your working life in a dead-end profession? The government doesnt seem to want you to either. On the administrative side, most teachers are weighed down by an increasing flow of bureaucracy. Cynicism would have me believe that this stops teachers from fomenting dissent as they are worn out by useless administrative exercises. Most teachers must then also be cynics! Teacher bashing has, unfortunately, spread to youngsters in schools as the recent catalogue of physical attacks on teachers will testify. If grown-ups have no respect for the teaching profession, young people can hardly be expected to think any differently. The circle is then squared when, as well as experienced, competent teachers being driven out of the profession by the increased pressure and stress; fewer students are applying for teacher-training courses. Increased salaries are certainly welcome, but they are not the complete answer to a sector in crisis. Addressing the standing of the profession in the eyes of the public is crucial to encourage experienced teachers to remain in the classroom and to make it an attractive career option for potential teachers once again. It might also be a good idea for the relevant ministers to go on a fact-finding mission and find out from teachers in schools, rather than relying overmuch on advisers, as to what changes could be brought about to improve the quality of the education service. Initiatives in the educational field surprisingly come from either politicians who know little about classroom practice or educational theorists who know even less, but are more dangerous because they work in the rarefied air of universities largely ignorant of classroom practice. Making sure that nobody without recent classroom experience is employed as a teacher-trainer at any tertiary institution would further enhance the teaching profession. If someone does not have practical experience in the classroom, they cannot in all seriousness propound theories about it. Instead of being given sabbaticals to write books or papers, lecturers in teacher-training establishments should be made to spend a year at the blackboard or, these days, the whiteboard. This would give them practical insights into current classroom practice. Student teachers could then be given the chance to come and watch the specialists in the classroom: a much more worthwhile experience than the latter sitting thinking up ideas far removed from the classroom. 
Then we would have fewer initiatives like the recent government proposal to teach thinking in school. Prima facie, this is a laudable recommendation. But, as any practising teacher will tell you, this is done in every class. Perhaps someone needs to point out to the academic who thought up the scheme that the wheel has been around for some time. In the educational field, there is surprisingly constant tension between the educational theorists and government officials on the one hand, who would like to see teachers marching in unison to some greater Utopian abstraction and, on the other, practising teachers. Any experienced classroom practitioner knows that the series of initiatives on teaching and learning that successive governments have tried to foist on schools and colleges do not work.
All teachers are cynics.
c
id_3544
Is it any wonder that there are teacher shortages? Daily, the press carries reports of schools going on four-day weeks simply because they cannot recruit enough teachers. But why? There is no straightforward answer. For a start, fewer students are entering teacher-training courses when they leave school. But can you blame young people after the barracking faced by the teaching profession in the UK over the last decade? The attack, relentless in the extreme, has been on several fronts. Government inspectors, by accident or design, have been feeding the media a constant stream of negative information about the teaching establishments in this country. Teachers also come in for a lot of flak from politicians. And the government wonders why there are problems in schools. The governments obvious contempt for the teaching profession was recently revealed by one of the most powerful people in government when she referred to schools as bog standard comprehensives. Hardly the sort of comment to inspire parents or careers advisers seeking to direct young peoples future. Would you want to spend your working life in a dead-end profession? The government doesnt seem to want you to either. On the administrative side, most teachers are weighed down by an increasing flow of bureaucracy. Cynicism would have me believe that this stops teachers from fomenting dissent as they are worn out by useless administrative exercises. Most teachers must then also be cynics! Teacher bashing has, unfortunately, spread to youngsters in schools as the recent catalogue of physical attacks on teachers will testify. If grown-ups have no respect for the teaching profession, young people can hardly be expected to think any differently. The circle is then squared when, as well as experienced, competent teachers being driven out of the profession by the increased pressure and stress; fewer students are applying for teacher-training courses. Increased salaries are certainly welcome, but they are not the complete answer to a sector in crisis. Addressing the standing of the profession in the eyes of the public is crucial to encourage experienced teachers to remain in the classroom and to make it an attractive career option for potential teachers once again. It might also be a good idea for the relevant ministers to go on a fact-finding mission and find out from teachers in schools, rather than relying overmuch on advisers, as to what changes could be brought about to improve the quality of the education service. Initiatives in the educational field surprisingly come from either politicians who know little about classroom practice or educational theorists who know even less, but are more dangerous because they work in the rarefied air of universities largely ignorant of classroom practice. Making sure that nobody without recent classroom experience is employed as a teacher-trainer at any tertiary institution would further enhance the teaching profession. If someone does not have practical experience in the classroom, they cannot in all seriousness propound theories about it. Instead of being given sabbaticals to write books or papers, lecturers in teacher-training establishments should be made to spend a year at the blackboard or, these days, the whiteboard. This would give them practical insights into current classroom practice. Student teachers could then be given the chance to come and watch the specialists in the classroom: a much more worthwhile experience than the latter sitting thinking up ideas far removed from the classroom. 
Then we would have fewer initiatives like the recent government proposal to teach thinking in school. Prima facie, this is a laudable recommendation. But, as any practising teacher will tell you, this is done in every class. Perhaps someone needs to point out to the academic who thought up the scheme that the wheel has been around for some time. In the educational field, there is surprisingly constant tension between the educational theorists and government officials on the one hand, who would like to see teachers marching in unison to some greater Utopian abstraction and, on the other, practising teachers. Any experienced classroom practitioner knows that the series of initiatives on teaching and learning that successive governments have tried to foist on schools and colleges do not work.
Any experienced classroom practitioner knows that the initiatives on teaching and learning that governments have tried to impose on schools do not work.
e
id_3545
Is it any wonder that there are teacher shortages? Daily, the press carries reports of schools going on four-day weeks simply because they cannot recruit enough teachers. But why? There is no straightforward answer. For a start, fewer students are entering teacher-training courses when they leave school. But can you blame young people after the barracking faced by the teaching profession in the UK over the last decade? The attack, relentless in the extreme, has been on several fronts. Government inspectors, by accident or design, have been feeding the media a constant stream of negative information about the teaching establishments in this country. Teachers also come in for a lot of flak from politicians. And the government wonders why there are problems in schools. The governments obvious contempt for the teaching profession was recently revealed by one of the most powerful people in government when she referred to schools as bog standard comprehensives. Hardly the sort of comment to inspire parents or careers advisers seeking to direct young peoples future. Would you want to spend your working life in a dead-end profession? The government doesnt seem to want you to either. On the administrative side, most teachers are weighed down by an increasing flow of bureaucracy. Cynicism would have me believe that this stops teachers from fomenting dissent as they are worn out by useless administrative exercises. Most teachers must then also be cynics! Teacher bashing has, unfortunately, spread to youngsters in schools as the recent catalogue of physical attacks on teachers will testify. If grown-ups have no respect for the teaching profession, young people can hardly be expected to think any differently. The circle is then squared when, as well as experienced, competent teachers being driven out of the profession by the increased pressure and stress; fewer students are applying for teacher-training courses. Increased salaries are certainly welcome, but they are not the complete answer to a sector in crisis. Addressing the standing of the profession in the eyes of the public is crucial to encourage experienced teachers to remain in the classroom and to make it an attractive career option for potential teachers once again. It might also be a good idea for the relevant ministers to go on a fact-finding mission and find out from teachers in schools, rather than relying overmuch on advisers, as to what changes could be brought about to improve the quality of the education service. Initiatives in the educational field surprisingly come from either politicians who know little about classroom practice or educational theorists who know even less, but are more dangerous because they work in the rarefied air of universities largely ignorant of classroom practice. Making sure that nobody without recent classroom experience is employed as a teacher-trainer at any tertiary institution would further enhance the teaching profession. If someone does not have practical experience in the classroom, they cannot in all seriousness propound theories about it. Instead of being given sabbaticals to write books or papers, lecturers in teacher-training establishments should be made to spend a year at the blackboard or, these days, the whiteboard. This would give them practical insights into current classroom practice. Student teachers could then be given the chance to come and watch the specialists in the classroom: a much more worthwhile experience than the latter sitting thinking up ideas far removed from the classroom. 
Then we would have fewer initiatives like the recent government proposal to teach thinking in school. Prima facie, this is a laudable recommendation. But, as any practising teacher will tell you, this is done in every class. Perhaps someone needs to point out to the academic who thought up the scheme that the wheel has been around for some time. In the educational field, there is surprisingly constant tension between the educational theorists and government officials on the one hand, who would like to see teachers marching in unison to some greater Utopian abstraction and, on the other, practising teachers. Any experienced classroom practitioner knows that the series of initiatives on teaching and learning that successive governments have tried to foist on schools and colleges do not work.
Politicians are not as dangerous as educational theorists, who know even less than the former about educational theory.
e
id_3546
Is there more to video games than people realize? Many people who spend a lot of time playing video games insist that they have helped them in areas like confidence-building, presentation skills and debating. Yet this way of thinking about video games can be found almost nowhere within the mainstream media, which still tend to treat games as an odd mix of the slightly menacing and the alien. This lack of awareness has become increasingly inappropriate, as video games and the culture that surrounds them have become very big business indeed. Recently, the British government released the Byron report into the effects of electronic media on children. Its conclusions set out a clear, rational basis for exploring the regulation of video games. The ensuing debate, however, has descended into the same old squabbling between partisan factions: the preachers of mental and moral decline, and the innovative game designers. In between are the gamers, busily buying and playing while nonsense is talked over their heads. Susan Greenfield, renowned neuroscientist, outlines her concerns in a new book. Every individuals mind is the product of a brain that has been personalized by the sum total of their experiences; with an increasing quantity of our experiences from very early childhood taking place on screen rather than in the world, there is potentially a profound shift in the way childrens minds work. She suggests that the fast-paced, second-hand experiences created by video games and the Internet may inculcate a worldview that is less empathetic, more risk-taking and less contemplative than what we tend to think of as healthy. Greenfields prose is full of mixed metaphors and self-contradictions and is perhaps the worst enemy of her attempts to persuade. This is unfortunate, because however much technophiles may snort, she is articulating widely held fears that have a basis in fact. Unlike even their immediate antecedents, the latest electronic media are at once domestic and work-related, their mobility blurring the boundaries between these spaces, and video games are at their forefront. A generational divide has opened that is in many ways more profound than the equivalent shifts associated with radio or television, more alienating for those unfamiliar with new technologies, more absorbing for those who are. So how do our lawmakers regulate something that is too fluid to be fully comprehended or controlled? Adam Martin, a lead programmer for an online games developer, says: Computer games teach and people dont even notice theyre being taught. But isnt the kind of learning that goes on in games rather narrow? A large part of the addictiveness of games does come from the fact that as you play you are mastering a set of challenges. But humanitys larger understanding of the world comes primarily through communication and experimentation, through answering the question What if? Games excel at teaching this too. Steven Johnsons thesis is not that electronic games constitute a great, popular art, but that the mean level of mass culture has been demanding steadily more intellectual engagement from consumers. Games, he points out, generate satisfaction via the complexity of their virtual worlds, not by their robotic predictability. Testing the nature and limits of the laws of such imaginary worlds has more in common with scientific methods than with a pointless addiction, while the complexity of the problems children encounter within games exceeds that of anything they might find at school. 
Greenfield argues that there are ways of thinking that playing video games simply cannot teach. She has a point. We should never forget, for instance, the unique ability of books to engage and expand the human imagination, and to give us the means of more fully expressing our situations in the world. Intriguingly, the video games industry is now growing in ways that have more in common with an old-fashioned world of companionable pastimes than with a cyber future of lonely, isolated obsessives. Games in which friends and relations gather round a console to compete at activities are growing in popularity. The agenda is increasingly being set by the concerns of mainstream consumers what they consider acceptable for their children, what they want to play at parties and across generations. These trends embody a familiar but important truth: games are human products, and lie within our control. This doesnt mean we yet control or understand them fully, but it should remind us that there is nothing inevitable or incomprehensible about them. No matter how deeply it may be felt, instinctive fear is an inappropriate response to technology of any kind. So far, the dire predictions many traditionalists have made about the death of old-fashioned narratives and imaginative thought at the hands of video games cannot be upheld. Television and cinema may be suffering, economically, at the hands of interactive media. But literacy standards have failed to decline. Young people still enjoy sport, going out and listening to music And most research including a recent $1.5m study funded by the US government suggests that even pre- teens are not in the habit of blurring game worlds and real worlds. The sheer pace and scale of the changes we face, however, leave little room for complacency. Richard Battle, a British writer and game researcher, says Times change: accept it; embrace it. Just as, today, we have no living memories of a time before radio, we will soon live in a world in which no one living experienced growing up without computers. It is for this reason that we must try to examine what we stand to lose and gain, before it is too late.
Susan Greenfield's way of writing has become more complex over the years.
n
id_3547
Is there more to video games than people realize? Many people who spend a lot of time playing video games insist that they have helped them in areas like confidence-building, presentation skills and debating. Yet this way of thinking about video games can be found almost nowhere within the mainstream media, which still tend to treat games as an odd mix of the slightly menacing and the alien. This lack of awareness has become increasingly inappropriate, as video games and the culture that surrounds them have become very big business indeed. Recently, the British government released the Byron report into the effects of electronic media on children. Its conclusions set out a clear, rational basis for exploring the regulation of video games. The ensuing debate, however, has descended into the same old squabbling between partisan factions: the preachers of mental and moral decline, and the innovative game designers. In between are the gamers, busily buying and playing while nonsense is talked over their heads. Susan Greenfield, renowned neuroscientist, outlines her concerns in a new book. Every individuals mind is the product of a brain that has been personalized by the sum total of their experiences; with an increasing quantity of our experiences from very early childhood taking place on screen rather than in the world, there is potentially a profound shift in the way childrens minds work. She suggests that the fast-paced, second-hand experiences created by video games and the Internet may inculcate a worldview that is less empathetic, more risk-taking and less contemplative than what we tend to think of as healthy. Greenfields prose is full of mixed metaphors and self-contradictions and is perhaps the worst enemy of her attempts to persuade. This is unfortunate, because however much technophiles may snort, she is articulating widely held fears that have a basis in fact. Unlike even their immediate antecedents, the latest electronic media are at once domestic and work-related, their mobility blurring the boundaries between these spaces, and video games are at their forefront. A generational divide has opened that is in many ways more profound than the equivalent shifts associated with radio or television, more alienating for those unfamiliar with new technologies, more absorbing for those who are. So how do our lawmakers regulate something that is too fluid to be fully comprehended or controlled? Adam Martin, a lead programmer for an online games developer, says: Computer games teach and people dont even notice theyre being taught. But isnt the kind of learning that goes on in games rather narrow? A large part of the addictiveness of games does come from the fact that as you play you are mastering a set of challenges. But humanitys larger understanding of the world comes primarily through communication and experimentation, through answering the question What if? Games excel at teaching this too. Steven Johnsons thesis is not that electronic games constitute a great, popular art, but that the mean level of mass culture has been demanding steadily more intellectual engagement from consumers. Games, he points out, generate satisfaction via the complexity of their virtual worlds, not by their robotic predictability. Testing the nature and limits of the laws of such imaginary worlds has more in common with scientific methods than with a pointless addiction, while the complexity of the problems children encounter within games exceeds that of anything they might find at school. 
Greenfield argues that there are ways of thinking that playing video games simply cannot teach. She has a point. We should never forget, for instance, the unique ability of books to engage and expand the human imagination, and to give us the means of more fully expressing our situations in the world. Intriguingly, the video games industry is now growing in ways that have more in common with an old-fashioned world of companionable pastimes than with a cyber future of lonely, isolated obsessives. Games in which friends and relations gather round a console to compete at activities are growing in popularity. The agenda is increasingly being set by the concerns of mainstream consumers what they consider acceptable for their children, what they want to play at parties and across generations. These trends embody a familiar but important truth: games are human products, and lie within our control. This doesnt mean we yet control or understand them fully, but it should remind us that there is nothing inevitable or incomprehensible about them. No matter how deeply it may be felt, instinctive fear is an inappropriate response to technology of any kind. So far, the dire predictions many traditionalists have made about the death of old-fashioned narratives and imaginative thought at the hands of video games cannot be upheld. Television and cinema may be suffering, economically, at the hands of interactive media. But literacy standards have failed to decline. Young people still enjoy sport, going out and listening to music And most research including a recent $1.5m study funded by the US government suggests that even pre- teens are not in the habit of blurring game worlds and real worlds. The sheer pace and scale of the changes we face, however, leave little room for complacency. Richard Battle, a British writer and game researcher, says Times change: accept it; embrace it. Just as, today, we have no living memories of a time before radio, we will soon live in a world in which no one living experienced growing up without computers. It is for this reason that we must try to examine what we stand to lose and gain, before it is too late.
Being afraid of technological advances is a justifiable reaction.
c
id_3548
Is there more to video games than people realize? Many people who spend a lot of time playing video games insist that they have helped them in areas like confidence-building, presentation skills and debating. Yet this way of thinking about video games can be found almost nowhere within the mainstream media, which still tend to treat games as an odd mix of the slightly menacing and the alien. This lack of awareness has become increasingly inappropriate, as video games and the culture that surrounds them have become very big business indeed. Recently, the British government released the Byron report into the effects of electronic media on children. Its conclusions set out a clear, rational basis for exploring the regulation of video games. The ensuing debate, however, has descended into the same old squabbling between partisan factions: the preachers of mental and moral decline, and the innovative game designers. In between are the gamers, busily buying and playing while nonsense is talked over their heads. Susan Greenfield, renowned neuroscientist, outlines her concerns in a new book. Every individuals mind is the product of a brain that has been personalized by the sum total of their experiences; with an increasing quantity of our experiences from very early childhood taking place on screen rather than in the world, there is potentially a profound shift in the way childrens minds work. She suggests that the fast-paced, second-hand experiences created by video games and the Internet may inculcate a worldview that is less empathetic, more risk-taking and less contemplative than what we tend to think of as healthy. Greenfields prose is full of mixed metaphors and self-contradictions and is perhaps the worst enemy of her attempts to persuade. This is unfortunate, because however much technophiles may snort, she is articulating widely held fears that have a basis in fact. Unlike even their immediate antecedents, the latest electronic media are at once domestic and work-related, their mobility blurring the boundaries between these spaces, and video games are at their forefront. A generational divide has opened that is in many ways more profound than the equivalent shifts associated with radio or television, more alienating for those unfamiliar with new technologies, more absorbing for those who are. So how do our lawmakers regulate something that is too fluid to be fully comprehended or controlled? Adam Martin, a lead programmer for an online games developer, says: Computer games teach and people dont even notice theyre being taught. But isnt the kind of learning that goes on in games rather narrow? A large part of the addictiveness of games does come from the fact that as you play you are mastering a set of challenges. But humanitys larger understanding of the world comes primarily through communication and experimentation, through answering the question What if? Games excel at teaching this too. Steven Johnsons thesis is not that electronic games constitute a great, popular art, but that the mean level of mass culture has been demanding steadily more intellectual engagement from consumers. Games, he points out, generate satisfaction via the complexity of their virtual worlds, not by their robotic predictability. Testing the nature and limits of the laws of such imaginary worlds has more in common with scientific methods than with a pointless addiction, while the complexity of the problems children encounter within games exceeds that of anything they might find at school. 
Greenfield argues that there are ways of thinking that playing video games simply cannot teach. She has a point. We should never forget, for instance, the unique ability of books to engage and expand the human imagination, and to give us the means of more fully expressing our situations in the world. Intriguingly, the video games industry is now growing in ways that have more in common with an old-fashioned world of companionable pastimes than with a cyber future of lonely, isolated obsessives. Games in which friends and relations gather round a console to compete at activities are growing in popularity. The agenda is increasingly being set by the concerns of mainstream consumers what they consider acceptable for their children, what they want to play at parties and across generations. These trends embody a familiar but important truth: games are human products, and lie within our control. This doesnt mean we yet control or understand them fully, but it should remind us that there is nothing inevitable or incomprehensible about them. No matter how deeply it may be felt, instinctive fear is an inappropriate response to technology of any kind. So far, the dire predictions many traditionalists have made about the death of old-fashioned narratives and imaginative thought at the hands of video games cannot be upheld. Television and cinema may be suffering, economically, at the hands of interactive media. But literacy standards have failed to decline. Young people still enjoy sport, going out and listening to music And most research including a recent $1.5m study funded by the US government suggests that even pre- teens are not in the habit of blurring game worlds and real worlds. The sheer pace and scale of the changes we face, however, leave little room for complacency. Richard Battle, a British writer and game researcher, says Times change: accept it; embrace it. Just as, today, we have no living memories of a time before radio, we will soon live in a world in which no one living experienced growing up without computers. It is for this reason that we must try to examine what we stand to lose and gain, before it is too late.
It is likely that video games will take over the role of certain kinds of books in the future.
n
id_3549
Is there more to video games than people realize? Many people who spend a lot of time playing video games insist that they have helped them in areas like confidence-building, presentation skills and debating. Yet this way of thinking about video games can be found almost nowhere within the mainstream media, which still tend to treat games as an odd mix of the slightly menacing and the alien. This lack of awareness has become increasingly inappropriate, as video games and the culture that surrounds them have become very big business indeed. Recently, the British government released the Byron report into the effects of electronic media on children. Its conclusions set out a clear, rational basis for exploring the regulation of video games. The ensuing debate, however, has descended into the same old squabbling between partisan factions: the preachers of mental and moral decline, and the innovative game designers. In between are the gamers, busily buying and playing while nonsense is talked over their heads. Susan Greenfield, renowned neuroscientist, outlines her concerns in a new book. Every individuals mind is the product of a brain that has been personalized by the sum total of their experiences; with an increasing quantity of our experiences from very early childhood taking place on screen rather than in the world, there is potentially a profound shift in the way childrens minds work. She suggests that the fast-paced, second-hand experiences created by video games and the Internet may inculcate a worldview that is less empathetic, more risk-taking and less contemplative than what we tend to think of as healthy. Greenfields prose is full of mixed metaphors and self-contradictions and is perhaps the worst enemy of her attempts to persuade. This is unfortunate, because however much technophiles may snort, she is articulating widely held fears that have a basis in fact. Unlike even their immediate antecedents, the latest electronic media are at once domestic and work-related, their mobility blurring the boundaries between these spaces, and video games are at their forefront. A generational divide has opened that is in many ways more profound than the equivalent shifts associated with radio or television, more alienating for those unfamiliar with new technologies, more absorbing for those who are. So how do our lawmakers regulate something that is too fluid to be fully comprehended or controlled? Adam Martin, a lead programmer for an online games developer, says: Computer games teach and people dont even notice theyre being taught. But isnt the kind of learning that goes on in games rather narrow? A large part of the addictiveness of games does come from the fact that as you play you are mastering a set of challenges. But humanitys larger understanding of the world comes primarily through communication and experimentation, through answering the question What if? Games excel at teaching this too. Steven Johnsons thesis is not that electronic games constitute a great, popular art, but that the mean level of mass culture has been demanding steadily more intellectual engagement from consumers. Games, he points out, generate satisfaction via the complexity of their virtual worlds, not by their robotic predictability. Testing the nature and limits of the laws of such imaginary worlds has more in common with scientific methods than with a pointless addiction, while the complexity of the problems children encounter within games exceeds that of anything they might find at school. 
Greenfield argues that there are ways of thinking that playing video games simply cannot teach. She has a point. We should never forget, for instance, the unique ability of books to engage and expand the human imagination, and to give us the means of more fully expressing our situations in the world. Intriguingly, the video games industry is now growing in ways that have more in common with an old-fashioned world of companionable pastimes than with a cyber future of lonely, isolated obsessives. Games in which friends and relations gather round a console to compete at activities are growing in popularity. The agenda is increasingly being set by the concerns of mainstream consumers what they consider acceptable for their children, what they want to play at parties and across generations. These trends embody a familiar but important truth: games are human products, and lie within our control. This doesnt mean we yet control or understand them fully, but it should remind us that there is nothing inevitable or incomprehensible about them. No matter how deeply it may be felt, instinctive fear is an inappropriate response to technology of any kind. So far, the dire predictions many traditionalists have made about the death of old-fashioned narratives and imaginative thought at the hands of video games cannot be upheld. Television and cinema may be suffering, economically, at the hands of interactive media. But literacy standards have failed to decline. Young people still enjoy sport, going out and listening to music And most research including a recent $1.5m study funded by the US government suggests that even pre- teens are not in the habit of blurring game worlds and real worlds. The sheer pace and scale of the changes we face, however, leave little room for complacency. Richard Battle, a British writer and game researcher, says Times change: accept it; embrace it. Just as, today, we have no living memories of a time before radio, we will soon live in a world in which no one living experienced growing up without computers. It is for this reason that we must try to examine what we stand to lose and gain, before it is too late.
More sociable games are being brought out to satisfy the demands of the buying public.
e
id_3550
Is there more to video games than people realize? Many people who spend a lot of time playing video games insist that they have helped them in areas like confidence-building, presentation skills and debating. Yet this way of thinking about video games can be found almost nowhere within the mainstream media, which still tend to treat games as an odd mix of the slightly menacing and the alien. This lack of awareness has become increasingly inappropriate, as video games and the culture that surrounds them have become very big business indeed. Recently, the British government released the Byron report into the effects of electronic media on children. Its conclusions set out a clear, rational basis for exploring the regulation of video games. The ensuing debate, however, has descended into the same old squabbling between partisan factions: the preachers of mental and moral decline, and the innovative game designers. In between are the gamers, busily buying and playing while nonsense is talked over their heads. Susan Greenfield, renowned neuroscientist, outlines her concerns in a new book. Every individual's mind is the product of a brain that has been personalized by the sum total of their experiences; with an increasing quantity of our experiences from very early childhood taking place on screen rather than in the world, there is potentially a profound shift in the way children's minds work. She suggests that the fast-paced, second-hand experiences created by video games and the Internet may inculcate a worldview that is less empathetic, more risk-taking and less contemplative than what we tend to think of as healthy. Greenfield's prose is full of mixed metaphors and self-contradictions and is perhaps the worst enemy of her attempts to persuade. This is unfortunate, because however much technophiles may snort, she is articulating widely held fears that have a basis in fact. Unlike even their immediate antecedents, the latest electronic media are at once domestic and work-related, their mobility blurring the boundaries between these spaces, and video games are at their forefront. A generational divide has opened that is in many ways more profound than the equivalent shifts associated with radio or television, more alienating for those unfamiliar with new technologies, more absorbing for those who are. So how do our lawmakers regulate something that is too fluid to be fully comprehended or controlled? Adam Martin, a lead programmer for an online games developer, says: "Computer games teach and people don't even notice they're being taught." But isn't the kind of learning that goes on in games rather narrow? A large part of the addictiveness of games does come from the fact that as you play you are mastering a set of challenges. But humanity's larger understanding of the world comes primarily through communication and experimentation, through answering the question "What if?" Games excel at teaching this too. Steven Johnson's thesis is not that electronic games constitute a great, popular art, but that the mean level of mass culture has been demanding steadily more intellectual engagement from consumers. Games, he points out, generate satisfaction via the complexity of their virtual worlds, not by their robotic predictability. Testing the nature and limits of the laws of such imaginary worlds has more in common with scientific methods than with a pointless addiction, while the complexity of the problems children encounter within games exceeds that of anything they might find at school.
Greenfield argues that there are ways of thinking that playing video games simply cannot teach. She has a point. We should never forget, for instance, the unique ability of books to engage and expand the human imagination, and to give us the means of more fully expressing our situations in the world. Intriguingly, the video games industry is now growing in ways that have more in common with an old-fashioned world of companionable pastimes than with a cyber future of lonely, isolated obsessives. Games in which friends and relations gather round a console to compete at activities are growing in popularity. The agenda is increasingly being set by the concerns of mainstream consumers: what they consider acceptable for their children, what they want to play at parties and across generations. These trends embody a familiar but important truth: games are human products, and lie within our control. This doesn't mean we yet control or understand them fully, but it should remind us that there is nothing inevitable or incomprehensible about them. No matter how deeply it may be felt, instinctive fear is an inappropriate response to technology of any kind. So far, the dire predictions many traditionalists have made about the death of old-fashioned narratives and imaginative thought at the hands of video games cannot be upheld. Television and cinema may be suffering, economically, at the hands of interactive media. But literacy standards have failed to decline. Young people still enjoy sport, going out and listening to music. And most research, including a recent $1.5m study funded by the US government, suggests that even pre-teens are not in the habit of blurring game worlds and real worlds. The sheer pace and scale of the changes we face, however, leave little room for complacency. Richard Battle, a British writer and game researcher, says: "Times change: accept it; embrace it." Just as, today, we have no living memories of a time before radio, we will soon live in a world in which no one living experienced growing up without computers. It is for this reason that we must try to examine what we stand to lose and gain, before it is too late.
Much media comment ignores the impact that video games can have on many people's lives.
e
id_3551
Is there more to video games than people realize? Many people who spend a lot of time playing video games insist that they have helped them in areas like confidence-building, presentation skills and debating. Yet this way of thinking about video games can be found almost nowhere within the mainstream media, which still tend to treat games as an odd mix of the slightly menacing and the alien. This lack of awareness has become increasingly inappropriate, as video games and the culture that surrounds them have become very big business indeed. Recently, the British government released the Byron report into the effects of electronic media on children. Its conclusions set out a clear, rational basis for exploring the regulation of video games. The ensuing debate, however, has descended into the same old squabbling between partisan factions: the preachers of mental and moral decline, and the innovative game designers. In between are the gamers, busily buying and playing while nonsense is talked over their heads. Susan Greenfield, renowned neuroscientist, outlines her concerns in a new book. Every individual's mind is the product of a brain that has been personalized by the sum total of their experiences; with an increasing quantity of our experiences from very early childhood taking place on screen rather than in the world, there is potentially a profound shift in the way children's minds work. She suggests that the fast-paced, second-hand experiences created by video games and the Internet may inculcate a worldview that is less empathetic, more risk-taking and less contemplative than what we tend to think of as healthy. Greenfield's prose is full of mixed metaphors and self-contradictions and is perhaps the worst enemy of her attempts to persuade. This is unfortunate, because however much technophiles may snort, she is articulating widely held fears that have a basis in fact. Unlike even their immediate antecedents, the latest electronic media are at once domestic and work-related, their mobility blurring the boundaries between these spaces, and video games are at their forefront. A generational divide has opened that is in many ways more profound than the equivalent shifts associated with radio or television, more alienating for those unfamiliar with new technologies, more absorbing for those who are. So how do our lawmakers regulate something that is too fluid to be fully comprehended or controlled? Adam Martin, a lead programmer for an online games developer, says: "Computer games teach and people don't even notice they're being taught." But isn't the kind of learning that goes on in games rather narrow? A large part of the addictiveness of games does come from the fact that as you play you are mastering a set of challenges. But humanity's larger understanding of the world comes primarily through communication and experimentation, through answering the question "What if?" Games excel at teaching this too. Steven Johnson's thesis is not that electronic games constitute a great, popular art, but that the mean level of mass culture has been demanding steadily more intellectual engagement from consumers. Games, he points out, generate satisfaction via the complexity of their virtual worlds, not by their robotic predictability. Testing the nature and limits of the laws of such imaginary worlds has more in common with scientific methods than with a pointless addiction, while the complexity of the problems children encounter within games exceeds that of anything they might find at school.
Greenfield argues that there are ways of thinking that playing video games simply cannot teach. She has a point. We should never forget, for instance, the unique ability of books to engage and expand the human imagination, and to give us the means of more fully expressing our situations in the world. Intriguingly, the video games industry is now growing in ways that have more in common with an old-fashioned world of companionable pastimes than with a cyber future of lonely, isolated obsessives. Games in which friends and relations gather round a console to compete at activities are growing in popularity. The agenda is increasingly being set by the concerns of mainstream consumers: what they consider acceptable for their children, what they want to play at parties and across generations. These trends embody a familiar but important truth: games are human products, and lie within our control. This doesn't mean we yet control or understand them fully, but it should remind us that there is nothing inevitable or incomprehensible about them. No matter how deeply it may be felt, instinctive fear is an inappropriate response to technology of any kind. So far, the dire predictions many traditionalists have made about the death of old-fashioned narratives and imaginative thought at the hands of video games cannot be upheld. Television and cinema may be suffering, economically, at the hands of interactive media. But literacy standards have failed to decline. Young people still enjoy sport, going out and listening to music. And most research, including a recent $1.5m study funded by the US government, suggests that even pre-teens are not in the habit of blurring game worlds and real worlds. The sheer pace and scale of the changes we face, however, leave little room for complacency. Richard Battle, a British writer and game researcher, says: "Times change: accept it; embrace it." Just as, today, we have no living memories of a time before radio, we will soon live in a world in which no one living experienced growing up without computers. It is for this reason that we must try to examine what we stand to lose and gain, before it is too late.
The publication of the Byron Report was followed by a worthwhile discussion between those for and against video games.
c
id_3552
Is there such a thing as Canadian English? If so, what is it? The standard stereotype among Americans is that Canadians are like Americans, except they say eh a lot and pronounce out and about as oot and aboot. Many Canadians, on the other hand, will tell you that Canadian English is more like British English, and as proof will hold aloft the spellings colour and centre and the name zed for the letter Z. Canadian does exist as a separate variety of British English, with subtly distinctive features of pronunciation and vocabulary. It has its own dictionaries; the Canadian Press has its own style guide; the Editors Association of Canada has just released a second edition of Editing Canadian English. But an emblematic feature of Editing Canadian English is comparison tables of American versus British spellings so the Canadian editor can come to a reasonable decision on which to use... on each occasion. The core of Canadian English is a pervasive ambivalence. Canadian history helps to explain this. In the beginning there were the indigenous people, with far more linguistic and cultural variety than Europe. Theyre still there, but Canadian English, like Canadian Anglophone society in general, gives them little more than desultory token nods. Fights between European settlers shaped Canadian English more. The French, starting in the 1600s, colonised the St Lawrence River region and the Atlantic coast south of it. In the mid-1700s, England got into a war with France, concluding with the Treaty of Paris in 1763, which ceded New France to England. The English allowed any French to stay who were willing to become subjects of the English King. At the time of the Treaty of Paris, however, there were very few English speakers in Canada. The American Revolution changed that. The founding English-speaking people of Canada were United Empire Loyalists people who fled American independence and were rewarded with land in Canada. Thus Canadian English was, from its very beginning, both American because its speakers had come from the American colonies and not American, because they rejected the newly independent nation. Just as the Americans sought to have a truly distinct, independent American version of English, the loyalists sought to remain more like England... sort of. These were people whose variety of English was already diverging from the British and vice versa: when the residents of London and its environs began to drop their rs and change some of their vowels people in certain parts of the United States adopted some of these changes, but Canadians did not.
The fifth paragraph states that many English-speaking countries adopted changes in pronunciation.
n
id_3553
Is there such a thing as Canadian English? If so, what is it? The standard stereotype among Americans is that Canadians are like Americans, except they say eh a lot and pronounce out and about as oot and aboot. Many Canadians, on the other hand, will tell you that Canadian English is more like British English, and as proof will hold aloft the spellings colour and centre and the name zed for the letter Z. Canadian does exist as a separate variety of British English, with subtly distinctive features of pronunciation and vocabulary. It has its own dictionaries; the Canadian Press has its own style guide; the Editors Association of Canada has just released a second edition of Editing Canadian English. But an emblematic feature of Editing Canadian English is comparison tables of American versus British spellings so the Canadian editor can come to a reasonable decision on which to use... on each occasion. The core of Canadian English is a pervasive ambivalence. Canadian history helps to explain this. In the beginning there were the indigenous people, with far more linguistic and cultural variety than Europe. Theyre still there, but Canadian English, like Canadian Anglophone society in general, gives them little more than desultory token nods. Fights between European settlers shaped Canadian English more. The French, starting in the 1600s, colonised the St Lawrence River region and the Atlantic coast south of it. In the mid-1700s, England got into a war with France, concluding with the Treaty of Paris in 1763, which ceded New France to England. The English allowed any French to stay who were willing to become subjects of the English King. At the time of the Treaty of Paris, however, there were very few English speakers in Canada. The American Revolution changed that. The founding English-speaking people of Canada were United Empire Loyalists people who fled American independence and were rewarded with land in Canada. Thus Canadian English was, from its very beginning, both American because its speakers had come from the American colonies and not American, because they rejected the newly independent nation. Just as the Americans sought to have a truly distinct, independent American version of English, the loyalists sought to remain more like England... sort of. These were people whose variety of English was already diverging from the British and vice versa: when the residents of London and its environs began to drop their rs and change some of their vowels people in certain parts of the United States adopted some of these changes, but Canadians did not.
Canadian English is considered neither American nor not American.
c
id_3554
Is there such a thing as Canadian English? If so, what is it? The standard stereotype among Americans is that Canadians are like Americans, except they say eh a lot and pronounce out and about as oot and aboot. Many Canadians, on the other hand, will tell you that Canadian English is more like British English, and as proof will hold aloft the spellings colour and centre and the name zed for the letter Z. Canadian does exist as a separate variety of British English, with subtly distinctive features of pronunciation and vocabulary. It has its own dictionaries; the Canadian Press has its own style guide; the Editors Association of Canada has just released a second edition of Editing Canadian English. But an emblematic feature of Editing Canadian English is comparison tables of American versus British spellings so the Canadian editor can come to a reasonable decision on which to use... on each occasion. The core of Canadian English is a pervasive ambivalence. Canadian history helps to explain this. In the beginning there were the indigenous people, with far more linguistic and cultural variety than Europe. Theyre still there, but Canadian English, like Canadian Anglophone society in general, gives them little more than desultory token nods. Fights between European settlers shaped Canadian English more. The French, starting in the 1600s, colonised the St Lawrence River region and the Atlantic coast south of it. In the mid-1700s, England got into a war with France, concluding with the Treaty of Paris in 1763, which ceded New France to England. The English allowed any French to stay who were willing to become subjects of the English King. At the time of the Treaty of Paris, however, there were very few English speakers in Canada. The American Revolution changed that. The founding English-speaking people of Canada were United Empire Loyalists people who fled American independence and were rewarded with land in Canada. Thus Canadian English was, from its very beginning, both American because its speakers had come from the American colonies and not American, because they rejected the newly independent nation. Just as the Americans sought to have a truly distinct, independent American version of English, the loyalists sought to remain more like England... sort of. These were people whose variety of English was already diverging from the British and vice versa: when the residents of London and its environs began to drop their rs and change some of their vowels people in certain parts of the United States adopted some of these changes, but Canadians did not.
The St Lawrence River was colonised by Canadians in 1600.
c
id_3555
Is there such a thing as Canadian English? If so, what is it? The standard stereotype among Americans is that Canadians are like Americans, except they say eh a lot and pronounce out and about as oot and aboot. Many Canadians, on the other hand, will tell you that Canadian English is more like British English, and as proof will hold aloft the spellings colour and centre and the name zed for the letter Z. Canadian does exist as a separate variety of British English, with subtly distinctive features of pronunciation and vocabulary. It has its own dictionaries; the Canadian Press has its own style guide; the Editors Association of Canada has just released a second edition of Editing Canadian English. But an emblematic feature of Editing Canadian English is comparison tables of American versus British spellings so the Canadian editor can come to a reasonable decision on which to use... on each occasion. The core of Canadian English is a pervasive ambivalence. Canadian history helps to explain this. In the beginning there were the indigenous people, with far more linguistic and cultural variety than Europe. Theyre still there, but Canadian English, like Canadian Anglophone society in general, gives them little more than desultory token nods. Fights between European settlers shaped Canadian English more. The French, starting in the 1600s, colonised the St Lawrence River region and the Atlantic coast south of it. In the mid-1700s, England got into a war with France, concluding with the Treaty of Paris in 1763, which ceded New France to England. The English allowed any French to stay who were willing to become subjects of the English King. At the time of the Treaty of Paris, however, there were very few English speakers in Canada. The American Revolution changed that. The founding English-speaking people of Canada were United Empire Loyalists people who fled American independence and were rewarded with land in Canada. Thus Canadian English was, from its very beginning, both American because its speakers had come from the American colonies and not American, because they rejected the newly independent nation. Just as the Americans sought to have a truly distinct, independent American version of English, the loyalists sought to remain more like England... sort of. These were people whose variety of English was already diverging from the British and vice versa: when the residents of London and its environs began to drop their rs and change some of their vowels people in certain parts of the United States adopted some of these changes, but Canadians did not.
According to the second paragraph, Canadian English is pretty similar to British English, with some minor differences.
e
id_3556
Is there such a thing as Canadian English? If so, what is it? The standard stereotype among Americans is that Canadians are like Americans, except they say eh a lot and pronounce out and about as oot and aboot. Many Canadians, on the other hand, will tell you that Canadian English is more like British English, and as proof will hold aloft the spellings colour and centre and the name zed for the letter Z. Canadian does exist as a separate variety of British English, with subtly distinctive features of pronunciation and vocabulary. It has its own dictionaries; the Canadian Press has its own style guide; the Editors Association of Canada has just released a second edition of Editing Canadian English. But an emblematic feature of Editing Canadian English is comparison tables of American versus British spellings so the Canadian editor can come to a reasonable decision on which to use... on each occasion. The core of Canadian English is a pervasive ambivalence. Canadian history helps to explain this. In the beginning there were the indigenous people, with far more linguistic and cultural variety than Europe. Theyre still there, but Canadian English, like Canadian Anglophone society in general, gives them little more than desultory token nods. Fights between European settlers shaped Canadian English more. The French, starting in the 1600s, colonised the St Lawrence River region and the Atlantic coast south of it. In the mid-1700s, England got into a war with France, concluding with the Treaty of Paris in 1763, which ceded New France to England. The English allowed any French to stay who were willing to become subjects of the English King. At the time of the Treaty of Paris, however, there were very few English speakers in Canada. The American Revolution changed that. The founding English-speaking people of Canada were United Empire Loyalists people who fled American independence and were rewarded with land in Canada. Thus Canadian English was, from its very beginning, both American because its speakers had come from the American colonies and not American, because they rejected the newly independent nation. Just as the Americans sought to have a truly distinct, independent American version of English, the loyalists sought to remain more like England... sort of. These were people whose variety of English was already diverging from the British and vice versa: when the residents of London and its environs began to drop their rs and change some of their vowels people in certain parts of the United States adopted some of these changes, but Canadians did not.
Canadian English is considered more like British English by Canadians.
e
id_3557
Isambard Kingdom Brunel. Isambard Kingdom Brunel's name comes from his civil engineer father, a Normandy refugee from the French Revolution. His English mother, Sophia Kingdom, gives birth to their only son on 9 April 1806. Marc Isambard Brunel isn't good with money, but he is a great engineer and a great teacher to his son. Marc sends him to boarding school and then to France. This, along with some ill advised projects, proves financially unsustainable and both his parents spend three months in a debtor's prison. On top of this, Brunel's refused entry into a renowned French engineering school because despite his French father, he's considered a foreigner. But the British government recognise his father's engineering potential and release him from his debts and jail. His son returns to England and still just a teenager, Brunel becomes chief assistant engineer on his father's project to create a tunnel under the Thames. This 1,200ft (365m) long tunnel was to be their first, and last, project together. Despite Marc inventing a tunnelling shield that protects workers as they progress, work is still extremely hazardous. Breaches and collapses often halt the project. And on one Saturday, on 12 January 1828, the tunnel floods. Six men are swept to their deaths in a tidal wave of sewage, debris and water. The 22 year old Brunel should have joined them. But his assistant manages to pull Brunel's unconscious body from the water. It takes several months to recover, but as he does, he devises his most memorable design, the Bristol Clifton Suspension Bridge. He will build over a hundred bridges but this is the one that history will remember. At the time of building, it's the longest bridge in the world. It spans a 250ft (76m) deep gorge with sheer rock either side. To transport materials across, a 1,000 ft (305m) iron bar is suspended between the two ends and a man sized basket is pulled back and forth. The first man to test it is Brunel himself. But as with the Thames Tunnel, work is halted; this time because of riots. Britain's ruling classes are trying their best to withstand the rise of the working and middle classes. As the riots end, so does investor's interest in the bridge. Brunel will also redesign Bristol harbour, but he'll never live to see the completion of his greatest achievement. In 1833, Brunel is appointed chief engineer of the Great Western Railway and he starts connecting the South West of England with London. In total, he'll build 25 railway lines. With this one, he hopes to reduce journey time down to four hours, a full 13 hours quicker than the even the mail coach can achieve. Brunel's budget is 2 million. He will spend six. His design vision is total. No detail is too small. Everything from the lampposts, stations, locomotives, carriages and even the width of the track are re-designed. Brunel wants to bring not just speed, but comfort to the travelling public. By doubling the width of the track, he can do that. But Brunel's 'broad gauge' is a rejection of the gauge advocated by the other great man of rail, George Stephenson. And existing track owners are understandably resistant. And then there are countless landowners who oppose his plans to carve through the countryside. One of his first obstacles is a flood plain 11 miles west of London. The easiest engineering solution would have added just a few seconds to the journey time. Brunel's solution is the Wharncliffe Viaduct. Its 900ft (274m) length and eight arches shave those seconds off. 
Other remarkable innovations include building a bridge over the Thames at Maidenhead that is still the widest, flattest brick arch bridge in the world. And his constructions were costly, both in terms of men, as much as money. His Box Tunnel, then the longest railway tunnel in existence, took five years and 4,000 men with dynamite to build it. In percentage terms, you were more likely to die building the Box Hill tunnel than in the trenches of the First World War. In 1835, the year before his marriage, Brunel had offered his services free to the Great Western Steamship Company believing that steam powered ships could cross the Atlantic. It was to complete his vision of a passenger being able to buy one ticket that would get them from London to New York. At the time, Brunel had never designed a ship. And the Atlantic had only ever been crossed under sail. Many thought a steam powered boat would take so much coal to power there wouldn't be room for paying passengers or commercial cargo. When a rival attempted the journey, the crew had to burn cabin furniture to complete the journey. But Brunel calculates that a ship twice the size of a 100ft (30.5m) ship won't require twice as much coal to fuel it. Work commences on the 2,300 ton behemoth, the Great Western. Brunel is badly burnt during an engine fire on her launch but, in 1838, the longest ship in the world sets sail for New York. Fifteen days later she arrives. And she has a third of her coal, over 200 tonnes, left to burn. For the next eight years, she is the ship of choice for transatlantic passengers. He may have laid the foundations of modern industrial Britain, but Brunel's often forced to use wood as a material. His designs, such as for the Great Western, come several decades before Andrew Carnegie and his mass production of cheap steel. He was able to use metal, wrought-iron to be precise, and not wood for his ship, Great Britain. She is now considered to be the first modern ship because she was screw propeller-driven rather than by a paddle wheel. Her strength was demonstrated when she was run aground on only her fifth journey and left to winter in that state. On release, her hull was found to have no damage. But her 1845 journey was only between London and New York again. Brunel wanted more. His next Leviathan project is the Great Eastern. She is built to be capable of taking 4,000 passengers between London and Sydney, Australia. It would be another 50 years before the world would see another ship of the same size. But Brunel is becoming increasingly disillusioned. His visionary railway broad gauge had been all but abandoned against Stephenson's standard gauge. In 1859, as the engines of Great Eastern are tested, Brunel, like his father before him, suffers a stroke. He collapses on deck. As seen in his publicity photographs, he is a heavy smoker. Ten days later, on 15 September, Isambard Kingdom Brunel dies. His Great Eastern will become a commercial catastrophe. The ship intended to transport thousands to a new continent, instead ends its working days laying telegraph cable.
The Great Eastern was initially designed to help merchants commute between Britain and Australia.
n
id_3558
Isambard Kingdom Brunel. Isambard Kingdom Brunel's name comes from his civil engineer father, a Normandy refugee from the French Revolution. His English mother, Sophia Kingdom, gives birth to their only son on 9 April 1806. Marc Isambard Brunel isn't good with money, but he is a great engineer and a great teacher to his son. Marc sends him to boarding school and then to France. This, along with some ill advised projects, proves financially unsustainable and both his parents spend three months in a debtor's prison. On top of this, Brunel's refused entry into a renowned French engineering school because despite his French father, he's considered a foreigner. But the British government recognise his father's engineering potential and release him from his debts and jail. His son returns to England and still just a teenager, Brunel becomes chief assistant engineer on his father's project to create a tunnel under the Thames. This 1,200ft (365m) long tunnel was to be their first, and last, project together. Despite Marc inventing a tunnelling shield that protects workers as they progress, work is still extremely hazardous. Breaches and collapses often halt the project. And on one Saturday, on 12 January 1828, the tunnel floods. Six men are swept to their deaths in a tidal wave of sewage, debris and water. The 22 year old Brunel should have joined them. But his assistant manages to pull Brunel's unconscious body from the water. It takes several months to recover, but as he does, he devises his most memorable design, the Bristol Clifton Suspension Bridge. He will build over a hundred bridges but this is the one that history will remember. At the time of building, it's the longest bridge in the world. It spans a 250ft (76m) deep gorge with sheer rock either side. To transport materials across, a 1,000 ft (305m) iron bar is suspended between the two ends and a man sized basket is pulled back and forth. The first man to test it is Brunel himself. But as with the Thames Tunnel, work is halted; this time because of riots. Britain's ruling classes are trying their best to withstand the rise of the working and middle classes. As the riots end, so does investor's interest in the bridge. Brunel will also redesign Bristol harbour, but he'll never live to see the completion of his greatest achievement. In 1833, Brunel is appointed chief engineer of the Great Western Railway and he starts connecting the South West of England with London. In total, he'll build 25 railway lines. With this one, he hopes to reduce journey time down to four hours, a full 13 hours quicker than the even the mail coach can achieve. Brunel's budget is 2 million. He will spend six. His design vision is total. No detail is too small. Everything from the lampposts, stations, locomotives, carriages and even the width of the track are re-designed. Brunel wants to bring not just speed, but comfort to the travelling public. By doubling the width of the track, he can do that. But Brunel's 'broad gauge' is a rejection of the gauge advocated by the other great man of rail, George Stephenson. And existing track owners are understandably resistant. And then there are countless landowners who oppose his plans to carve through the countryside. One of his first obstacles is a flood plain 11 miles west of London. The easiest engineering solution would have added just a few seconds to the journey time. Brunel's solution is the Wharncliffe Viaduct. Its 900ft (274m) length and eight arches shave those seconds off. 
Other remarkable innovations include building a bridge over the Thames at Maidenhead that is still the widest, flattest brick arch bridge in the world. And his constructions were costly, both in terms of men, as much as money. His Box Tunnel, then the longest railway tunnel in existence, took five years and 4,000 men with dynamite to build it. In percentage terms, you were more likely to die building the Box Hill tunnel than in the trenches of the First World War. In 1835, the year before his marriage, Brunel had offered his services free to the Great Western Steamship Company believing that steam powered ships could cross the Atlantic. It was to complete his vision of a passenger being able to buy one ticket that would get them from London to New York. At the time, Brunel had never designed a ship. And the Atlantic had only ever been crossed under sail. Many thought a steam powered boat would take so much coal to power there wouldn't be room for paying passengers or commercial cargo. When a rival attempted the journey, the crew had to burn cabin furniture to complete the journey. But Brunel calculates that a ship twice the size of a 100ft (30.5m) ship won't require twice as much coal to fuel it. Work commences on the 2,300 ton behemoth, the Great Western. Brunel is badly burnt during an engine fire on her launch but, in 1838, the longest ship in the world sets sail for New York. Fifteen days later she arrives. And she has a third of her coal, over 200 tonnes, left to burn. For the next eight years, she is the ship of choice for transatlantic passengers. He may have laid the foundations of modern industrial Britain, but Brunel's often forced to use wood as a material. His designs, such as for the Great Western, come several decades before Andrew Carnegie and his mass production of cheap steel. He was able to use metal, wrought-iron to be precise, and not wood for his ship, Great Britain. She is now considered to be the first modern ship because she was screw propeller-driven rather than by a paddle wheel. Her strength was demonstrated when she was run aground on only her fifth journey and left to winter in that state. On release, her hull was found to have no damage. But her 1845 journey was only between London and New York again. Brunel wanted more. His next Leviathan project is the Great Eastern. She is built to be capable of taking 4,000 passengers between London and Sydney, Australia. It would be another 50 years before the world would see another ship of the same size. But Brunel is becoming increasingly disillusioned. His visionary railway broad gauge had been all but abandoned against Stephenson's standard gauge. In 1859, as the engines of Great Eastern are tested, Brunel, like his father before him, suffers a stroke. He collapses on deck. As seen in his publicity photographs, he is a heavy smoker. Ten days later, on 15 September, Isambard Kingdom Brunel dies. His Great Eastern will become a commercial catastrophe. The ship intended to transport thousands to a new continent, instead ends its working days laying telegraph cable.
The Bristol Clifton Suspension Bridge is the most distinguished one among the bridges designed by Brunel.
e
id_3559
Isambard Kingdom Brunel. Isambard Kingdom Brunel's name comes from his civil engineer father, a Normandy refugee from the French Revolution. His English mother, Sophia Kingdom, gives birth to their only son on 9 April 1806. Marc Isambard Brunel isn't good with money, but he is a great engineer and a great teacher to his son. Marc sends him to boarding school and then to France. This, along with some ill advised projects, proves financially unsustainable and both his parents spend three months in a debtor's prison. On top of this, Brunel's refused entry into a renowned French engineering school because despite his French father, he's considered a foreigner. But the British government recognise his father's engineering potential and release him from his debts and jail. His son returns to England and still just a teenager, Brunel becomes chief assistant engineer on his father's project to create a tunnel under the Thames. This 1,200ft (365m) long tunnel was to be their first, and last, project together. Despite Marc inventing a tunnelling shield that protects workers as they progress, work is still extremely hazardous. Breaches and collapses often halt the project. And on one Saturday, on 12 January 1828, the tunnel floods. Six men are swept to their deaths in a tidal wave of sewage, debris and water. The 22 year old Brunel should have joined them. But his assistant manages to pull Brunel's unconscious body from the water. It takes several months to recover, but as he does, he devises his most memorable design, the Bristol Clifton Suspension Bridge. He will build over a hundred bridges but this is the one that history will remember. At the time of building, it's the longest bridge in the world. It spans a 250ft (76m) deep gorge with sheer rock either side. To transport materials across, a 1,000 ft (305m) iron bar is suspended between the two ends and a man sized basket is pulled back and forth. The first man to test it is Brunel himself. But as with the Thames Tunnel, work is halted; this time because of riots. Britain's ruling classes are trying their best to withstand the rise of the working and middle classes. As the riots end, so does investor's interest in the bridge. Brunel will also redesign Bristol harbour, but he'll never live to see the completion of his greatest achievement. In 1833, Brunel is appointed chief engineer of the Great Western Railway and he starts connecting the South West of England with London. In total, he'll build 25 railway lines. With this one, he hopes to reduce journey time down to four hours, a full 13 hours quicker than the even the mail coach can achieve. Brunel's budget is 2 million. He will spend six. His design vision is total. No detail is too small. Everything from the lampposts, stations, locomotives, carriages and even the width of the track are re-designed. Brunel wants to bring not just speed, but comfort to the travelling public. By doubling the width of the track, he can do that. But Brunel's 'broad gauge' is a rejection of the gauge advocated by the other great man of rail, George Stephenson. And existing track owners are understandably resistant. And then there are countless landowners who oppose his plans to carve through the countryside. One of his first obstacles is a flood plain 11 miles west of London. The easiest engineering solution would have added just a few seconds to the journey time. Brunel's solution is the Wharncliffe Viaduct. Its 900ft (274m) length and eight arches shave those seconds off. 
Other remarkable innovations include building a bridge over the Thames at Maidenhead that is still the widest, flattest brick arch bridge in the world. And his constructions were costly, both in terms of men, as much as money. His Box Tunnel, then the longest railway tunnel in existence, took five years and 4,000 men with dynamite to build it. In percentage terms, you were more likely to die building the Box Hill tunnel than in the trenches of the First World War. In 1835, the year before his marriage, Brunel had offered his services free to the Great Western Steamship Company believing that steam powered ships could cross the Atlantic. It was to complete his vision of a passenger being able to buy one ticket that would get them from London to New York. At the time, Brunel had never designed a ship. And the Atlantic had only ever been crossed under sail. Many thought a steam powered boat would take so much coal to power there wouldn't be room for paying passengers or commercial cargo. When a rival attempted the journey, the crew had to burn cabin furniture to complete the journey. But Brunel calculates that a ship twice the size of a 100ft (30.5m) ship won't require twice as much coal to fuel it. Work commences on the 2,300 ton behemoth, the Great Western. Brunel is badly burnt during an engine fire on her launch but, in 1838, the longest ship in the world sets sail for New York. Fifteen days later she arrives. And she has a third of her coal, over 200 tonnes, left to burn. For the next eight years, she is the ship of choice for transatlantic passengers. He may have laid the foundations of modern industrial Britain, but Brunel's often forced to use wood as a material. His designs, such as for the Great Western, come several decades before Andrew Carnegie and his mass production of cheap steel. He was able to use metal, wrought-iron to be precise, and not wood for his ship, Great Britain. She is now considered to be the first modern ship because she was screw propeller-driven rather than by a paddle wheel. Her strength was demonstrated when she was run aground on only her fifth journey and left to winter in that state. On release, her hull was found to have no damage. But her 1845 journey was only between London and New York again. Brunel wanted more. His next Leviathan project is the Great Eastern. She is built to be capable of taking 4,000 passengers between London and Sydney, Australia. It would be another 50 years before the world would see another ship of the same size. But Brunel is becoming increasingly disillusioned. His visionary railway broad gauge had been all but abandoned against Stephenson's standard gauge. In 1859, as the engines of Great Eastern are tested, Brunel, like his father before him, suffers a stroke. He collapses on deck. As seen in his publicity photographs, he is a heavy smoker. Ten days later, on 15 September, Isambard Kingdom Brunel dies. His Great Eastern will become a commercial catastrophe. The ship intended to transport thousands to a new continent, instead ends its working days laying telegraph cable.
Brunel managed to prove that a steam-powered ship could carry passengers as well as the fuel it needed.
e
id_3560
Isambard Kingdom Brunel. Isambard Kingdom Brunel's name comes from his civil engineer father, a Normandy refugee from the French Revolution. His English mother, Sophia Kingdom, gives birth to their only son on 9 April 1806. Marc Isambard Brunel isn't good with money, but he is a great engineer and a great teacher to his son. Marc sends him to boarding school and then to France. This, along with some ill advised projects, proves financially unsustainable and both his parents spend three months in a debtor's prison. On top of this, Brunel's refused entry into a renowned French engineering school because despite his French father, he's considered a foreigner. But the British government recognise his father's engineering potential and release him from his debts and jail. His son returns to England and still just a teenager, Brunel becomes chief assistant engineer on his father's project to create a tunnel under the Thames. This 1,200ft (365m) long tunnel was to be their first, and last, project together. Despite Marc inventing a tunnelling shield that protects workers as they progress, work is still extremely hazardous. Breaches and collapses often halt the project. And on one Saturday, on 12 January 1828, the tunnel floods. Six men are swept to their deaths in a tidal wave of sewage, debris and water. The 22 year old Brunel should have joined them. But his assistant manages to pull Brunel's unconscious body from the water. It takes several months to recover, but as he does, he devises his most memorable design, the Bristol Clifton Suspension Bridge. He will build over a hundred bridges but this is the one that history will remember. At the time of building, it's the longest bridge in the world. It spans a 250ft (76m) deep gorge with sheer rock either side. To transport materials across, a 1,000 ft (305m) iron bar is suspended between the two ends and a man sized basket is pulled back and forth. The first man to test it is Brunel himself. But as with the Thames Tunnel, work is halted; this time because of riots. Britain's ruling classes are trying their best to withstand the rise of the working and middle classes. As the riots end, so does investor's interest in the bridge. Brunel will also redesign Bristol harbour, but he'll never live to see the completion of his greatest achievement. In 1833, Brunel is appointed chief engineer of the Great Western Railway and he starts connecting the South West of England with London. In total, he'll build 25 railway lines. With this one, he hopes to reduce journey time down to four hours, a full 13 hours quicker than the even the mail coach can achieve. Brunel's budget is 2 million. He will spend six. His design vision is total. No detail is too small. Everything from the lampposts, stations, locomotives, carriages and even the width of the track are re-designed. Brunel wants to bring not just speed, but comfort to the travelling public. By doubling the width of the track, he can do that. But Brunel's 'broad gauge' is a rejection of the gauge advocated by the other great man of rail, George Stephenson. And existing track owners are understandably resistant. And then there are countless landowners who oppose his plans to carve through the countryside. One of his first obstacles is a flood plain 11 miles west of London. The easiest engineering solution would have added just a few seconds to the journey time. Brunel's solution is the Wharncliffe Viaduct. Its 900ft (274m) length and eight arches shave those seconds off. 
Other remarkable innovations include building a bridge over the Thames at Maidenhead that is still the widest, flattest brick arch bridge in the world. And his constructions were costly, both in terms of men, as much as money. His Box Tunnel, then the longest railway tunnel in existence, took five years and 4,000 men with dynamite to build it. In percentage terms, you were more likely to die building the Box Hill tunnel than in the trenches of the First World War. In 1835, the year before his marriage, Brunel had offered his services free to the Great Western Steamship Company believing that steam powered ships could cross the Atlantic. It was to complete his vision of a passenger being able to buy one ticket that would get them from London to New York. At the time, Brunel had never designed a ship. And the Atlantic had only ever been crossed under sail. Many thought a steam powered boat would take so much coal to power there wouldn't be room for paying passengers or commercial cargo. When a rival attempted the journey, the crew had to burn cabin furniture to complete the journey. But Brunel calculates that a ship twice the size of a 100ft (30.5m) ship won't require twice as much coal to fuel it. Work commences on the 2,300 ton behemoth, the Great Western. Brunel is badly burnt during an engine fire on her launch but, in 1838, the longest ship in the world sets sail for New York. Fifteen days later she arrives. And she has a third of her coal, over 200 tonnes, left to burn. For the next eight years, she is the ship of choice for transatlantic passengers. He may have laid the foundations of modern industrial Britain, but Brunel's often forced to use wood as a material. His designs, such as for the Great Western, come several decades before Andrew Carnegie and his mass production of cheap steel. He was able to use metal, wrought-iron to be precise, and not wood for his ship, Great Britain. She is now considered to be the first modern ship because she was screw propeller-driven rather than by a paddle wheel. Her strength was demonstrated when she was run aground on only her fifth journey and left to winter in that state. On release, her hull was found to have no damage. But her 1845 journey was only between London and New York again. Brunel wanted more. His next Leviathan project is the Great Eastern. She is built to be capable of taking 4,000 passengers between London and Sydney, Australia. It would be another 50 years before the world would see another ship of the same size. But Brunel is becoming increasingly disillusioned. His visionary railway broad gauge had been all but abandoned against Stephenson's standard gauge. In 1859, as the engines of Great Eastern are tested, Brunel, like his father before him, suffers a stroke. He collapses on deck. As seen in his publicity photographs, he is a heavy smoker. Ten days later, on 15 September, Isambard Kingdom Brunel dies. His Great Eastern will become a commercial catastrophe. The ship intended to transport thousands to a new continent, instead ends its working days laying telegraph cable.
The tunnelling shield was invented during the Thames Tunnel project.
n
id_3561
Isambard Kingdom Brunel. Isambard Kingdom Brunel's name comes from his civil engineer father, a Normandy refugee from the French Revolution. His English mother, Sophia Kingdom, gives birth to their only son on 9 April 1806. Marc Isambard Brunel isn't good with money, but he is a great engineer and a great teacher to his son. Marc sends him to boarding school and then to France. This, along with some ill advised projects, proves financially unsustainable and both his parents spend three months in a debtor's prison. On top of this, Brunel's refused entry into a renowned French engineering school because despite his French father, he's considered a foreigner. But the British government recognise his father's engineering potential and release him from his debts and jail. His son returns to England and still just a teenager, Brunel becomes chief assistant engineer on his father's project to create a tunnel under the Thames. This 1,200ft (365m) long tunnel was to be their first, and last, project together. Despite Marc inventing a tunnelling shield that protects workers as they progress, work is still extremely hazardous. Breaches and collapses often halt the project. And on one Saturday, on 12 January 1828, the tunnel floods. Six men are swept to their deaths in a tidal wave of sewage, debris and water. The 22 year old Brunel should have joined them. But his assistant manages to pull Brunel's unconscious body from the water. It takes several months to recover, but as he does, he devises his most memorable design, the Bristol Clifton Suspension Bridge. He will build over a hundred bridges but this is the one that history will remember. At the time of building, it's the longest bridge in the world. It spans a 250ft (76m) deep gorge with sheer rock either side. To transport materials across, a 1,000 ft (305m) iron bar is suspended between the two ends and a man sized basket is pulled back and forth. The first man to test it is Brunel himself. But as with the Thames Tunnel, work is halted; this time because of riots. Britain's ruling classes are trying their best to withstand the rise of the working and middle classes. As the riots end, so does investor's interest in the bridge. Brunel will also redesign Bristol harbour, but he'll never live to see the completion of his greatest achievement. In 1833, Brunel is appointed chief engineer of the Great Western Railway and he starts connecting the South West of England with London. In total, he'll build 25 railway lines. With this one, he hopes to reduce journey time down to four hours, a full 13 hours quicker than the even the mail coach can achieve. Brunel's budget is 2 million. He will spend six. His design vision is total. No detail is too small. Everything from the lampposts, stations, locomotives, carriages and even the width of the track are re-designed. Brunel wants to bring not just speed, but comfort to the travelling public. By doubling the width of the track, he can do that. But Brunel's 'broad gauge' is a rejection of the gauge advocated by the other great man of rail, George Stephenson. And existing track owners are understandably resistant. And then there are countless landowners who oppose his plans to carve through the countryside. One of his first obstacles is a flood plain 11 miles west of London. The easiest engineering solution would have added just a few seconds to the journey time. Brunel's solution is the Wharncliffe Viaduct. Its 900ft (274m) length and eight arches shave those seconds off. 
Other remarkable innovations include building a bridge over the Thames at Maidenhead that is still the widest, flattest brick arch bridge in the world. And his constructions were costly, both in terms of men, as much as money. His Box Tunnel, then the longest railway tunnel in existence, took five years and 4,000 men with dynamite to build it. In percentage terms, you were more likely to die building the Box Hill tunnel than in the trenches of the First World War. In 1835, the year before his marriage, Brunel had offered his services free to the Great Western Steamship Company believing that steam powered ships could cross the Atlantic. It was to complete his vision of a passenger being able to buy one ticket that would get them from London to New York. At the time, Brunel had never designed a ship. And the Atlantic had only ever been crossed under sail. Many thought a steam powered boat would take so much coal to power there wouldn't be room for paying passengers or commercial cargo. When a rival attempted the journey, the crew had to burn cabin furniture to complete the journey. But Brunel calculates that a ship twice the size of a 100ft (30.5m) ship won't require twice as much coal to fuel it. Work commences on the 2,300 ton behemoth, the Great Western. Brunel is badly burnt during an engine fire on her launch but, in 1838, the longest ship in the world sets sail for New York. Fifteen days later she arrives. And she has a third of her coal, over 200 tonnes, left to burn. For the next eight years, she is the ship of choice for transatlantic passengers. He may have laid the foundations of modern industrial Britain, but Brunel's often forced to use wood as a material. His designs, such as for the Great Western, come several decades before Andrew Carnegie and his mass production of cheap steel. He was able to use metal, wrought-iron to be precise, and not wood for his ship, Great Britain. She is now considered to be the first modern ship because she was screw propeller-driven rather than by a paddle wheel. Her strength was demonstrated when she was run aground on only her fifth journey and left to winter in that state. On release, her hull was found to have no damage. But her 1845 journey was only between London and New York again. Brunel wanted more. His next Leviathan project is the Great Eastern. She is built to be capable of taking 4,000 passengers between London and Sydney, Australia. It would be another 50 years before the world would see another ship of the same size. But Brunel is becoming increasingly disillusioned. His visionary railway broad gauge had been all but abandoned against Stephenson's standard gauge. In 1859, as the engines of Great Eastern are tested, Brunel, like his father before him, suffers a stroke. He collapses on deck. As seen in his publicity photographs, he is a heavy smoker. Ten days later, on 15 September, Isambard Kingdom Brunel dies. His Great Eastern will become a commercial catastrophe. The ship intended to transport thousands to a new continent, instead ends its working days laying telegraph cable.
Brunel and Stephenson reached a consensus on the width of gauge.
c
id_3562
Isambard Kingdom Brunel. Isambard Kingdom Brunel's name comes from his civil engineer father, a Normandy refugee from the French Revolution. His English mother, Sophia Kingdom, gives birth to their only son on 9 April 1806. Marc Isambard Brunel isn't good with money, but he is a great engineer and a great teacher to his son. Marc sends him to boarding school and then to France. This, along with some ill-advised projects, proves financially unsustainable and both his parents spend three months in a debtor's prison. On top of this, Brunel's refused entry into a renowned French engineering school because, despite his French father, he's considered a foreigner. But the British government recognise his father's engineering potential and release him from his debts and jail. His son returns to England and, still just a teenager, Brunel becomes chief assistant engineer on his father's project to create a tunnel under the Thames. This 1,200ft (365m) long tunnel was to be their first, and last, project together. Despite Marc inventing a tunnelling shield that protects workers as they progress, work is still extremely hazardous. Breaches and collapses often halt the project. And on one Saturday, 12 January 1828, the tunnel floods. Six men are swept to their deaths in a tidal wave of sewage, debris and water. The 22-year-old Brunel should have joined them. But his assistant manages to pull Brunel's unconscious body from the water. It takes several months to recover, but as he does, he devises his most memorable design, the Bristol Clifton Suspension Bridge. He will build over a hundred bridges but this is the one that history will remember. At the time of building, it's the longest bridge in the world. It spans a 250ft (76m) deep gorge with sheer rock either side. To transport materials across, a 1,000ft (305m) iron bar is suspended between the two ends and a man-sized basket is pulled back and forth. The first man to test it is Brunel himself. But as with the Thames Tunnel, work is halted; this time because of riots. Britain's ruling classes are trying their best to withstand the rise of the working and middle classes. As the riots end, so does investors' interest in the bridge. Brunel will also redesign Bristol harbour, but he'll never live to see the completion of his greatest achievement. In 1833, Brunel is appointed chief engineer of the Great Western Railway and he starts connecting the South West of England with London. In total, he'll build 25 railway lines. With this one, he hopes to reduce journey time down to four hours, a full 13 hours quicker than even the mail coach can achieve. Brunel's budget is £2 million. He will spend six. His design vision is total. No detail is too small. Everything from the lampposts, stations, locomotives, carriages and even the width of the track is re-designed. Brunel wants to bring not just speed, but comfort to the travelling public. By doubling the width of the track, he can do that. But Brunel's 'broad gauge' is a rejection of the gauge advocated by the other great man of rail, George Stephenson. And existing track owners are understandably resistant. And then there are countless landowners who oppose his plans to carve through the countryside. One of his first obstacles is a flood plain 11 miles west of London. The easiest engineering solution would have added just a few seconds to the journey time. Brunel's solution is the Wharncliffe Viaduct. Its 900ft (274m) length and eight arches shave those seconds off. 
Other remarkable innovations include building a bridge over the Thames at Maidenhead that is still the widest, flattest brick arch bridge in the world. And his constructions were costly, in terms of men as much as money. His Box Tunnel, then the longest railway tunnel in existence, took five years and 4,000 men with dynamite to build. In percentage terms, you were more likely to die building the Box Tunnel than in the trenches of the First World War. In 1835, the year before his marriage, Brunel had offered his services free to the Great Western Steamship Company, believing that steam-powered ships could cross the Atlantic. It was to complete his vision of a passenger being able to buy one ticket that would get them from London to New York. At the time, Brunel had never designed a ship. And the Atlantic had only ever been crossed under sail. Many thought a steam-powered boat would take so much coal to power that there wouldn't be room for paying passengers or commercial cargo. When a rival attempted the journey, the crew had to burn cabin furniture to complete the crossing. But Brunel calculates that a ship twice the size of a 100ft (30.5m) ship won't require twice as much coal to fuel it. Work commences on the 2,300-ton behemoth, the Great Western. Brunel is badly burnt during an engine fire on her launch but, in 1838, the longest ship in the world sets sail for New York. Fifteen days later she arrives. And she has a third of her coal, over 200 tonnes, left to burn. For the next eight years, she is the ship of choice for transatlantic passengers. He may have laid the foundations of modern industrial Britain, but Brunel's often forced to use wood as a material. His designs, such as for the Great Western, come several decades before Andrew Carnegie and his mass production of cheap steel. He was able to use metal, wrought-iron to be precise, and not wood for his ship, Great Britain. She is now considered to be the first modern ship because she was driven by a screw propeller rather than by a paddle wheel. Her strength was demonstrated when she was run aground on only her fifth journey and left to winter in that state. On release, her hull was found to have no damage. But her 1845 voyage was again only between London and New York. Brunel wanted more. His next Leviathan project is the Great Eastern. She is built to be capable of taking 4,000 passengers between London and Sydney, Australia. It would be another 50 years before the world would see another ship of the same size. But Brunel is becoming increasingly disillusioned. His visionary railway broad gauge had been all but abandoned in favour of Stephenson's standard gauge. In 1859, as the engines of the Great Eastern are tested, Brunel, like his father before him, suffers a stroke. He collapses on deck. As seen in his publicity photographs, he is a heavy smoker. Ten days later, on 15 September, Isambard Kingdom Brunel dies. His Great Eastern will become a commercial catastrophe. The ship intended to transport thousands to a new continent instead ends its working days laying telegraph cable.
The Thames Tunnel was halted because of political conflicts.
c
id_3563
Islamic Art and the Book The arts of the Islamic book, such as calligraphy and decorative drawing, developed during A. D. 900 to 1500, and luxury books are some of the most characteristic examples of Islamic art produced in this period. This came about from two major developments: paper became common, replacing parchment as the major medium for writing, and rounded scripts were regularized and perfected so that they replaced the angular scripts of the previous period, which because of their angularity were uneven in height. Books became major vehicles for artistic expression, and the artists who produced them, notably calligraphers and painters, enjoyed high status, and their workshops were often sponsored by princes and their courts. Before A. D. 900, manuscripts of the Koran (the book containing the teachings of the Islamic religion) seem to have been the most common type of book produced and decorated, but after that date a wide range of books were produced for a broad spectrum of patrons. These continued to include, of course, manuscripts of the Koran, which every Muslim wanted to read, but scientific works, histories, romances, and epic and lyric poetry were also copied in fine handwriting and decorated with beautiful illustrations. Most were made for sale on the open market, and cities boasted special souks (markets) where books were bought and sold. The mosque of Marrakech in Morocco is known as the Kutubiyya, or Booksellers' Mosque, after the adjacent market. Some of the most luxurious books were specific commissions made at the order of a particular prince and signed by the calligrapher and decorator. Papermaking had been introduced to the Islamic lands from China in the eighth century. It has been said that Chinese papermakers were among the prisoners captured in a battle fought near Samarqand between the Chinese and the Muslims in 751, and the technique of papermaking in which cellulose pulp extracted from any of several plants is first suspended in water, caught on a fine screen, and then dried into flexible sheets slowly spread westward. Within fifty years, the government in Baghdad was using paper for documents. Writing in ink on paper, unlike parchment, could not easily be erased, and therefore paper had the advantage that it was difficult to alter what was written on it. Papermaking spread quickly to Egypt and eventually to Sicily and Spain but it was several centuries before paper supplanted parchment for copies of the Koran, probably because of the conservative nature of religious art and its practitioners. In western Islamic lands, parchment continued to be used for manuscripts of the Koran throughout this period. The introduction of paper spurred a conceptual revolution whose consequences have barely been explored. Although paper was never as cheap as it has become today, it was far less expensive than parchment, and therefore more people could afford to buy books, Paper is thinner than parchment, so more pages could be enclosed within a single volume. At first, paper was made in relatively small sheets that were pasted together, but by the beginning of the fourteenth century, very large sheets as much as a meter across were available. These large sheets meant that calligraphers and artists had more space on which to work. Paintings became more complicated, giving the artist greater opportunities to depict space or emotion. 
The increased availability of paper, particularly after 1250, encouraged people to develop systems of representation, such as architectural plans and drawings. This in turn allowed the easy transfer of artistic ideas and motifs over great distances from one medium to another, and in a different scale in ways that had been difficult, if not impossible, in the previous period. Rounded styles of Arabic handwriting had long been used for correspondence and documents alongside the formal angular scripts used for inscriptions and manuscripts of the Koran. Around the year 900, Ibn Muqla, who was a secretary and vizier at the Abbasid court in Baghdad, developed a system of proportioned writing. He standardized the length of alif, the first letter of the Arabic alphabet, and then determined what the size and shape of all other letters should be, based on the alif. Eventually, six round forms of handwriting, composed of three pairs of big and little scripts known collectively as the Six Pens, became the standard repertory of every calligrapher.
Most books were intended for sale on the open market.
e
id_3564
Islamic Art and the Book The arts of the Islamic book, such as calligraphy and decorative drawing, developed during A. D. 900 to 1500, and luxury books are some of the most characteristic examples of Islamic art produced in this period. This came about from two major developments: paper became common, replacing parchment as the major medium for writing, and rounded scripts were regularized and perfected so that they replaced the angular scripts of the previous period, which because of their angularity were uneven in height. Books became major vehicles for artistic expression, and the artists who produced them, notably calligraphers and painters, enjoyed high status, and their workshops were often sponsored by princes and their courts. Before A. D. 900, manuscripts of the Koran (the book containing the teachings of the Islamic religion) seem to have been the most common type of book produced and decorated, but after that date a wide range of books were produced for a broad spectrum of patrons. These continued to include, of course, manuscripts of the Koran, which every Muslim wanted to read, but scientific works, histories, romances, and epic and lyric poetry were also copied in fine handwriting and decorated with beautiful illustrations. Most were made for sale on the open market, and cities boasted special souks (markets) where books were bought and sold. The mosque of Marrakech in Morocco is known as the Kutubiyya, or Booksellers' Mosque, after the adjacent market. Some of the most luxurious books were specific commissions made at the order of a particular prince and signed by the calligrapher and decorator. Papermaking had been introduced to the Islamic lands from China in the eighth century. It has been said that Chinese papermakers were among the prisoners captured in a battle fought near Samarqand between the Chinese and the Muslims in 751, and the technique of papermaking in which cellulose pulp extracted from any of several plants is first suspended in water, caught on a fine screen, and then dried into flexible sheets slowly spread westward. Within fifty years, the government in Baghdad was using paper for documents. Writing in ink on paper, unlike parchment, could not easily be erased, and therefore paper had the advantage that it was difficult to alter what was written on it. Papermaking spread quickly to Egypt and eventually to Sicily and Spain but it was several centuries before paper supplanted parchment for copies of the Koran, probably because of the conservative nature of religious art and its practitioners. In western Islamic lands, parchment continued to be used for manuscripts of the Koran throughout this period. The introduction of paper spurred a conceptual revolution whose consequences have barely been explored. Although paper was never as cheap as it has become today, it was far less expensive than parchment, and therefore more people could afford to buy books, Paper is thinner than parchment, so more pages could be enclosed within a single volume. At first, paper was made in relatively small sheets that were pasted together, but by the beginning of the fourteenth century, very large sheets as much as a meter across were available. These large sheets meant that calligraphers and artists had more space on which to work. Paintings became more complicated, giving the artist greater opportunities to depict space or emotion. 
The increased availability of paper, particularly after 1250, encouraged people to develop systems of representation, such as architectural plans and drawings. This in turn allowed the easy transfer of artistic ideas and motifs over great distances from one medium to another, and in a different scale in ways that had been difficult, if not impossible, in the previous period. Rounded styles of Arabic handwriting had long been used for correspondence and documents alongside the formal angular scripts used for inscriptions and manuscripts of the Koran. Around the year 900, Ibn Muqla, who was a secretary and vizier at the Abbasid court in Baghdad, developed a system of proportioned writing. He standardized the length of alif, the first letter of the Arabic alphabet, and then determined what the size and shape of all other letters should be, based on the alif. Eventually, six round forms of handwriting, composed of three pairs of big and little scripts known collectively as the Six Pens, became the standard repertory of every calligrapher.
Books were an important form of artistic expression.
e
id_3565
Islamic Art and the Book The arts of the Islamic book, such as calligraphy and decorative drawing, developed during A. D. 900 to 1500, and luxury books are some of the most characteristic examples of Islamic art produced in this period. This came about from two major developments: paper became common, replacing parchment as the major medium for writing, and rounded scripts were regularized and perfected so that they replaced the angular scripts of the previous period, which because of their angularity were uneven in height. Books became major vehicles for artistic expression, and the artists who produced them, notably calligraphers and painters, enjoyed high status, and their workshops were often sponsored by princes and their courts. Before A. D. 900, manuscripts of the Koran (the book containing the teachings of the Islamic religion) seem to have been the most common type of book produced and decorated, but after that date a wide range of books were produced for a broad spectrum of patrons. These continued to include, of course, manuscripts of the Koran, which every Muslim wanted to read, but scientific works, histories, romances, and epic and lyric poetry were also copied in fine handwriting and decorated with beautiful illustrations. Most were made for sale on the open market, and cities boasted special souks (markets) where books were bought and sold. The mosque of Marrakech in Morocco is known as the Kutubiyya, or Booksellers' Mosque, after the adjacent market. Some of the most luxurious books were specific commissions made at the order of a particular prince and signed by the calligrapher and decorator. Papermaking had been introduced to the Islamic lands from China in the eighth century. It has been said that Chinese papermakers were among the prisoners captured in a battle fought near Samarqand between the Chinese and the Muslims in 751, and the technique of papermaking in which cellulose pulp extracted from any of several plants is first suspended in water, caught on a fine screen, and then dried into flexible sheets slowly spread westward. Within fifty years, the government in Baghdad was using paper for documents. Writing in ink on paper, unlike parchment, could not easily be erased, and therefore paper had the advantage that it was difficult to alter what was written on it. Papermaking spread quickly to Egypt and eventually to Sicily and Spain but it was several centuries before paper supplanted parchment for copies of the Koran, probably because of the conservative nature of religious art and its practitioners. In western Islamic lands, parchment continued to be used for manuscripts of the Koran throughout this period. The introduction of paper spurred a conceptual revolution whose consequences have barely been explored. Although paper was never as cheap as it has become today, it was far less expensive than parchment, and therefore more people could afford to buy books, Paper is thinner than parchment, so more pages could be enclosed within a single volume. At first, paper was made in relatively small sheets that were pasted together, but by the beginning of the fourteenth century, very large sheets as much as a meter across were available. These large sheets meant that calligraphers and artists had more space on which to work. Paintings became more complicated, giving the artist greater opportunities to depict space or emotion. 
The increased availability of paper, particularly after 1250, encouraged people to develop systems of representation, such as architectural plans and drawings. This in turn allowed the easy transfer of artistic ideas and motifs over great distances from one medium to another, and in a different scale in ways that had been difficult, if not impossible, in the previous period. Rounded styles of Arabic handwriting had long been used for correspondence and documents alongside the formal angular scripts used for inscriptions and manuscripts of the Koran. Around the year 900, Ibn Muqla, who was a secretary and vizier at the Abbasid court in Baghdad, developed a system of proportioned writing. He standardized the length of alif, the first letter of the Arabic alphabet, and then determined what the size and shape of all other letters should be, based on the alif. Eventually, six round forms of handwriting, composed of three pairs of big and little scripts known collectively as the Six Pens, became the standard repertory of every calligrapher.
A wide variety of books with different styles and topics became available.
e
id_3566
Islamic Art and the Book The arts of the Islamic book, such as calligraphy and decorative drawing, developed during A. D. 900 to 1500, and luxury books are some of the most characteristic examples of Islamic art produced in this period. This came about from two major developments: paper became common, replacing parchment as the major medium for writing, and rounded scripts were regularized and perfected so that they replaced the angular scripts of the previous period, which because of their angularity were uneven in height. Books became major vehicles for artistic expression, and the artists who produced them, notably calligraphers and painters, enjoyed high status, and their workshops were often sponsored by princes and their courts. Before A. D. 900, manuscripts of the Koran (the book containing the teachings of the Islamic religion) seem to have been the most common type of book produced and decorated, but after that date a wide range of books were produced for a broad spectrum of patrons. These continued to include, of course, manuscripts of the Koran, which every Muslim wanted to read, but scientific works, histories, romances, and epic and lyric poetry were also copied in fine handwriting and decorated with beautiful illustrations. Most were made for sale on the open market, and cities boasted special souks (markets) where books were bought and sold. The mosque of Marrakech in Morocco is known as the Kutubiyya, or Booksellers' Mosque, after the adjacent market. Some of the most luxurious books were specific commissions made at the order of a particular prince and signed by the calligrapher and decorator. Papermaking had been introduced to the Islamic lands from China in the eighth century. It has been said that Chinese papermakers were among the prisoners captured in a battle fought near Samarqand between the Chinese and the Muslims in 751, and the technique of papermaking in which cellulose pulp extracted from any of several plants is first suspended in water, caught on a fine screen, and then dried into flexible sheets slowly spread westward. Within fifty years, the government in Baghdad was using paper for documents. Writing in ink on paper, unlike parchment, could not easily be erased, and therefore paper had the advantage that it was difficult to alter what was written on it. Papermaking spread quickly to Egypt and eventually to Sicily and Spain but it was several centuries before paper supplanted parchment for copies of the Koran, probably because of the conservative nature of religious art and its practitioners. In western Islamic lands, parchment continued to be used for manuscripts of the Koran throughout this period. The introduction of paper spurred a conceptual revolution whose consequences have barely been explored. Although paper was never as cheap as it has become today, it was far less expensive than parchment, and therefore more people could afford to buy books, Paper is thinner than parchment, so more pages could be enclosed within a single volume. At first, paper was made in relatively small sheets that were pasted together, but by the beginning of the fourteenth century, very large sheets as much as a meter across were available. These large sheets meant that calligraphers and artists had more space on which to work. Paintings became more complicated, giving the artist greater opportunities to depict space or emotion. 
The increased availability of paper, particularly after 1250, encouraged people to develop systems of representation, such as architectural plans and drawings. This in turn allowed the easy transfer of artistic ideas and motifs over great distances from one medium to another, and in a different scale in ways that had been difficult, if not impossible, in the previous period. Rounded styles of Arabic handwriting had long been used for correspondence and documents alongside the formal angular scripts used for inscriptions and manuscripts of the Koran. Around the year 900, Ibn Muqla, who was a secretary and vizier at the Abbasid court in Baghdad, developed a system of proportioned writing. He standardized the length of alif, the first letter of the Arabic alphabet, and then determined what the size and shape of all other letters should be, based on the alif. Eventually, six round forms of handwriting, composed of three pairs of big and little scripts known collectively as the Six Pens, became the standard repertory of every calligrapher.
They were sold primarily near mosques.
c
id_3567
Islands That Float Islands are not known for their mobility but, occasionally it occurs. Natural floating islands have been recorded in many parts of the world (Burns et al 1985). Longevity studies in lakes have been carried out by Hesser, and in rivers and the open sea by Boughey (Smithsonian Institute 1970). They can form in two common ways: landslides of (usually vegetated) peaty soils into lakes or seawater or as a flotation of peat soils (usually bound by roots of woody vegetation) after storm surges, river floods or lake level risings. The capacity of the living part of a floating island to maintain its equilibrium in the face of destructive forces, such as fire, wave attack or hogging and sagging while riding sea or swell waves is a major obstacle. In general, ocean-going floating islands are most likely to be short-lived; wave wash-over gradually eliminates enough of the islands store of fresh water to deplete soil air and kill vegetation around the edges which, in turn, causes erosion and diminishes buoyancy and horizontal mobility. The forces acting on a floating island determine the speed and direction of movement and are very similar to those acting on floating mobile ice chunks during the partially open-water season (Peterson 1965). In contrast to such ice rafts, many floating islands carry vegetation, perhaps including trees which act as sails. Burns et al examined the forces acting and concluded that comparatively low wind velocities are required to mobilise free-floating islands with vegetation standing two meters or more tall. The sighting of floating islands at sea is a rare event; such a thing is unscheduled, short-lived and usually undocumented. On July 4th, 1969, an island some 15 meters in diameter with 10-15 trees 10-12 meters tall was included in the daily notice to mariners as posing a shipping navigation hazard between Cuba and Haiti. McWhirter described the island as looking ... as though it were held together by a mangrove-type matting; there was some earth on it but it looked kind of bushy around the bottom, like there was dead foliage, grass-like material or something on the island itself. The trees were coming up out of that. It looked like the trees came right out of the surface brown layer. No roots were visible. By the 14th of July the island had apparently broken up and the parts had partially submerged so that only the upper tree trunks were above the water. By July 19th, no trace of the island was found after an intensive six hour search. Another example albeit freshwater, can be found in Victoria, Australia the floating islands of Pirron Yallock. Accounts of how the floating islands were formed have been given by local residents. These accounts have not been disputed in the scientific literature. Prior to 1938, the lake was an intermittent swamp which usually dried out in summer. A drainage channel had been excavated at the lowest point of the swamp at the northern part of its perimeter. This is likely to have encouraged the development or enlargement of a peat mat on the floor of the depression. Potatoes were grown in the centre of the depression where the peat rose to a slight mound. The peat was ignited by a fire in 1938 which burned from the dry edges towards a central damp section. A track was laid through the swamp last century and pavement work was carried out in 1929/30. This causeway restricted flow between the depression and its former southern arm. 
These roadworks, plus collapse and partial infilling of the northern drainage channel, created drainage conditions conducive to a transition from swamp to permanent lake. The transformation from swamp to lake was dramatic, occurring over the winter of 1952 when rainfall of around 250mm was well above average. Peat is very buoyant and the central raised section which had been isolated by the fire, broke away from the rocky, basalt floor as the water level rose in winter. The main island then broke up into several smaller islands which drifted slowly for up to 200 meters within the confines of the lake and ranged in size from 2 to 30 meters in diameter. The years immediately following experienced average or above average rainfall and the water level was maintained. Re-alignment of the highway in 1963 completely blocked the former southern outlet of the depression, further enhancing its ability to retain water. The road surface also provided an additional source of runoff to the depression. Anecdotal evidence indicates that the islands floated uninterrupted for 30 years following their formation. They generally moved between the NW and NE sides of the lake in response to the prevailing winds. In 1980, the Rural Water Commission issued a nearby motel a domestic licence to remove water from the lake and occasionally water is taken for the purpose of firefighting. The most significant amount taken for firefighting was during severe fires in February 1983. Since then, the Pirron Yallock islands have ceased to float, and this is thought to be related to a drop in the water level of approximately 600 mm over the past 10-15 years. The islands have either run aground on the bed or the lagoon or vegetation has attached them to the bed. Floating islands have attracted attention because they are uncommon and their behaviour has provided not only explanations for events in myth and legend but also great scope for discussion and speculation amongst scientific and other observers.
Natural floating islands occur mostly in lakes.
n
id_3568
Islands That Float Islands are not known for their mobility but, occasionally it occurs. Natural floating islands have been recorded in many parts of the world (Burns et al 1985). Longevity studies in lakes have been carried out by Hesser, and in rivers and the open sea by Boughey (Smithsonian Institute 1970). They can form in two common ways: landslides of (usually vegetated) peaty soils into lakes or seawater or as a flotation of peat soils (usually bound by roots of woody vegetation) after storm surges, river floods or lake level risings. The capacity of the living part of a floating island to maintain its equilibrium in the face of destructive forces, such as fire, wave attack or hogging and sagging while riding sea or swell waves is a major obstacle. In general, ocean-going floating islands are most likely to be short-lived; wave wash-over gradually eliminates enough of the islands store of fresh water to deplete soil air and kill vegetation around the edges which, in turn, causes erosion and diminishes buoyancy and horizontal mobility. The forces acting on a floating island determine the speed and direction of movement and are very similar to those acting on floating mobile ice chunks during the partially open-water season (Peterson 1965). In contrast to such ice rafts, many floating islands carry vegetation, perhaps including trees which act as sails. Burns et al examined the forces acting and concluded that comparatively low wind velocities are required to mobilise free-floating islands with vegetation standing two meters or more tall. The sighting of floating islands at sea is a rare event; such a thing is unscheduled, short-lived and usually undocumented. On July 4th, 1969, an island some 15 meters in diameter with 10-15 trees 10-12 meters tall was included in the daily notice to mariners as posing a shipping navigation hazard between Cuba and Haiti. McWhirter described the island as looking ... as though it were held together by a mangrove-type matting; there was some earth on it but it looked kind of bushy around the bottom, like there was dead foliage, grass-like material or something on the island itself. The trees were coming up out of that. It looked like the trees came right out of the surface brown layer. No roots were visible. By the 14th of July the island had apparently broken up and the parts had partially submerged so that only the upper tree trunks were above the water. By July 19th, no trace of the island was found after an intensive six hour search. Another example albeit freshwater, can be found in Victoria, Australia the floating islands of Pirron Yallock. Accounts of how the floating islands were formed have been given by local residents. These accounts have not been disputed in the scientific literature. Prior to 1938, the lake was an intermittent swamp which usually dried out in summer. A drainage channel had been excavated at the lowest point of the swamp at the northern part of its perimeter. This is likely to have encouraged the development or enlargement of a peat mat on the floor of the depression. Potatoes were grown in the centre of the depression where the peat rose to a slight mound. The peat was ignited by a fire in 1938 which burned from the dry edges towards a central damp section. A track was laid through the swamp last century and pavement work was carried out in 1929/30. This causeway restricted flow between the depression and its former southern arm. 
These roadworks, plus collapse and partial infilling of the northern drainage channel, created drainage conditions conducive to a transition from swamp to permanent lake. The transformation from swamp to lake was dramatic, occurring over the winter of 1952 when rainfall of around 250mm was well above average. Peat is very buoyant and the central raised section which had been isolated by the fire, broke away from the rocky, basalt floor as the water level rose in winter. The main island then broke up into several smaller islands which drifted slowly for up to 200 meters within the confines of the lake and ranged in size from 2 to 30 meters in diameter. The years immediately following experienced average or above average rainfall and the water level was maintained. Re-alignment of the highway in 1963 completely blocked the former southern outlet of the depression, further enhancing its ability to retain water. The road surface also provided an additional source of runoff to the depression. Anecdotal evidence indicates that the islands floated uninterrupted for 30 years following their formation. They generally moved between the NW and NE sides of the lake in response to the prevailing winds. In 1980, the Rural Water Commission issued a nearby motel a domestic licence to remove water from the lake and occasionally water is taken for the purpose of firefighting. The most significant amount taken for firefighting was during severe fires in February 1983. Since then, the Pirron Yallock islands have ceased to float, and this is thought to be related to a drop in the water level of approximately 600 mm over the past 10-15 years. The islands have either run aground on the bed or the lagoon or vegetation has attached them to the bed. Floating islands have attracted attention because they are uncommon and their behaviour has provided not only explanations for events in myth and legend but also great scope for discussion and speculation amongst scientific and other observers.
Floating islands occur after a heavy storm or landslide.
e
id_3569
Islands That Float Islands are not known for their mobility but, occasionally it occurs. Natural floating islands have been recorded in many parts of the world (Burns et al 1985). Longevity studies in lakes have been carried out by Hesser, and in rivers and the open sea by Boughey (Smithsonian Institute 1970). They can form in two common ways: landslides of (usually vegetated) peaty soils into lakes or seawater or as a flotation of peat soils (usually bound by roots of woody vegetation) after storm surges, river floods or lake level risings. The capacity of the living part of a floating island to maintain its equilibrium in the face of destructive forces, such as fire, wave attack or hogging and sagging while riding sea or swell waves is a major obstacle. In general, ocean-going floating islands are most likely to be short-lived; wave wash-over gradually eliminates enough of the islands store of fresh water to deplete soil air and kill vegetation around the edges which, in turn, causes erosion and diminishes buoyancy and horizontal mobility. The forces acting on a floating island determine the speed and direction of movement and are very similar to those acting on floating mobile ice chunks during the partially open-water season (Peterson 1965). In contrast to such ice rafts, many floating islands carry vegetation, perhaps including trees which act as sails. Burns et al examined the forces acting and concluded that comparatively low wind velocities are required to mobilise free-floating islands with vegetation standing two meters or more tall. The sighting of floating islands at sea is a rare event; such a thing is unscheduled, short-lived and usually undocumented. On July 4th, 1969, an island some 15 meters in diameter with 10-15 trees 10-12 meters tall was included in the daily notice to mariners as posing a shipping navigation hazard between Cuba and Haiti. McWhirter described the island as looking ... as though it were held together by a mangrove-type matting; there was some earth on it but it looked kind of bushy around the bottom, like there was dead foliage, grass-like material or something on the island itself. The trees were coming up out of that. It looked like the trees came right out of the surface brown layer. No roots were visible. By the 14th of July the island had apparently broken up and the parts had partially submerged so that only the upper tree trunks were above the water. By July 19th, no trace of the island was found after an intensive six hour search. Another example albeit freshwater, can be found in Victoria, Australia the floating islands of Pirron Yallock. Accounts of how the floating islands were formed have been given by local residents. These accounts have not been disputed in the scientific literature. Prior to 1938, the lake was an intermittent swamp which usually dried out in summer. A drainage channel had been excavated at the lowest point of the swamp at the northern part of its perimeter. This is likely to have encouraged the development or enlargement of a peat mat on the floor of the depression. Potatoes were grown in the centre of the depression where the peat rose to a slight mound. The peat was ignited by a fire in 1938 which burned from the dry edges towards a central damp section. A track was laid through the swamp last century and pavement work was carried out in 1929/30. This causeway restricted flow between the depression and its former southern arm. 
These roadworks, plus collapse and partial infilling of the northern drainage channel, created drainage conditions conducive to a transition from swamp to permanent lake. The transformation from swamp to lake was dramatic, occurring over the winter of 1952 when rainfall of around 250mm was well above average. Peat is very buoyant and the central raised section which had been isolated by the fire, broke away from the rocky, basalt floor as the water level rose in winter. The main island then broke up into several smaller islands which drifted slowly for up to 200 meters within the confines of the lake and ranged in size from 2 to 30 meters in diameter. The years immediately following experienced average or above average rainfall and the water level was maintained. Re-alignment of the highway in 1963 completely blocked the former southern outlet of the depression, further enhancing its ability to retain water. The road surface also provided an additional source of runoff to the depression. Anecdotal evidence indicates that the islands floated uninterrupted for 30 years following their formation. They generally moved between the NW and NE sides of the lake in response to the prevailing winds. In 1980, the Rural Water Commission issued a nearby motel a domestic licence to remove water from the lake and occasionally water is taken for the purpose of firefighting. The most significant amount taken for firefighting was during severe fires in February 1983. Since then, the Pirron Yallock islands have ceased to float, and this is thought to be related to a drop in the water level of approximately 600 mm over the past 10-15 years. The islands have either run aground on the bed or the lagoon or vegetation has attached them to the bed. Floating islands have attracted attention because they are uncommon and their behaviour has provided not only explanations for events in myth and legend but also great scope for discussion and speculation amongst scientific and other observers.
The floating island sighted at sea near Cuba and Haiti was one of many sea-going islands in that area.
c
id_3570
Islands That Float Islands are not known for their mobility but, occasionally it occurs. Natural floating islands have been recorded in many parts of the world (Burns et al 1985). Longevity studies in lakes have been carried out by Hesser, and in rivers and the open sea by Boughey (Smithsonian Institute 1970). They can form in two common ways: landslides of (usually vegetated) peaty soils into lakes or seawater or as a flotation of peat soils (usually bound by roots of woody vegetation) after storm surges, river floods or lake level risings. The capacity of the living part of a floating island to maintain its equilibrium in the face of destructive forces, such as fire, wave attack or hogging and sagging while riding sea or swell waves is a major obstacle. In general, ocean-going floating islands are most likely to be short-lived; wave wash-over gradually eliminates enough of the islands store of fresh water to deplete soil air and kill vegetation around the edges which, in turn, causes erosion and diminishes buoyancy and horizontal mobility. The forces acting on a floating island determine the speed and direction of movement and are very similar to those acting on floating mobile ice chunks during the partially open-water season (Peterson 1965). In contrast to such ice rafts, many floating islands carry vegetation, perhaps including trees which act as sails. Burns et al examined the forces acting and concluded that comparatively low wind velocities are required to mobilise free-floating islands with vegetation standing two meters or more tall. The sighting of floating islands at sea is a rare event; such a thing is unscheduled, short-lived and usually undocumented. On July 4th, 1969, an island some 15 meters in diameter with 10-15 trees 10-12 meters tall was included in the daily notice to mariners as posing a shipping navigation hazard between Cuba and Haiti. McWhirter described the island as looking ... as though it were held together by a mangrove-type matting; there was some earth on it but it looked kind of bushy around the bottom, like there was dead foliage, grass-like material or something on the island itself. The trees were coming up out of that. It looked like the trees came right out of the surface brown layer. No roots were visible. By the 14th of July the island had apparently broken up and the parts had partially submerged so that only the upper tree trunks were above the water. By July 19th, no trace of the island was found after an intensive six hour search. Another example albeit freshwater, can be found in Victoria, Australia the floating islands of Pirron Yallock. Accounts of how the floating islands were formed have been given by local residents. These accounts have not been disputed in the scientific literature. Prior to 1938, the lake was an intermittent swamp which usually dried out in summer. A drainage channel had been excavated at the lowest point of the swamp at the northern part of its perimeter. This is likely to have encouraged the development or enlargement of a peat mat on the floor of the depression. Potatoes were grown in the centre of the depression where the peat rose to a slight mound. The peat was ignited by a fire in 1938 which burned from the dry edges towards a central damp section. A track was laid through the swamp last century and pavement work was carried out in 1929/30. This causeway restricted flow between the depression and its former southern arm. 
These roadworks, plus collapse and partial infilling of the northern drainage channel, created drainage conditions conducive to a transition from swamp to permanent lake. The transformation from swamp to lake was dramatic, occurring over the winter of 1952 when rainfall of around 250mm was well above average. Peat is very buoyant and the central raised section which had been isolated by the fire, broke away from the rocky, basalt floor as the water level rose in winter. The main island then broke up into several smaller islands which drifted slowly for up to 200 meters within the confines of the lake and ranged in size from 2 to 30 meters in diameter. The years immediately following experienced average or above average rainfall and the water level was maintained. Re-alignment of the highway in 1963 completely blocked the former southern outlet of the depression, further enhancing its ability to retain water. The road surface also provided an additional source of runoff to the depression. Anecdotal evidence indicates that the islands floated uninterrupted for 30 years following their formation. They generally moved between the NW and NE sides of the lake in response to the prevailing winds. In 1980, the Rural Water Commission issued a nearby motel a domestic licence to remove water from the lake and occasionally water is taken for the purpose of firefighting. The most significant amount taken for firefighting was during severe fires in February 1983. Since then, the Pirron Yallock islands have ceased to float, and this is thought to be related to a drop in the water level of approximately 600 mm over the past 10-15 years. The islands have either run aground on the bed or the lagoon or vegetation has attached them to the bed. Floating islands have attracted attention because they are uncommon and their behaviour has provided not only explanations for events in myth and legend but also great scope for discussion and speculation amongst scientific and other observers.
Scientists and local residents agree on how the Pirron Yallock Islands were formed.
e
id_3571
Islands That Float Islands are not known for their mobility but, occasionally it occurs. Natural floating islands have been recorded in many parts of the world (Burns et al 1985). Longevity studies in lakes have been carried out by Hesser, and in rivers and the open sea by Boughey (Smithsonian Institute 1970). They can form in two common ways: landslides of (usually vegetated) peaty soils into lakes or seawater or as a flotation of peat soils (usually bound by roots of woody vegetation) after storm surges, river floods or lake level risings. The capacity of the living part of a floating island to maintain its equilibrium in the face of destructive forces, such as fire, wave attack or hogging and sagging while riding sea or swell waves is a major obstacle. In general, ocean-going floating islands are most likely to be short-lived; wave wash-over gradually eliminates enough of the islands store of fresh water to deplete soil air and kill vegetation around the edges which, in turn, causes erosion and diminishes buoyancy and horizontal mobility. The forces acting on a floating island determine the speed and direction of movement and are very similar to those acting on floating mobile ice chunks during the partially open-water season (Peterson 1965). In contrast to such ice rafts, many floating islands carry vegetation, perhaps including trees which act as sails. Burns et al examined the forces acting and concluded that comparatively low wind velocities are required to mobilise free-floating islands with vegetation standing two meters or more tall. The sighting of floating islands at sea is a rare event; such a thing is unscheduled, short-lived and usually undocumented. On July 4th, 1969, an island some 15 meters in diameter with 10-15 trees 10-12 meters tall was included in the daily notice to mariners as posing a shipping navigation hazard between Cuba and Haiti. McWhirter described the island as looking ... as though it were held together by a mangrove-type matting; there was some earth on it but it looked kind of bushy around the bottom, like there was dead foliage, grass-like material or something on the island itself. The trees were coming up out of that. It looked like the trees came right out of the surface brown layer. No roots were visible. By the 14th of July the island had apparently broken up and the parts had partially submerged so that only the upper tree trunks were above the water. By July 19th, no trace of the island was found after an intensive six hour search. Another example albeit freshwater, can be found in Victoria, Australia the floating islands of Pirron Yallock. Accounts of how the floating islands were formed have been given by local residents. These accounts have not been disputed in the scientific literature. Prior to 1938, the lake was an intermittent swamp which usually dried out in summer. A drainage channel had been excavated at the lowest point of the swamp at the northern part of its perimeter. This is likely to have encouraged the development or enlargement of a peat mat on the floor of the depression. Potatoes were grown in the centre of the depression where the peat rose to a slight mound. The peat was ignited by a fire in 1938 which burned from the dry edges towards a central damp section. A track was laid through the swamp last century and pavement work was carried out in 1929/30. This causeway restricted flow between the depression and its former southern arm. 
These roadworks, plus collapse and partial infilling of the northern drainage channel, created drainage conditions conducive to a transition from swamp to permanent lake. The transformation from swamp to lake was dramatic, occurring over the winter of 1952 when rainfall of around 250mm was well above average. Peat is very buoyant and the central raised section which had been isolated by the fire, broke away from the rocky, basalt floor as the water level rose in winter. The main island then broke up into several smaller islands which drifted slowly for up to 200 meters within the confines of the lake and ranged in size from 2 to 30 meters in diameter. The years immediately following experienced average or above average rainfall and the water level was maintained. Re-alignment of the highway in 1963 completely blocked the former southern outlet of the depression, further enhancing its ability to retain water. The road surface also provided an additional source of runoff to the depression. Anecdotal evidence indicates that the islands floated uninterrupted for 30 years following their formation. They generally moved between the NW and NE sides of the lake in response to the prevailing winds. In 1980, the Rural Water Commission issued a nearby motel a domestic licence to remove water from the lake and occasionally water is taken for the purpose of firefighting. The most significant amount taken for firefighting was during severe fires in February 1983. Since then, the Pirron Yallock islands have ceased to float, and this is thought to be related to a drop in the water level of approximately 600 mm over the past 10-15 years. The islands have either run aground on the bed or the lagoon or vegetation has attached them to the bed. Floating islands have attracted attention because they are uncommon and their behaviour has provided not only explanations for events in myth and legend but also great scope for discussion and speculation amongst scientific and other observers.
Floating islands at sea sink because the plants on them eventually die.
n
id_3572
Issued by the Bank of New South Wales in 1816, Police Fund Notes were one of the first official notes in Australia and were well circulated throughout the 19th century. Their use continued up until 1910, around which time the Federal Government became responsible for issuing, monitoring and controlling all currencies that were used throughout the country. Once the Australian Notes Act was passed in 1910, it took three years for the Federal Government to issue the first series of Australian notes. The Government followed the British Imperial system where twelve pence made a shilling and twenty shillings made a pound. The same Act also stopped different states and their banks from issuing and circulating their own notes. The status of State notes as legal tender ceased from that time, resulting in the Commonwealth Treasury having full responsibility and control over issuing notes. In 1920, however, control was transferred to a Board of Directors directly appointed by the Commonwealth Government. By the end of 1924, a number of changes took place regarding the control of note-issuing, the most significant being the replacement of the Commonwealth Government Board of Directors by the Commonwealth Bank Board of Directors. Gradually, the Commonwealth Bank became the sole authority to issue Australian notes. This authority was formalised in 1945 by the Commonwealth Bank Act. In 1960, control was passed to another authority, the Reserve Bank of Australia (RBA), which took over the responsibility of central banking and the issuing of notes. In 1966 the RBA converted its currency from the Imperial system to decimal currency and named its standard currency the dollar. In the 1970s Australia experienced rapid growth in its economy and population. This growth meant that more currency would need to be printed, so the RBA began the construction of a new note printing complex in Melbourne. In 1981, the first batch of notes was printed in the new complex by the printing branch of the RBA which, in 1910, was officially named Note Printing Australia. In addition to larger-scale note printing, the RBA also concentrated on developing technologically advanced and complex note printing mechanisms to guard against counterfeiting. As a result of joint efforts by the RBA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO), revolutionary polymer notes were invented. Featuring exclusively a pictorial theme of settlement incorporating elements of Aboriginal culture, commemorative $10 polymer notes were introduced in 1988 as part of Australia's bicentennial celebrations. The basic idea of developing polymer notes originated from an experiment where the RBA attempted to insert an Optically Variable Device (OVD) in the notes so that counterfeiters could not copy them. Over the years, a process has evolved in the production of polymer note printing which involves several steps. Initially, blank sheets are made out of a special kind of surface material called Biaxially Oriented Polypropylene (BOPP), a non-fibrous and non-porous polymer used as an alternative to paper in note printing that has a distinctive feel when touched. Usually, a technique called Opacifying is then used to apply ink to each side of the sheet through a die-cut that has a sealed space in it for the OVD; no ink is placed in this area, so it remains transparent. The sheet is then ready for Intaglio Printing, a kind of printing which sets the ink in an embossed form, raising the printed elements: text, images, lines and other complicated shapes. 
The process then prints a see-through registration device by matching the images on both sides, dot by dot. If the images on both sides do not align perfectly, then the see-through device will not show any printing on it once the note is held up to a light source. As a special security feature, a Shadow Image Creation technique is then used, applying Optically Variable Ink (OVI), which allows the print on the reverse side to also be seen. All the notes then undergo a safety and functionality test where they are placed in front of a light source to check manually whether or not the reverse side can be seen. If the notes pass the test, it is assumed that the process has been successful. The process then moves to Micro Printing, which is the printing of text so small that it can only be read with a magnifying glass. The second-last phase of the process is Fluorescence Printing, where some texts are printed in such a way that they are only visible when viewed under ultraviolet (UV) light. The authenticity of a polymer note can be quickly established by holding it up to a UV light source: if some texts glow under the UV light, then the note is authentic. The last phase of the process is called varnishing, which is the over-coating of notes with a chemical that consists of drying oil, resin and thinner. This final phase makes the surfaces of the notes glossy and more durable. Despite significant developments in technology and control, some people argue that the life of polymer notes as currency in Australia will come to an end due to the widespread usage of electronic fund transfer cards (computer-based systems used to perform financial transactions electronically without physically exchanging notes or coins). Whether this will come to pass remains to be seen. One thing, however, seems certain: innovation of currency notes in Australia will continue into the foreseeable future.
Illustrations on the first Australian polymer note featured Australia's bicentenary.
c
id_3573
Issued by the Bank of New South Wales in 1816, Police Fund Notes were one of the first official notes in Australia and were widely circulated throughout the 19th century. Their use continued up until 1910, around which time the Federal Government became responsible for issuing, monitoring and controlling all currencies that were used throughout the country. Once the Australian Notes Act was passed in 1910, it took three years for the Federal Government to issue the first series of Australian notes. The Government followed the British Imperial system, where twelve pence made a shilling and twenty shillings made a pound. The same Act also stopped the different states and their banks from issuing and circulating their own notes. The status of State notes as legal tender ceased from that time, resulting in the Commonwealth Treasury having full responsibility and control over issuing notes. In 1920, however, control was transferred to a Board of Directors directly appointed by the Commonwealth Government. By the end of 1924, a number of changes had taken place regarding the control of note-issuing, the most significant being the replacement of the Commonwealth Government Board of Directors by the Commonwealth Bank Board of Directors. Gradually, the Commonwealth Bank became the sole authority to issue Australian notes. This authority was formalised in 1945 by the Commonwealth Bank Act. In 1960, control was passed to another authority, the Reserve Bank of Australia (RBA), which took over the responsibility of central banking and the issuing of notes. In 1966 the RBA converted its currency from the Imperial system to decimal currency and named its standard currency the dollar. In the 1970s Australia experienced rapid growth in its economy and population. This growth meant that more currency would need to be printed, so the RBA began the construction of a new note printing complex in Melbourne. In 1981, the first batch of notes was printed in the new complex by the printing branch of the RBA, which was later officially named Note Printing Australia. In addition to larger-scale note printing, the RBA also concentrated on developing technologically advanced and complex note printing mechanisms to guard against counterfeiting. As a result of joint efforts by the RBA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO), revolutionary polymer notes were invented. Featuring exclusively a pictorial theme of settlement incorporating elements of Aboriginal culture, commemorative $10 polymer notes were introduced in 1988 as part of Australia's bicentennial celebrations. The basic idea of developing polymer notes originated from an experiment in which the RBA attempted to insert an Optically Variable Device (OVD) in the notes so that counterfeiters could not copy them. Over the years, a process has evolved in the production of polymer note printing which involves several steps. Initially, blank sheets are made out of a special kind of surface material called Biaxially Oriented Polypropylene (BOPP), a non-fibrous and non-porous polymer used as an alternative to paper in note printing that has a distinctive feel when touched. Usually, a technique called opacifying is then used to apply ink to each side of the sheet through a die-cut that has a sealed space in it for the OVD; no ink is placed in this area, so it remains transparent. The sheet is then ready for Intaglio Printing, a kind of printing which sets the ink in an embossed form, raising the printed elements: text, images, lines and other complicated shapes.
The process then prints a see-through registration device by matching the images on both sides, dot by dot. If the images on both sides do not align perfectly, then the see-through device will not show any printing on it once the note is held up to a light source. As a special security feature, a Shadow Image Creation technique is then used by applying Optically Variable Ink (OVI), which allows the print on the reverse side to also be seen. All the notes then undergo a safety and functionality test where they are placed in front of a light source to check manually whether or not the reverse side can be seen. If the notes pass the test, it is assumed that the process has been successful. The process then moves to Micro Printing, which is the printing of text so small that it can only be read with a magnifying glass. The second-last phase of the process is Fluorescence Printing, where some text is printed in such a way that it is only visible when viewed under ultraviolet (UV) light. The authenticity of a polymer note can be quickly established by holding it up to a UV light source: if the text glows under the UV light, then the note is authentic. The last phase of the process is called varnishing, which is the over-coating of notes with a chemical that consists of drying oil, resin and thinner. This final phase makes the surfaces of the notes glossy and more durable. Despite significant developments in technology and control, some people argue that the life of polymer notes as currency in Australia will come to an end due to the widespread usage of electronic fund transfer cards. Whether this will come to pass remains to be seen. One thing, however, seems certain: innovation of currency notes in Australia will continue into the foreseeable future. (Electronic fund transfer cards are computer-based systems used to perform financial transactions electronically without physically exchanging notes or coins.)
The construction of the note printing complex in Melbourne was due to economic progress in Australia.
e
id_3574
Issued by the Bank of New South Wales in 1816, Police Fund Notes were one of the first official notes in Australia and were widely circulated throughout the 19th century. Their use continued up until 1910, around which time the Federal Government became responsible for issuing, monitoring and controlling all currencies that were used throughout the country. Once the Australian Notes Act was passed in 1910, it took three years for the Federal Government to issue the first series of Australian notes. The Government followed the British Imperial system, where twelve pence made a shilling and twenty shillings made a pound. The same Act also stopped the different states and their banks from issuing and circulating their own notes. The status of State notes as legal tender ceased from that time, resulting in the Commonwealth Treasury having full responsibility and control over issuing notes. In 1920, however, control was transferred to a Board of Directors directly appointed by the Commonwealth Government. By the end of 1924, a number of changes had taken place regarding the control of note-issuing, the most significant being the replacement of the Commonwealth Government Board of Directors by the Commonwealth Bank Board of Directors. Gradually, the Commonwealth Bank became the sole authority to issue Australian notes. This authority was formalised in 1945 by the Commonwealth Bank Act. In 1960, control was passed to another authority, the Reserve Bank of Australia (RBA), which took over the responsibility of central banking and the issuing of notes. In 1966 the RBA converted its currency from the Imperial system to decimal currency and named its standard currency the dollar. In the 1970s Australia experienced rapid growth in its economy and population. This growth meant that more currency would need to be printed, so the RBA began the construction of a new note printing complex in Melbourne. In 1981, the first batch of notes was printed in the new complex by the printing branch of the RBA, which was later officially named Note Printing Australia. In addition to larger-scale note printing, the RBA also concentrated on developing technologically advanced and complex note printing mechanisms to guard against counterfeiting. As a result of joint efforts by the RBA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO), revolutionary polymer notes were invented. Featuring exclusively a pictorial theme of settlement incorporating elements of Aboriginal culture, commemorative $10 polymer notes were introduced in 1988 as part of Australia's bicentennial celebrations. The basic idea of developing polymer notes originated from an experiment in which the RBA attempted to insert an Optically Variable Device (OVD) in the notes so that counterfeiters could not copy them. Over the years, a process has evolved in the production of polymer note printing which involves several steps. Initially, blank sheets are made out of a special kind of surface material called Biaxially Oriented Polypropylene (BOPP), a non-fibrous and non-porous polymer used as an alternative to paper in note printing that has a distinctive feel when touched. Usually, a technique called opacifying is then used to apply ink to each side of the sheet through a die-cut that has a sealed space in it for the OVD; no ink is placed in this area, so it remains transparent. The sheet is then ready for Intaglio Printing, a kind of printing which sets the ink in an embossed form, raising the printed elements: text, images, lines and other complicated shapes.
The process then prints a see-through registration device by matching the images on both sides, dot by dot. If the images on both sides do not align perfectly, then the see-through device will not show any printing on it once the note is held up to a light source. As a special security feature, a Shadow Image Creation technique is then used by applying Optically Variable Ink (OVI), which allows the print on the reverse side to also be seen. All the notes then undergo a safety and functionality test where they are placed in front of a light source to check manually whether or not the reverse side can be seen. If the notes pass the test, it is assumed that the process has been successful. The process then moves to Micro Printing, which is the printing of text so small that it can only be read with a magnifying glass. The second-last phase of the process is Fluorescence Printing, where some text is printed in such a way that it is only visible when viewed under ultraviolet (UV) light. The authenticity of a polymer note can be quickly established by holding it up to a UV light source: if the text glows under the UV light, then the note is authentic. The last phase of the process is called varnishing, which is the over-coating of notes with a chemical that consists of drying oil, resin and thinner. This final phase makes the surfaces of the notes glossy and more durable. Despite significant developments in technology and control, some people argue that the life of polymer notes as currency in Australia will come to an end due to the widespread usage of electronic fund transfer cards. Whether this will come to pass remains to be seen. One thing, however, seems certain: innovation of currency notes in Australia will continue into the foreseeable future. (Electronic fund transfer cards are computer-based systems used to perform financial transactions electronically without physically exchanging notes or coins.)
The first series of Australian notes was released in 1910.
c
id_3575
Issued by the Bank of New South Wales in 1816, Police Fund Notes were one of the first official notes in Australia and were widely circulated throughout the 19th century. Their use continued up until 1910, around which time the Federal Government became responsible for issuing, monitoring and controlling all currencies that were used throughout the country. Once the Australian Notes Act was passed in 1910, it took three years for the Federal Government to issue the first series of Australian notes. The Government followed the British Imperial system, where twelve pence made a shilling and twenty shillings made a pound. The same Act also stopped the different states and their banks from issuing and circulating their own notes. The status of State notes as legal tender ceased from that time, resulting in the Commonwealth Treasury having full responsibility and control over issuing notes. In 1920, however, control was transferred to a Board of Directors directly appointed by the Commonwealth Government. By the end of 1924, a number of changes had taken place regarding the control of note-issuing, the most significant being the replacement of the Commonwealth Government Board of Directors by the Commonwealth Bank Board of Directors. Gradually, the Commonwealth Bank became the sole authority to issue Australian notes. This authority was formalised in 1945 by the Commonwealth Bank Act. In 1960, control was passed to another authority, the Reserve Bank of Australia (RBA), which took over the responsibility of central banking and the issuing of notes. In 1966 the RBA converted its currency from the Imperial system to decimal currency and named its standard currency the dollar. In the 1970s Australia experienced rapid growth in its economy and population. This growth meant that more currency would need to be printed, so the RBA began the construction of a new note printing complex in Melbourne. In 1981, the first batch of notes was printed in the new complex by the printing branch of the RBA, which was later officially named Note Printing Australia. In addition to larger-scale note printing, the RBA also concentrated on developing technologically advanced and complex note printing mechanisms to guard against counterfeiting. As a result of joint efforts by the RBA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO), revolutionary polymer notes were invented. Featuring exclusively a pictorial theme of settlement incorporating elements of Aboriginal culture, commemorative $10 polymer notes were introduced in 1988 as part of Australia's bicentennial celebrations. The basic idea of developing polymer notes originated from an experiment in which the RBA attempted to insert an Optically Variable Device (OVD) in the notes so that counterfeiters could not copy them. Over the years, a process has evolved in the production of polymer note printing which involves several steps. Initially, blank sheets are made out of a special kind of surface material called Biaxially Oriented Polypropylene (BOPP), a non-fibrous and non-porous polymer used as an alternative to paper in note printing that has a distinctive feel when touched. Usually, a technique called opacifying is then used to apply ink to each side of the sheet through a die-cut that has a sealed space in it for the OVD; no ink is placed in this area, so it remains transparent. The sheet is then ready for Intaglio Printing, a kind of printing which sets the ink in an embossed form, raising the printed elements: text, images, lines and other complicated shapes.
The process then prints a see-through registration device by matching the images on both sides, dot by dot. If the images on both sides do not align perfectly, then the see-through device will not show any printing on it once the note is held up to a light source. As a special security feature, a Shadow Image Creation technique is then used by applying Optically Variable Ink (OVI), which allows the print on the reverse side to also be seen. All the notes then undergo a safety and functionality test where they are placed in front of a light source to check manually whether or not the reverse side can be seen. If the notes pass the test, it is assumed that the process has been successful. The process then moves to Micro Printing, which is the printing of text so small that it can only be read with a magnifying glass. The second-last phase of the process is Fluorescence Printing, where some text is printed in such a way that it is only visible when viewed under ultraviolet (UV) light. The authenticity of a polymer note can be quickly established by holding it up to a UV light source: if the text glows under the UV light, then the note is authentic. The last phase of the process is called varnishing, which is the over-coating of notes with a chemical that consists of drying oil, resin and thinner. This final phase makes the surfaces of the notes glossy and more durable. Despite significant developments in technology and control, some people argue that the life of polymer notes as currency in Australia will come to an end due to the widespread usage of electronic fund transfer cards. Whether this will come to pass remains to be seen. One thing, however, seems certain: innovation of currency notes in Australia will continue into the foreseeable future. (Electronic fund transfer cards are computer-based systems used to perform financial transactions electronically without physically exchanging notes or coins.)
The first notes issued by the Bank of New South Wales followed the British Imperial System.
n
id_3576
Issued by the Bank of New South Wales in 1816, Police Fund Notes were one of the first official notes in Australia and were widely circulated throughout the 19th century. Their use continued up until 1910, around which time the Federal Government became responsible for issuing, monitoring and controlling all currencies that were used throughout the country. Once the Australian Notes Act was passed in 1910, it took three years for the Federal Government to issue the first series of Australian notes. The Government followed the British Imperial system, where twelve pence made a shilling and twenty shillings made a pound. The same Act also stopped the different states and their banks from issuing and circulating their own notes. The status of State notes as legal tender ceased from that time, resulting in the Commonwealth Treasury having full responsibility and control over issuing notes. In 1920, however, control was transferred to a Board of Directors directly appointed by the Commonwealth Government. By the end of 1924, a number of changes had taken place regarding the control of note-issuing, the most significant being the replacement of the Commonwealth Government Board of Directors by the Commonwealth Bank Board of Directors. Gradually, the Commonwealth Bank became the sole authority to issue Australian notes. This authority was formalised in 1945 by the Commonwealth Bank Act. In 1960, control was passed to another authority, the Reserve Bank of Australia (RBA), which took over the responsibility of central banking and the issuing of notes. In 1966 the RBA converted its currency from the Imperial system to decimal currency and named its standard currency the dollar. In the 1970s Australia experienced rapid growth in its economy and population. This growth meant that more currency would need to be printed, so the RBA began the construction of a new note printing complex in Melbourne. In 1981, the first batch of notes was printed in the new complex by the printing branch of the RBA, which was later officially named Note Printing Australia. In addition to larger-scale note printing, the RBA also concentrated on developing technologically advanced and complex note printing mechanisms to guard against counterfeiting. As a result of joint efforts by the RBA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO), revolutionary polymer notes were invented. Featuring exclusively a pictorial theme of settlement incorporating elements of Aboriginal culture, commemorative $10 polymer notes were introduced in 1988 as part of Australia's bicentennial celebrations. The basic idea of developing polymer notes originated from an experiment in which the RBA attempted to insert an Optically Variable Device (OVD) in the notes so that counterfeiters could not copy them. Over the years, a process has evolved in the production of polymer note printing which involves several steps. Initially, blank sheets are made out of a special kind of surface material called Biaxially Oriented Polypropylene (BOPP), a non-fibrous and non-porous polymer used as an alternative to paper in note printing that has a distinctive feel when touched. Usually, a technique called opacifying is then used to apply ink to each side of the sheet through a die-cut that has a sealed space in it for the OVD; no ink is placed in this area, so it remains transparent. The sheet is then ready for Intaglio Printing, a kind of printing which sets the ink in an embossed form, raising the printed elements: text, images, lines and other complicated shapes.
The process then prints a see-through registration device by matching the images on both sides, dot by dot. If the images on both sides do not align perfectly, then the see-through device will not show any printing on it once the note is held up to a light source. As a special security feature, a Shadow Image Creation technique is then used by applying Optically Variable Ink (OVI), which allows the print on the reverse side to also be seen. All the notes then undergo a safety and functionality test where they are placed in front of a light source to check manually whether or not the reverse side can be seen. If the notes pass the test, it is assumed that the process has been successful. The process then moves to Micro Printing, which is the printing of text so small that it can only be read with a magnifying glass. The second-last phase of the process is Fluorescence Printing, where some text is printed in such a way that it is only visible when viewed under ultraviolet (UV) light. The authenticity of a polymer note can be quickly established by holding it up to a UV light source: if the text glows under the UV light, then the note is authentic. The last phase of the process is called varnishing, which is the over-coating of notes with a chemical that consists of drying oil, resin and thinner. This final phase makes the surfaces of the notes glossy and more durable. Despite significant developments in technology and control, some people argue that the life of polymer notes as currency in Australia will come to an end due to the widespread usage of electronic fund transfer cards. Whether this will come to pass remains to be seen. One thing, however, seems certain: innovation of currency notes in Australia will continue into the foreseeable future. (Electronic fund transfer cards are computer-based systems used to perform financial transactions electronically without physically exchanging notes or coins.)
Police Fund Notes were the first and only notes used in Australia.
c
id_3577
It has always been known that bright children from high income households perform better academically than bright children from low income households. This inequality places the bright child from a low income household at a considerable disadvantage, and this has repercussions for the rest of their lives. A bright child from a high income household is very likely to go to one of the country's top universities and is also very likely to enjoy a high income during their working lives. A bright child from a low income household is far less likely to win a place at any university, let alone the country's top colleges. They are also likely to earn no more than the national average wage during their working lives.
If true, the fact that some bright children from low income households do gain places at university would weaken the claim in the passage that bright children from low income households are far less likely to win a place at any university than bright children from high income households.
c
id_3578
It has always been known that bright children from high income households perform better academically than bright children from low income households. This inequality places the bright child from a low income household at a considerable disadvantage, and this has repercussions for the rest of their lives. A bright child from a high income household is very likely to go to one of the country's top universities and is also very likely to enjoy a high income during their working lives. A bright child from a low income household is far less likely to win a place at any university, let alone the country's top colleges. They are also likely to earn no more than the national average wage during their working lives.
The main theme in the passage is the advantages enjoyed by bright children from high income households.
c
id_3579
It has always been known that bright children from high income households perform better academically than bright children from low income households. This inequality places the bright child from a low income household at a considerable disadvantage, and this has repercussions for the rest of their lives. A bright child from a high income household is very likely to go to one of the country's top universities and is also very likely to enjoy a high income during their working lives. A bright child from a low income household is far less likely to win a place at any university, let alone the country's top colleges. They are also likely to earn no more than the national average wage during their working lives.
The author of the passage is likely to agree with the statement that a very bright child from a low income household is very likely to go to university.
c
id_3580
It has always been known that bright children from high income households perform better academically than bright children from low income households. This inequality places the bright child from a low income household at a considerable disadvantage, and this has repercussions for the rest of their lives. A bright child from a high income household is very likely to go to one of the country's top universities and is also very likely to enjoy a high income during their working lives. A bright child from a low income household is far less likely to win a place at any university, let alone the country's top colleges. They are also likely to earn no more than the national average wage during their working lives.
The fact that bright children from low income households do less well than bright children from high income families is not something that has only just been realized.
e
id_3581
It has always been known that bright children from high income households perform better academically than bright children from low income households. This inequality places the bright child from a low income household at a considerable disadvantage, and this has repercussions for the rest of their lives. A bright child from a high income household is very likely to go to one of the country's top universities and is also very likely to enjoy a high income during their working lives. A bright child from a low income household is far less likely to win a place at any university, let alone the country's top colleges. They are also likely to earn no more than the national average wage during their working lives.
In the context of the passage, high income household means one in which the combined income is in excess of $50,000 per annum.
n
id_3582
It has been the subject of controversial debate that half of the jobs which Labour created in 1997 have been filled by foreign workers. The Department of Work and Pensions claims that over 52% of jobs have gone to foreign workers. One recent eye-opener has been that the government has declared that more than 1.1 million overseas workers have come to Britain in the past 10 years, and not 8 million as previously disclosed. National statistics provided by the Home Office indicate that 1.5 million overseas workers have entered the UK over the last decade. However, in reply to this, the Department of Work and Pensions has claimed that the extra 400,000 workers were British residents who were born overseas. With such statistics, the findings seem to make a mockery of what the government had initially proposed: British jobs for every British worker.
48% of jobs have gone to British workers.
n
id_3583
It has been suggested that listening to music may develop an individual's imaginative ability. It can help people concentrate on thoughts, brainstorm ideas, aid creativity in the formation of art and inventions, and help people formulate solutions to complex tasks and theories. There have been examples of scientists who have exploited music to help them learn, to provide inspiration for introducing novel concepts and knowledge, and to find solutions to complicated scientific notions. For example, Albert Einstein would often listen to a violin piece while deep in thought, trying to solve physics problems.
If an individual listened to classical music, originality might be achieved.
n
id_3584
It has been suggested that listening to music may develop an individual's imaginative ability. It can help people concentrate on thoughts, brainstorm ideas, aid creativity in the formation of art and inventions, and help people formulate solutions to complex tasks and theories. There have been examples of scientists who have exploited music to help them learn, to provide inspiration for introducing novel concepts and knowledge, and to find solutions to complicated scientific notions. For example, Albert Einstein would often listen to a violin piece while deep in thought, trying to solve physics problems.
Listening to music will allow an individual to become more creative.
c
id_3585
It has been suggested that listening to music may develop an individual's imaginative ability. It can help people concentrate on thoughts, brainstorm ideas, aid creativity in the formation of art and inventions, and help people formulate solutions to complex tasks and theories. There have been examples of scientists who have exploited music to help them learn, to provide inspiration for introducing novel concepts and knowledge, and to find solutions to complicated scientific notions. For example, Albert Einstein would often listen to a violin piece while deep in thought, trying to solve physics problems.
Listening to music will enhance a scientist's creativity.
c
id_3586
It has been suggested that listening to music may develop an individual's imaginative ability. It can help people concentrate on thoughts, brainstorm ideas, aid creativity in the formation of art and inventions, and help people formulate solutions to complex tasks and theories. There have been examples of scientists who have exploited music to help them learn, to provide inspiration for introducing novel concepts and knowledge, and to find solutions to complicated scientific notions. For example, Albert Einstein would often listen to a violin piece while deep in thought, trying to solve physics problems.
Einstein developed solutions to various problems by playing a violin.
c
id_3587
It is a crime to harm or threaten to harm someone who has reported a crime or someone who has plans to testify in court about a crime. This is called Retaliation.
A robbery suspect tells his cellmate he wishes he could keep a witness from testifying against him in court.
c
id_3588
It is a crime to harm or threaten to harm someone who has reported a crime or someone who has plans to testify in court about a crime. This is called Retaliation.
Sue is called to be a witness in a robbery trial. The man she is to testify against bumps her shoulder in the crowded hallway in the courthouse and walks on, unaware of what he has done.
c
id_3589
It is a crime to harm or threaten to harm someone who has reported a crime or someone who has plans to testify in court about a crime. This is called Retaliation.
Ernie breaks Harold's nose with his fist. Ernie tells Harold he will break both of his legs if he reports him to the police. This situation is the best example of Retaliation.
e
id_3590
It is a crime to harm or threaten to harm someone who has reported a crime or someone who has plans to testify in court about a crime. This is called Retaliation.
Sally tells Larry a secret, and then says she is going to hit him if he tells the secret to Jeff.
c
id_3591
It is a great deal easier to motivate employees in a growing organisation than a declining one. When organisations are expanding and adding personnel, promotional opportunities, pay rises, and the excitement of being associated with a dynamic organisation create feelings of optimism. Management is able to use the growth to entice and encourage employees. When an organisation is shrinking, the best and most mobile workers are prone to leave voluntarily. Unfortunately, they are the ones the organisation can least afford to lose those with the highest skills and experience. The minor employees remain because their job options are limited. Morale also suffers during decline. People fear they may be the next to be made redundant. Productivity often suffers, as employees spend their time sharing rumours and providing one another with moral support rather than focusing on their jobs. For those whose jobs are secure, pay increases are rarely possible. Pay cuts, unheard of during times of growth, may even be imposed. The challenge to management is how to motivate employees under such retrenchment conditions. The ways of meeting this challenge can be broadly divided into six Key Points, which are outlined below. There is an abundance of evidence to support the motivational benefits that result from carefully matching people to jobs. For example, if the job is running a small business or an autonomous unit within a larger business, high achievers should be sought. However, if the job to be filled is a managerial post in a large bureaucratic organisation, a candidate who has a high need for power and a low need for affiliation should be selected. Accordingly, high achievers should not be put into jobs that are inconsistent with their needs. High achievers will do best when the job provides moderately challenging goals and where there is independence and feedback. However, it should be remembered that not everybody is motivated by jobs that are high in independence, variety and responsibility. The literature on goal-setting theory suggests that managers should ensure that all employees have specific goals and receive comments on how well they are doing in those goals. For those with high achievement needs, typically a minority in any organisation, the existence of external goals is less important because high achievers are already internally motivated. The next factor to be determined is whether the goals should be assigned by a manager or collectively set in conjunction with the employees. The answer to that depends on perceptions of goal acceptance and the organisations culture. If resistance to goals is expected, the use of participation in goal-setting should increase acceptance. If participation is inconsistent with the culture, however, goals should be assigned. If participation and the culture are incongruous, employees are likely to perceive the participation process as manipulative and be negatively affected by it. Regardless of whether goals are achievable or well within managements perceptions of the employees ability, if employees see them as unachievable they will reduce their effort. Managers must be sure, therefore, that employees feel confident that their efforts can lead to performance goals. For managers, this means that employees must have the capability of doing the job and must regard the appraisal process as valid. Since employees have different needs, what acts as a reinforcement for one may not for another. 
Managers could use their knowledge of each employee to personalise the rewards over which they have control. Some of the more obvious rewards that managers allocate include pay, promotions, autonomy, job scope and depth, and the opportunity to participate in goal-setting and decision-making. Managers need to make rewards contingent on performance. To reward factors other than performance will only reinforce those other factors. Key rewards such as pay increases and promotions or advancements should be allocated for the attainment of the employee's specific goals. Consistent with maximising the impact of rewards, managers should look for ways to increase their visibility. Eliminating the secrecy surrounding pay by openly communicating everyone's remuneration, publicising performance bonuses and allocating annual salary increases in a lump sum rather than spreading them out over an entire year are examples of actions that will make rewards more visible and potentially more motivating. The way rewards are distributed should be transparent so that employees perceive that rewards or outcomes are equitable and equal to the inputs given. On a simplistic level, experience, abilities, effort and other obvious inputs should explain differences in pay, responsibility and other obvious outcomes. The problem, however, is complicated by the existence of dozens of inputs and outcomes and by the fact that employee groups place different degrees of importance on them. For instance, a study comparing clerical and production workers identified nearly twenty inputs and outcomes. The clerical workers considered factors such as quality of work performed and job knowledge near the top of their list, but these were at the bottom of the production workers' list. Similarly, production workers thought that the most important inputs were intelligence and personal involvement with task accomplishment, two factors that were quite low in the importance ratings of the clerks. There were also important, though less dramatic, differences on the outcome side. For example, production workers rated advancement very highly, whereas clerical workers rated advancement in the lower third of their list. Such findings suggest that one person's equity is another's inequity, so an ideal reward system should probably weigh different inputs and outcomes according to employee group.
High achievers are well suited to team work.
c
id_3592
It is a great deal easier to motivate employees in a growing organisation than a declining one. When organisations are expanding and adding personnel, promotional opportunities, pay rises, and the excitement of being associated with a dynamic organisation create feelings of optimism. Management is able to use the growth to entice and encourage employees. When an organisation is shrinking, the best and most mobile workers are prone to leave voluntarily. Unfortunately, they are the ones the organisation can least afford to lose those with the highest skills and experience. The minor employees remain because their job options are limited. Morale also suffers during decline. People fear they may be the next to be made redundant. Productivity often suffers, as employees spend their time sharing rumours and providing one another with moral support rather than focusing on their jobs. For those whose jobs are secure, pay increases are rarely possible. Pay cuts, unheard of during times of growth, may even be imposed. The challenge to management is how to motivate employees under such retrenchment conditions. The ways of meeting this challenge can be broadly divided into six Key Points, which are outlined below. There is an abundance of evidence to support the motivational benefits that result from carefully matching people to jobs. For example, if the job is running a small business or an autonomous unit within a larger business, high achievers should be sought. However, if the job to be filled is a managerial post in a large bureaucratic organisation, a candidate who has a high need for power and a low need for affiliation should be selected. Accordingly, high achievers should not be put into jobs that are inconsistent with their needs. High achievers will do best when the job provides moderately challenging goals and where there is independence and feedback. However, it should be remembered that not everybody is motivated by jobs that are high in independence, variety and responsibility. The literature on goal-setting theory suggests that managers should ensure that all employees have specific goals and receive comments on how well they are doing in those goals. For those with high achievement needs, typically a minority in any organisation, the existence of external goals is less important because high achievers are already internally motivated. The next factor to be determined is whether the goals should be assigned by a manager or collectively set in conjunction with the employees. The answer to that depends on perceptions of goal acceptance and the organisations culture. If resistance to goals is expected, the use of participation in goal-setting should increase acceptance. If participation is inconsistent with the culture, however, goals should be assigned. If participation and the culture are incongruous, employees are likely to perceive the participation process as manipulative and be negatively affected by it. Regardless of whether goals are achievable or well within managements perceptions of the employees ability, if employees see them as unachievable they will reduce their effort. Managers must be sure, therefore, that employees feel confident that their efforts can lead to performance goals. For managers, this means that employees must have the capability of doing the job and must regard the appraisal process as valid. Since employees have different needs, what acts as a reinforcement for one may not for another. 
Managers could use their knowledge of each employee to personalise the rewards over which they have control. Some of the more obvious rewards that managers allocate include pay, promotions, autonomy, job scope and depth, and the opportunity to participate in goal-setting and decision-making. Managers need to make rewards contingent on performance. To reward factors other than performance will only reinforce those other factors. Key rewards such as pay increases and promotions or advancements should be allocated for the attainment of the employee's specific goals. Consistent with maximising the impact of rewards, managers should look for ways to increase their visibility. Eliminating the secrecy surrounding pay by openly communicating everyone's remuneration, publicising performance bonuses and allocating annual salary increases in a lump sum rather than spreading them out over an entire year are examples of actions that will make rewards more visible and potentially more motivating. The way rewards are distributed should be transparent so that employees perceive that rewards or outcomes are equitable and equal to the inputs given. On a simplistic level, experience, abilities, effort and other obvious inputs should explain differences in pay, responsibility and other obvious outcomes. The problem, however, is complicated by the existence of dozens of inputs and outcomes and by the fact that employee groups place different degrees of importance on them. For instance, a study comparing clerical and production workers identified nearly twenty inputs and outcomes. The clerical workers considered factors such as quality of work performed and job knowledge near the top of their list, but these were at the bottom of the production workers' list. Similarly, production workers thought that the most important inputs were intelligence and personal involvement with task accomplishment, two factors that were quite low in the importance ratings of the clerks. There were also important, though less dramatic, differences on the outcome side. For example, production workers rated advancement very highly, whereas clerical workers rated advancement in the lower third of their list. Such findings suggest that one person's equity is another's inequity, so an ideal reward system should probably weigh different inputs and outcomes according to employee group.
Some employees can feel manipulated when asked to participate in goal-setting.
e
id_3593
It is a great deal easier to motivate employees in a growing organisation than a declining one. When organisations are expanding and adding personnel, promotional opportunities, pay rises, and the excitement of being associated with a dynamic organisation create feelings of optimism. Management is able to use the growth to entice and encourage employees. When an organisation is shrinking, the best and most mobile workers are prone to leave voluntarily. Unfortunately, they are the ones the organisation can least afford to lose those with the highest skills and experience. The minor employees remain because their job options are limited. Morale also suffers during decline. People fear they may be the next to be made redundant. Productivity often suffers, as employees spend their time sharing rumours and providing one another with moral support rather than focusing on their jobs. For those whose jobs are secure, pay increases are rarely possible. Pay cuts, unheard of during times of growth, may even be imposed. The challenge to management is how to motivate employees under such retrenchment conditions. The ways of meeting this challenge can be broadly divided into six Key Points, which are outlined below. There is an abundance of evidence to support the motivational benefits that result from carefully matching people to jobs. For example, if the job is running a small business or an autonomous unit within a larger business, high achievers should be sought. However, if the job to be filled is a managerial post in a large bureaucratic organisation, a candidate who has a high need for power and a low need for affiliation should be selected. Accordingly, high achievers should not be put into jobs that are inconsistent with their needs. High achievers will do best when the job provides moderately challenging goals and where there is independence and feedback. However, it should be remembered that not everybody is motivated by jobs that are high in independence, variety and responsibility. The literature on goal-setting theory suggests that managers should ensure that all employees have specific goals and receive comments on how well they are doing in those goals. For those with high achievement needs, typically a minority in any organisation, the existence of external goals is less important because high achievers are already internally motivated. The next factor to be determined is whether the goals should be assigned by a manager or collectively set in conjunction with the employees. The answer to that depends on perceptions of goal acceptance and the organisations culture. If resistance to goals is expected, the use of participation in goal-setting should increase acceptance. If participation is inconsistent with the culture, however, goals should be assigned. If participation and the culture are incongruous, employees are likely to perceive the participation process as manipulative and be negatively affected by it. Regardless of whether goals are achievable or well within managements perceptions of the employees ability, if employees see them as unachievable they will reduce their effort. Managers must be sure, therefore, that employees feel confident that their efforts can lead to performance goals. For managers, this means that employees must have the capability of doing the job and must regard the appraisal process as valid. Since employees have different needs, what acts as a reinforcement for one may not for another. 
Managers could use their knowledge of each employee to personalise the rewards over which they have control. Some of the more obvious rewards that managers allocate include pay, promotions, autonomy, job scope and depth, and the opportunity to participate in goal-setting and decision-making. Managers need to make rewards contingent on performance. To reward factors other than performance will only reinforce those other factors. Key rewards such as pay increases and promotions or advancements should be allocated for the attainment of the employee's specific goals. Consistent with maximising the impact of rewards, managers should look for ways to increase their visibility. Eliminating the secrecy surrounding pay by openly communicating everyone's remuneration, publicising performance bonuses and allocating annual salary increases in a lump sum rather than spreading them out over an entire year are examples of actions that will make rewards more visible and potentially more motivating. The way rewards are distributed should be transparent so that employees perceive that rewards or outcomes are equitable and equal to the inputs given. On a simplistic level, experience, abilities, effort and other obvious inputs should explain differences in pay, responsibility and other obvious outcomes. The problem, however, is complicated by the existence of dozens of inputs and outcomes and by the fact that employee groups place different degrees of importance on them. For instance, a study comparing clerical and production workers identified nearly twenty inputs and outcomes. The clerical workers considered factors such as quality of work performed and job knowledge near the top of their list, but these were at the bottom of the production workers' list. Similarly, production workers thought that the most important inputs were intelligence and personal involvement with task accomplishment, two factors that were quite low in the importance ratings of the clerks. There were also important, though less dramatic, differences on the outcome side. For example, production workers rated advancement very highly, whereas clerical workers rated advancement in the lower third of their list. Such findings suggest that one person's equity is another's inequity, so an ideal reward system should probably weigh different inputs and outcomes according to employee group.
The staff appraisal process should be designed by employees.
n
id_3594
It is a great deal easier to motivate employees in a growing organisation than a declining one. When organisations are expanding and adding personnel, promotional opportunities, pay rises, and the excitement of being associated with a dynamic organisation create feelings of optimism. Management is able to use the growth to entice and encourage employees. When an organisation is shrinking, the best and most mobile workers are prone to leave voluntarily. Unfortunately, they are the ones the organisation can least afford to lose those with the highest skills and experience. The minor employees remain because their job options are limited. Morale also suffers during decline. People fear they may be the next to be made redundant. Productivity often suffers, as employees spend their time sharing rumours and providing one another with moral support rather than focusing on their jobs. For those whose jobs are secure, pay increases are rarely possible. Pay cuts, unheard of during times of growth, may even be imposed. The challenge to management is how to motivate employees under such retrenchment conditions. The ways of meeting this challenge can be broadly divided into six Key Points, which are outlined below. There is an abundance of evidence to support the motivational benefits that result from carefully matching people to jobs. For example, if the job is running a small business or an autonomous unit within a larger business, high achievers should be sought. However, if the job to be filled is a managerial post in a large bureaucratic organisation, a candidate who has a high need for power and a low need for affiliation should be selected. Accordingly, high achievers should not be put into jobs that are inconsistent with their needs. High achievers will do best when the job provides moderately challenging goals and where there is independence and feedback. However, it should be remembered that not everybody is motivated by jobs that are high in independence, variety and responsibility. The literature on goal-setting theory suggests that managers should ensure that all employees have specific goals and receive comments on how well they are doing in those goals. For those with high achievement needs, typically a minority in any organisation, the existence of external goals is less important because high achievers are already internally motivated. The next factor to be determined is whether the goals should be assigned by a manager or collectively set in conjunction with the employees. The answer to that depends on perceptions of goal acceptance and the organisations culture. If resistance to goals is expected, the use of participation in goal-setting should increase acceptance. If participation is inconsistent with the culture, however, goals should be assigned. If participation and the culture are incongruous, employees are likely to perceive the participation process as manipulative and be negatively affected by it. Regardless of whether goals are achievable or well within managements perceptions of the employees ability, if employees see them as unachievable they will reduce their effort. Managers must be sure, therefore, that employees feel confident that their efforts can lead to performance goals. For managers, this means that employees must have the capability of doing the job and must regard the appraisal process as valid. Since employees have different needs, what acts as a reinforcement for one may not for another. 
Managers could use their knowledge of each employee to personalise the rewards over which they have control. Some of the more obvious rewards that managers allocate include pay, promotions, autonomy, job scope and depth, and the opportunity to participate in goal-setting and decision-making. Managers need to make rewards contingent on performance. To reward factors other than performance will only reinforce those other factors. Key rewards such as pay increases and promotions or advancements should be allocated for the attainment of the employee's specific goals. Consistent with maximising the impact of rewards, managers should look for ways to increase their visibility. Eliminating the secrecy surrounding pay by openly communicating everyone's remuneration, publicising performance bonuses and allocating annual salary increases in a lump sum rather than spreading them out over an entire year are examples of actions that will make rewards more visible and potentially more motivating. The way rewards are distributed should be transparent so that employees perceive that rewards or outcomes are equitable and equal to the inputs given. On a simplistic level, experience, abilities, effort and other obvious inputs should explain differences in pay, responsibility and other obvious outcomes. The problem, however, is complicated by the existence of dozens of inputs and outcomes and by the fact that employee groups place different degrees of importance on them. For instance, a study comparing clerical and production workers identified nearly twenty inputs and outcomes. The clerical workers considered factors such as quality of work performed and job knowledge near the top of their list, but these were at the bottom of the production workers' list. Similarly, production workers thought that the most important inputs were intelligence and personal involvement with task accomplishment, two factors that were quite low in the importance ratings of the clerks. There were also important, though less dramatic, differences on the outcome side. For example, production workers rated advancement very highly, whereas clerical workers rated advancement in the lower third of their list. Such findings suggest that one person's equity is another's inequity, so an ideal reward system should probably weigh different inputs and outcomes according to employee group.
Employees earnings should be disclosed to everyone within the organization.
e
id_3595
It is a great deal easier to motivate employees in a growing organisation than a declining one. When organisations are expanding and adding personnel, promotional opportunities, pay rises, and the excitement of being associated with a dynamic organisation create feelings of optimism. Management is able to use the growth to entice and encourage employees. When an organisation is shrinking, the best and most mobile workers are prone to leave voluntarily. Unfortunately, they are the ones the organisation can least afford to lose those with the highest skills and experience. The minor employees remain because their job options are limited. Morale also suffers during decline. People fear they may be the next to be made redundant. Productivity often suffers, as employees spend their time sharing rumours and providing one another with moral support rather than focusing on their jobs. For those whose jobs are secure, pay increases are rarely possible. Pay cuts, unheard of during times of growth, may even be imposed. The challenge to management is how to motivate employees under such retrenchment conditions. The ways of meeting this challenge can be broadly divided into six Key Points, which are outlined below. There is an abundance of evidence to support the motivational benefits that result from carefully matching people to jobs. For example, if the job is running a small business or an autonomous unit within a larger business, high achievers should be sought. However, if the job to be filled is a managerial post in a large bureaucratic organisation, a candidate who has a high need for power and a low need for affiliation should be selected. Accordingly, high achievers should not be put into jobs that are inconsistent with their needs. High achievers will do best when the job provides moderately challenging goals and where there is independence and feedback. However, it should be remembered that not everybody is motivated by jobs that are high in independence, variety and responsibility. The literature on goal-setting theory suggests that managers should ensure that all employees have specific goals and receive comments on how well they are doing in those goals. For those with high achievement needs, typically a minority in any organisation, the existence of external goals is less important because high achievers are already internally motivated. The next factor to be determined is whether the goals should be assigned by a manager or collectively set in conjunction with the employees. The answer to that depends on perceptions of goal acceptance and the organisations culture. If resistance to goals is expected, the use of participation in goal-setting should increase acceptance. If participation is inconsistent with the culture, however, goals should be assigned. If participation and the culture are incongruous, employees are likely to perceive the participation process as manipulative and be negatively affected by it. Regardless of whether goals are achievable or well within managements perceptions of the employees ability, if employees see them as unachievable they will reduce their effort. Managers must be sure, therefore, that employees feel confident that their efforts can lead to performance goals. For managers, this means that employees must have the capability of doing the job and must regard the appraisal process as valid. Since employees have different needs, what acts as a reinforcement for one may not for another. 
Managers could use their knowledge of each employee to personalise the rewards over which they have control. Some of the more obvious rewards that managers allocate include pay, promotions, autonomy, job scope and depth, and the opportunity to participate in goal-setting and decision-making. Managers need to make rewards contingent on performance. To reward factors other than performance will only reinforce those other factors. Key rewards such as pay increases and promotions or advancements should be allocated for the attainment of the employees' specific goals. Consistent with maximising the impact of rewards, managers should look for ways to increase their visibility. Eliminating the secrecy surrounding pay by openly communicating everyone's remuneration, publicising performance bonuses and allocating annual salary increases in a lump sum rather than spreading them out over an entire year are examples of actions that will make rewards more visible and potentially more motivating. The way rewards are distributed should be transparent so that employees perceive that rewards or outcomes are equitable and equal to the inputs given. On a simplistic level, experience, abilities, effort and other obvious inputs should explain differences in pay, responsibility and other obvious outcomes. The problem, however, is complicated by the existence of dozens of inputs and outcomes and by the fact that employee groups place different degrees of importance on them. For instance, a study comparing clerical and production workers identified nearly twenty inputs and outcomes. The clerical workers considered factors such as quality of work performed and job knowledge near the top of their list, but these were at the bottom of the production workers' list. Similarly, production workers thought that the most important inputs were intelligence and personal involvement with task accomplishment, two factors that were quite low in the importance ratings of the clerks. There were also important, though less dramatic, differences on the outcome side. For example, production workers rated advancement very highly, whereas clerical workers rated advancement in the lower third of their list. Such findings suggest that one person's equity is another's inequity, so an ideal system should probably weigh different inputs and outcomes according to employee group.
A shrinking organization tends to lose its less-skilled employees rather than its more-skilled employees.
c
id_3596
It is a great deal easier to motivate employees in a growing organisation than a declining one. When organisations are expanding and adding personnel, promotional opportunities, pay rises, and the excitement of being associated with a dynamic organisation create feelings of optimism. Management is able to use the growth to entice and encourage employees. When an organisation is shrinking, the best and most mobile workers are prone to leave voluntarily. Unfortunately, they are the ones the organisation can least afford to lose: those with the highest skills and experience. The minor employees remain because their job options are limited. Morale also suffers during decline. People fear they may be the next to be made redundant. Productivity often suffers, as employees spend their time sharing rumours and providing one another with moral support rather than focusing on their jobs. For those whose jobs are secure, pay increases are rarely possible. Pay cuts, unheard of during times of growth, may even be imposed. The challenge to management is how to motivate employees under such retrenchment conditions. The ways of meeting this challenge can be broadly divided into six Key Points, which are outlined below. There is an abundance of evidence to support the motivational benefits that result from carefully matching people to jobs. For example, if the job is running a small business or an autonomous unit within a larger business, high achievers should be sought. However, if the job to be filled is a managerial post in a large bureaucratic organisation, a candidate who has a high need for power and a low need for affiliation should be selected. Accordingly, high achievers should not be put into jobs that are inconsistent with their needs. High achievers will do best when the job provides moderately challenging goals and where there is independence and feedback. However, it should be remembered that not everybody is motivated by jobs that are high in independence, variety and responsibility. The literature on goal-setting theory suggests that managers should ensure that all employees have specific goals and receive comments on how well they are doing in those goals. For those with high achievement needs, typically a minority in any organisation, the existence of external goals is less important because high achievers are already internally motivated. The next factor to be determined is whether the goals should be assigned by a manager or collectively set in conjunction with the employees. The answer to that depends on perceptions of goal acceptance and the organisation's culture. If resistance to goals is expected, the use of participation in goal-setting should increase acceptance. If participation is inconsistent with the culture, however, goals should be assigned. If participation and the culture are incongruous, employees are likely to perceive the participation process as manipulative and be negatively affected by it. Regardless of whether goals are achievable or well within management's perceptions of the employees' ability, if employees see them as unachievable they will reduce their effort. Managers must be sure, therefore, that employees feel confident that their efforts can lead to performance goals. For managers, this means that employees must have the capability of doing the job and must regard the appraisal process as valid. Since employees have different needs, what acts as a reinforcement for one may not for another.
Managers could use their knowledge of each employee to personalise the rewards over which they have control. Some of the more obvious rewards that managers allocate include pay, promotions, autonomy, job scope and depth, and the opportunity to participate in goal-setting and decision-making. Managers need to make rewards contingent on performance. To reward factors other than performance will only reinforce those other factors. Key rewards such as pay increases and promotions or advancements should be allocated for the attainment of the employees' specific goals. Consistent with maximising the impact of rewards, managers should look for ways to increase their visibility. Eliminating the secrecy surrounding pay by openly communicating everyone's remuneration, publicising performance bonuses and allocating annual salary increases in a lump sum rather than spreading them out over an entire year are examples of actions that will make rewards more visible and potentially more motivating. The way rewards are distributed should be transparent so that employees perceive that rewards or outcomes are equitable and equal to the inputs given. On a simplistic level, experience, abilities, effort and other obvious inputs should explain differences in pay, responsibility and other obvious outcomes. The problem, however, is complicated by the existence of dozens of inputs and outcomes and by the fact that employee groups place different degrees of importance on them. For instance, a study comparing clerical and production workers identified nearly twenty inputs and outcomes. The clerical workers considered factors such as quality of work performed and job knowledge near the top of their list, but these were at the bottom of the production workers' list. Similarly, production workers thought that the most important inputs were intelligence and personal involvement with task accomplishment, two factors that were quite low in the importance ratings of the clerks. There were also important, though less dramatic, differences on the outcome side. For example, production workers rated advancement very highly, whereas clerical workers rated advancement in the lower third of their list. Such findings suggest that one person's equity is another's inequity, so an ideal system should probably weigh different inputs and outcomes according to employee group.
It is easier to manage a small business than a large business.
n
id_3597
It is estimated that one in four children aged between 4 and 11 contract head lice each year. Children in urban schools are particularly prone. Girls and boys are infected by a ratio of three to one. This is related to the fact that girls' play tends to involve more prolonged contact between heads. Head lice live close to the scalp and feed on blood. The itch they cause is from the bite made when they are feeding. A louse lives for about 300 days and a female will lay over 2,000 eggs in her lifetime.
Boys aged between 4 and 11 years who go to urban schools are more likely to contract head lice than girls.
c
id_3598
It is estimated that one in four children aged between 4 and 11 contract head lice each year. Children in urban schools are particularly prone. Girls and boys are infected by a ratio of three to one. This is related to the fact that girls' play tends to involve more prolonged contact between heads. Head lice live close to the scalp and feed on blood. The itch they cause is from the bite made when they are feeding. A louse lives for about 300 days and a female will lay over 2,000 eggs in her lifetime.
The length of hair and the frequency with which it is washed are irrelevant to the likelihood of contracting head lice.
n
id_3599
It is estimated that one in four children aged between 4 and 11 contract head lice each year. Children in urban schools are particularly prone. Girls and boys are infected by a ratio of three to one. This is related to the fact that girls' play tends to involve more prolonged contact between heads. Head lice live close to the scalp and feed on blood. The itch they cause is from the bite made when they are feeding. A louse lives for about 300 days and a female will lay over 2,000 eggs in her lifetime.
A child can contract head lice when prolonged contact between heads occurs.
e