uid (string, lengths 4–7) | premise (string, lengths 19–9.21k) | hypothesis (string, lengths 13–488) | label (string, 3 classes) |
---|---|---|---|
id_2100 | Driving While Intoxicated occurs when a person is intoxicated while driving or operating a motor vehicle in a public place. | Having drunk several beers, Fred is driving his farm truck around in circles in the back pasture of the family farm. This situation is the best example of Driving While Intoxicated. | c |
id_2101 | Driving While Intoxicated occurs when a person is intoxicated while driving or operating a motor vehicle in a public place. | Officer Marques pulls a man over for weaving in traffic and discovers the driver was falling asleep at the wheel. This situation is the best example of Driving While Intoxicated. | c |
id_2102 | Driving While Intoxicated occurs when a person is intoxicated while driving or operating a motor vehicle in a public place. | Sandra drives her car off of the highway and into a telephone pole after drinking shots of tequila for four hours at Pete's Corner Bar. This situation is the best example of Driving While Intoxicated. | e |
id_2103 | Driving in snowy and icy conditions can be dangerous as it increases stopping distances. The stopping distance represents how far a car will travel before slowing down to a halt. It is made up of the thinking distance, which is how long it takes for a driver to react, and the braking distance, which represents the time taken for the brakes to fully stop the car. It is advised to fit winter tyres during the winter season as these have much better grip on snow and ice. These tyres are made out of a softer material than regular tyres, allowing them to have more traction at colder temperatures. This ultimately reduces the braking distance. As the brakes are not very effective at stopping a vehicle on icy roads, it is recommended to steer out of trouble if possible rather than applying the brakes. It is therefore important to be travelling at slower speeds to avoid the need to suddenly brake. If the car is stuck and cannot move, the driver can stay warm by running the engine to generate heat. However, if the exhaust pipe becomes blocked by snow and the fumes cannot escape then the engine must be turned off. This is because the engine produces carbon monoxide, which is extremely toxic and odourless. | It is always safe to run the engine for heat in cold conditions | c |
id_2104 | Driving in snowy and icy conditions can be dangerous as it increases stopping distances. The stopping distance represents how far a car will travel before slowing down to a halt. It is made up of the thinking distance, which is how long it takes for a driver to react, and the braking distance, which represents the time taken for the brakes to fully stop the car. It is advised to fit winter tyres during the winter season as these have much better grip on snow and ice. These tyres are made out of a softer material than regular tyres, allowing them to have more traction at colder temperatures. This ultimately reduces the braking distance. As the brakes are not very effective at stopping a vehicle on icy roads, it is recommended to steer out of trouble if possible rather than applying the brakes. It is therefore important to be travelling at slower speeds to avoid the need to suddenly brake. If the car is stuck and cannot move, the driver can stay warm by running the engine to generate heat. However, if the exhaust pipe becomes blocked by snow and the fumes cannot escape then the engine must be turned off. This is because the engine produces carbon monoxide, which is extremely toxic and odourless. | It is dangerous to use winter tyres in hot summer conditions. | n |
id_2105 | Driving in snowy and icy conditions can be dangerous as it increases stopping distances. The stopping distance represents how far a car will travel before slowing down to a halt. It is made up of the thinking distance, which is how long it takes for a driver to react, and the braking distance, which represents the time taken for the brakes to fully stop the car. It is advised to fit winter tyres during the winter season as these have much better grip on snow and ice. These tyres are made out of a softer material than regular tyres, allowing them to have more traction at colder temperatures. This ultimately reduces the braking distance. As the brakes are not very effective at stopping a vehicle on icy roads, it is recommended to steer out of trouble if possible rather than applying the brakes. It is therefore important to be travelling at slower speeds to avoid the need to suddenly brake. If the car is stuck and cannot move, the driver can stay warm by running the engine to generate heat. However, if the exhaust pipe becomes blocked by snow and the fumes cannot escape then the engine must be turned off. This is because the engine produces carbon monoxide, which is extremely toxic and odourless. | To avoid a collision on icy surfaces, it is important to gently apply the brakes. | c |
id_2106 | Driving in snowy and icy conditions can be dangerous as it increases stopping distances. The stopping distance represents how far a car will travel before slowing down to a halt. It is made up of the thinking distance, which is how long it takes for a driver to react, and the braking distance, which represents the time taken for the brakes to fully stop the car. It is advised to fit winter tyres during the winter season as these have much better grip on snow and ice. These tyres are made out of a softer material than regular tyres, allowing them to have more traction at colder temperatures. This ultimately reduces the braking distance. As the brakes are not very effective at stopping a vehicle on icy roads, it is recommended to steer out of trouble if possible rather than applying the brakes. It is therefore important to be travelling at slower speeds to avoid the need to suddenly brake. If the car is stuck and cannot move, the driver can stay warm by running the engine to generate heat. However, if the exhaust pipe becomes blocked by snow and the fumes cannot escape then the engine must be turned off. This is because the engine produces carbon monoxide, which is extremely toxic and odourless. | Driving whilst tired increases the braking distance. | c |
id_2107 | Driving in snowy and icy conditions can be dangerous as it increases stopping distances. The stopping distance represents how far a car will travel before slowing down to a halt. It is made up of the thinking distance, which is how long it takes for a driver to react, and the braking distance, which represents the time taken for the brakes to fully stop the car. It is advised to fit winter tyres during the winter season as these have much better grip on snow and ice. These tyres are made out of a softer material than regular tyres, allowing them to have more traction at colder temperatures. This ultimately reduces the braking distance. As the brakes are not very effective at stopping a vehicle on icy roads, it is recommended to steer out of trouble if possible rather than applying the brakes. It is therefore important to be travelling at slower speeds to avoid the need to suddenly brake. If the car is stuck and cannot move, the driver can stay warm by running the engine to generate heat. However, if the exhaust pipe becomes blocked by snow and the fumes cannot escape then the engine must be turned off. This is because the engine produces carbon monoxide, which is extremely toxic and odourless. | Regular tyres are more dangerous than winter tyres in cold conditions as they are harder | e |
id_2108 | Due to Private Finance Initiatives (PFIs), schools and hospitals are being built that the government would otherwise not be able to afford. PFIs are currently preferred by government over traditional public procurement for the building of schools, hospitals, social housing and prisons. Straight public procurement is notorious for cost overruns and delays while PFI projects rarely suffer either. Another advantage and the most important for the government is that PFIs are what are called off-balance-sheet expenditure. The cost of building is not funded by the government upfront. Instead the private sector pays for the building and the government lease the building from the private owner and guarantee an annual rent. This allows the government to spread out the cost and so PFIs help the Treasury to balance the books. | Given the advantages of PFI over public procurement it is hard to imagine that public procurement will again be used to fund the building of public buildings. | n |
id_2109 | Due to Private Finance Initiatives (PFIs), schools and hospitals are being built that the government would otherwise not be able to afford. PFIs are currently preferred by government over traditional public procurement for the building of schools, hospitals, social housing and prisons. Straight public procurement is notorious for cost overruns and delays while PFI projects rarely suffer either. Another advantage and the most important for the government is that PFIs are what are called off-balance-sheet expenditure. The cost of building is not funded by the government upfront. Instead the private sector pays for the building and the government lease the building from the private owner and guarantee an annual rent. This allows the government to spread out the cost and so PFIs help the Treasury to balance the books. | Even if all initiatives, both PFI and public procurement, suffered the same delay and overspend, government may still prefer the PFI route. | e |
id_2110 | Due to Private Finance Initiatives (PFIs), schools and hospitals are being built that the government would otherwise not be able to afford. PFIs are currently preferred by government over traditional public procurement for the building of schools, hospitals, social housing and prisons. Straight public procurement is notorious for cost overruns and delays while PFI projects rarely suffer either. Another advantage and the most important for the government is that PFIs are what are called off-balance-sheet expenditure. The cost of building is not funded by the government upfront. Instead the private sector pays for the building and the government lease the building from the private owner and guarantee an annual rent. This allows the government to spread out the cost and so PFIs help the Treasury to balance the books. | The last sentence would be less open to misinterpretation if it was rewritten to read This allows the government to spread out the cost of building and so PFIs help the Treasury to balance the books. | c |
id_2111 | Dumbards champagne house, a company based in the south of France, employ local workers only. The reason behind this policy is that Dumbards promote their products as home-crafted, lovingly-made. They believe that by employing local workers, their products appear more exclusive, and therefore, more expensive. In recruiting its workers, Dumbards place advertisements in the local newspaper and shop windows. They are heavily sceptical about the internet and refuse to advertise on it. Another way in which Dumbards recruit their staff is via recommendations from current employees. In September last year, the company was taken over by an American businessman who is keen to rebrand the company. He wants Dumbards to be more metropolitan and intends to open the selection process to workers from other EU countries. This idea has been controversial and may lead to strike action by current employees. | Dumbards employ local workers as it is cheaper | c |
id_2112 | Dumbards champagne house, a company based in the south of France, employ local workers only. The reason behind this policy is that Dumbards promote their products as home-crafted, lovingly-made. They believe that by employing local workers, their products appear more exclusive, and therefore, more expensive. In recruiting its workers, Dumbards place advertisements in the local newspaper and shop windows. They are heavily sceptical about the internet and refuse to advertise on it. Another way in which Dumbards recruit their staff is via recommendations from current employees. In September last year, the company was taken over by an American businessman who is keen to rebrand the company. He wants Dumbards to be more metropolitan and intends to open the selection process to workers from other EU countries. This idea has been controversial and may lead to strike action by current employees. | Dumbards rely exclusively on recommendations from current employees | c |
id_2113 | Dumbards champagne house, a company based in the south of France, employ local workers only. The reason behind this policy is that Dumbards promote their products as home-crafted, lovingly-made. They believe that by employing local workers, their products appear more exclusive, and therefore, more expensive. In recruiting its workers, Dumbards place advertisements in the local newspaper and shop windows. They are heavily sceptical about the internet and refuse to advertise on it. Another way in which Dumbards recruit their staff is via recommendations from current employees. In September last year, the company was taken over by an American businessman who is keen to rebrand the company. He wants Dumbards to be more metropolitan and intends to open the selection process to workers from other EU countries. This idea has been controversial and may lead to strike action by current employees. | Dumbards refuse to use internet advertising due to bad experiences | c |
id_2114 | Dumbards champagne house, a company based in the south of France, employ local workers only. The reason behind this policy is that Dumbards promote their products as home-crafted, lovingly-made. They believe that by employing local workers, their products appear more exclusive, and therefore, more expensive. In recruiting its workers, Dumbards place advertisements in the local newspaper and shop windows. They are heavily sceptical about the internet and refuse to advertise on it. Another way in which Dumbards recruit their staff is via recommendations from current employees. In September last year, the company was taken over by an American businessman who is keen to rebrand the company. He wants Dumbards to be more metropolitan and intends to open the selection process to workers from other EU countries. This idea has been controversial and may lead to strike action by current employees. | Dumbards only employ local workers to appear exclusive. | e |
id_2115 | Dummy pills There is an ongoing debate about the merits and the ethics of using placebos, sometimes called sugar pills. The placebo effect is well documented though not completely understood. It refers to the apparent benefits, both psychological and physiological, of taking a medication or receiving a treatment that you expect will improve your health, when in fact the tablet contains no active ingredients and the treatment has never been proven. Any benefit that arises from a placebo originates solely in the mind of the person taking it. The therapeutic effect can be either real and measurable or perceived and imagined. The placebo effect is a headache for drug manufacturers. Guinea pig patients, that is to say, those who volunteer for a new treatment, may show positive health gains from the placebo effect that masks the response to the treatment. This has led to the introduction of double-blind trials, experiments where neither the patient nor the healthcare professional observing the patient knows whether a placebo has been used or not. So, for example, in a randomized control trial (RCT), patients are selected at random and half the patients are given the new medication and half are given a placebo tablet that looks just the same. The observer is also blind to the treatment to avoid bias. If the observer knows which patients are receiving the real treatment they may be tempted to look harder for greater health improvements in these people in comparison with those on the placebo. Whilst the case for placebos in drug trials appears to be justified, there are ethical issues to consider when using placebos. In particular, the need to discontinue placebos in clinical trials in favour of real medication that is found to work, and whether a placebo should ever be prescribed in place of a real treatment without the patient ever knowing. In the first circumstance, it would be unethical to deny patients a new and effective treatment in a clinical trial and also unethical to stop patients from taking their existing tablets so that they can enter a trial. These two ethical perspectives are easy to understand. What is perhaps less clear is the distinction between a placebo that may have therapeutic value and a quack cure which makes claims without any supporting evidence. Quackery was at its height at the end of the nineteenth century, when so-called men of medicine peddled fake remedies claiming that all manner of diseases and afflictions could be cured. The modern equivalent of these quack cures is complementary and alternative medicine (CAM), which is unable to substantiate the claims it makes. There are dozens of these treatments, though the best-known are perhaps acupuncture, homeopathy, osteopathy and reflexology. There is anecdotal evidence from patients that these treatments are effective but no scientific basis to support the evidence. Whilst recipients of complementary and alternative medicine (CAM) can find the treatment to be therapeutic, it is not possible to distinguish these benefits from the placebo effect. Consequently it is important not to turn to alternative therapies too early but to adhere to modern scientific treatments. Complementary therapies are by definition intended to be used alongside traditional medicine as an adjunct treatment to obtain, at the very least, a placebo effect. With either complementary or alternative therapies the patient may notice an improvement in their health and link it with the therapy, when in fact it is the psychological benefit derived from a bit of pampering in a relaxing environment that has led to feelings of improvement, or it could be nature taking its course. Patients enter into a clinical trial in the full knowledge that they have a 50/50 chance of receiving the new drug or the placebo. An ethical dilemma arises when a placebo is considered as a treatment in its own right; for example, in patients whose problems appear to be all in the mind. Whilst a placebo is by definition harmless and the placebo effect is normally therapeutic, the practice is ethically dubious because the patient is being deceived into believing that the treatment is authentic. The person prescribing the placebo may hold the view that the treatment can be justified as long as it leads to an improvement in the patient's health. However, benevolent efforts of this type are based on a deception that could, if it came to light, jeopardize the relationship between the physician and the patient. It is a small step from prescribing a placebo to believing that the physician always knows best, thereby denying patients the right to judge for themselves what is best for their own bodies. Whilst it is entirely proper for healthcare professionals to act at all times in patients' best interests, honesty is usually the best policy where medical treatments are concerned, in which case dummy pills have no place in modern medicine outside of clinical trials. On the other hand, complementary medicine, whilst lacking scientific foundations, should not be considered unethical if it is able to demonstrate therapeutic benefits, even if only a placebo effect, as long as patients are not given false hopes nor hold unrealistic expectations, and are aware that the treatment remains unproven. | People turn to complementary and alternative therapies too early. | n |
id_2116 | Dummy pills There is an ongoing debate about the merits and the ethics of using placebos, sometimes called sugar pills. The placebo effect is well documented though not completely understood. It refers to the apparent benefits, both psychological and physiological, of taking a medication or receiving a treatment that you expect will improve your health, when in fact the tablet contains no active ingredients and the treatment has never been proven. Any benefit that arises from a placebo originates solely in the mind of the person taking it. The therapeutic effect can be either real and measurable or perceived and imagined. The placebo effect is a headache for drug manufacturers. Guinea pig patients, that is to say, those who volunteer for a new treatment, may show positive health gains from the placebo effect that masks the response to the treatment. This has led to the introduction of double-blind trials, experiments where neither the patient nor the healthcare professional observing the patient knows whether a placebo has been used or not. So, for example, in a randomized control trial (RCT), patients are selected at random and half the patients are given the new medication and half are given a placebo tablet that looks just the same. The observer is also blind to the treatment to avoid bias. If the observer knows which patients are receiving the real treatment they may be tempted to look harder for greater health improvements in these people in comparison with those on the placebo. Whilst the case for placebos in drug trials appears to be justified, there are ethical issues to consider when using placebos. In particular, the need to discontinue placebos in clinical trials in favour of real medication that is found to work, and whether a placebo should ever be prescribed in place of a real treatment without the patient ever knowing. In the first circumstance, it would be unethical to deny patients a new and effective treatment in a clinical trial and also unethical to stop patients from taking their existing tablets so that they can enter a trial. These two ethical perspectives are easy to understand. What is perhaps less clear is the distinction between a placebo that may have therapeutic value and a quack cure which makes claims without any supporting evidence. Quackery was at its height at the end of the nineteenth century, when so-called men of medicine peddled fake remedies claiming that all manner of diseases and afflictions could be cured. The modern equivalent of these quack cures is complementary and alternative medicine (CAM), which is unable to substantiate the claims it makes. There are dozens of these treatments, though the best-known are perhaps acupuncture, homeopathy, osteopathy and reflexology. There is anecdotal evidence from patients that these treatments are effective but no scientific basis to support the evidence. Whilst recipients of complementary and alternative medicine (CAM) can find the treatment to be therapeutic, it is not possible to distinguish these benefits from the placebo effect. Consequently it is important not to turn to alternative therapies too early but to adhere to modern scientific treatments. Complementary therapies are by definition intended to be used alongside traditional medicine as an adjunct treatment to obtain, at the very least, a placebo effect. With either complementary or alternative therapies the patient may notice an improvement in their health and link it with the therapy, when in fact it is the psychological benefit derived from a bit of pampering in a relaxing environment that has led to feelings of improvement, or it could be nature taking its course. Patients enter into a clinical trial in the full knowledge that they have a 50/50 chance of receiving the new drug or the placebo. An ethical dilemma arises when a placebo is considered as a treatment in its own right; for example, in patients whose problems appear to be all in the mind. Whilst a placebo is by definition harmless and the placebo effect is normally therapeutic, the practice is ethically dubious because the patient is being deceived into believing that the treatment is authentic. The person prescribing the placebo may hold the view that the treatment can be justified as long as it leads to an improvement in the patient's health. However, benevolent efforts of this type are based on a deception that could, if it came to light, jeopardize the relationship between the physician and the patient. It is a small step from prescribing a placebo to believing that the physician always knows best, thereby denying patients the right to judge for themselves what is best for their own bodies. Whilst it is entirely proper for healthcare professionals to act at all times in patients' best interests, honesty is usually the best policy where medical treatments are concerned, in which case dummy pills have no place in modern medicine outside of clinical trials. On the other hand, complementary medicine, whilst lacking scientific foundations, should not be considered unethical if it is able to demonstrate therapeutic benefits, even if only a placebo effect, as long as patients are not given false hopes nor hold unrealistic expectations, and are aware that the treatment remains unproven. | Health improvements following complementary or alternative therapies may not have been caused by the therapies. | e |
id_2117 | Dummy pills There is an ongoing debate about the merits and the ethics of using placebos, sometimes called sugar pills. The placebo effect is well documented though not completely understood. It refers to the apparent benefits, both psychological and physiological, of taking a medication or receiving a treatment that you expect will improve your health, when in fact the tablet contains no active ingredients and the treatment has never been proven. Any benefit that arises from a placebo originates solely in the mind of the person taking it. The therapeutic effect can be either real and measurable or perceived and imagined. The placebo effect is a headache for drug manufacturers. Guinea pig patients, that is to say, those who volunteer for a new treatment, may show positive health gains from the placebo effect that masks the response to the treatment. This has led to the introduction of double-blind trials, experiments where neither the patient nor the healthcare professional observing the patient knows whether a placebo has been used or not. So, for example, in a randomized control trial (RCT), patients are selected at random and half the patients are given the new medication and half are given a placebo tablet that looks just the same. The observer is also blind to the treatment to avoid bias. If the observer knows which patients are receiving the real treatment they may be tempted to look harder for greater health improvements in these people in comparison with those on the placebo. Whilst the case for placebos in drug trials appears to be justified, there are ethical issues to consider when using placebos. In particular, the need to discontinue placebos in clinical trials in favour of real medication that is found to work, and whether a placebo should ever be prescribed in place of a real treatment without the patient ever knowing. In the first circumstance, it would be unethical to deny patients a new and effective treatment in a clinical trial and also unethical to stop patients from taking their existing tablets so that they can enter a trial. These two ethical perspectives are easy to understand. What is perhaps less clear is the distinction between a placebo that may have therapeutic value and a quack cure which makes claims without any supporting evidence. Quackery was at its height at the end of the nineteenth century, when so-called men of medicine peddled fake remedies claiming that all manner of diseases and afflictions could be cured. The modern equivalent of these quack cures is complementary and alternative medicine (CAM), which is unable to substantiate the claims it makes. There are dozens of these treatments, though the best-known are perhaps acupuncture, homeopathy, osteopathy and reflexology. There is anecdotal evidence from patients that these treatments are effective but no scientific basis to support the evidence. Whilst recipients of complementary and alternative medicine (CAM) can find the treatment to be therapeutic, it is not possible to distinguish these benefits from the placebo effect. Consequently it is important not to turn to alternative therapies too early but to adhere to modern scientific treatments. Complementary therapies are by definition intended to be used alongside traditional medicine as an adjunct treatment to obtain, at the very least, a placebo effect. With either complementary or alternative therapies the patient may notice an improvement in their health and link it with the therapy, when in fact it is the psychological benefit derived from a bit of pampering in a relaxing environment that has led to feelings of improvement, or it could be nature taking its course. Patients enter into a clinical trial in the full knowledge that they have a 50/50 chance of receiving the new drug or the placebo. An ethical dilemma arises when a placebo is considered as a treatment in its own right; for example, in patients whose problems appear to be all in the mind. Whilst a placebo is by definition harmless and the placebo effect is normally therapeutic, the practice is ethically dubious because the patient is being deceived into believing that the treatment is authentic. The person prescribing the placebo may hold the view that the treatment can be justified as long as it leads to an improvement in the patient's health. However, benevolent efforts of this type are based on a deception that could, if it came to light, jeopardize the relationship between the physician and the patient. It is a small step from prescribing a placebo to believing that the physician always knows best, thereby denying patients the right to judge for themselves what is best for their own bodies. Whilst it is entirely proper for healthcare professionals to act at all times in patients' best interests, honesty is usually the best policy where medical treatments are concerned, in which case dummy pills have no place in modern medicine outside of clinical trials. On the other hand, complementary medicine, whilst lacking scientific foundations, should not be considered unethical if it is able to demonstrate therapeutic benefits, even if only a placebo effect, as long as patients are not given false hopes nor hold unrealistic expectations, and are aware that the treatment remains unproven. | Complementary medicine should be used separately from traditional medicine. | c |
id_2118 | Dummy pills: There is an ongoing debate about the merits and the ethics of using placebos, sometimes called sugar pills. The placebo effect is well documented though not completely understood. It refers to the apparent benefits, both psychological and physiological, of taking a medication or receiving a treatment that you expect will improve your health, when in fact the tablet contains no active ingredients and the treatment has never been proven. Any benefit that arises from a placebo originates solely in the mind of the person taking it. The therapeutic effect can be either real and measurable or perceived and imagined. The placebo effect is a headache for drug manufacturers. Guinea pig patients, that is to say, those who volunteer for a new treatment, may show positive health gains from the placebo effect that masks the response to the treatment. This has led to the introduction of double-blind trials: experiments where neither the patient nor the healthcare professional observing the patient knows whether a placebo has been used or not. So, for example, in a randomized controlled trial (RCT), patients are selected at random and half the patients are given the new medication and half are given a placebo tablet that looks just the same. The observer is also blind to the treatment to avoid bias. If the observer knows which patients are receiving the real treatment they may be tempted to look harder for greater health improvements in these people in comparison with those on the placebo. Whilst the case for placebos in drug trials appears to be justified, there are ethical issues to consider when using placebos. In particular, the need to discontinue placebos in clinical trials in favour of real medication that is found to work, and whether a placebo should ever be prescribed in place of a real treatment without the patient ever knowing. 
In the first circumstance, it would be unethical to deny patients a new and effective treatment in a clinical trial and also unethical to stop patients from taking their existing tablets so that they can enter a trial. These two ethical perspectives are easy to understand. What is perhaps less clear is the distinction between a placebo that may have therapeutic value and a quack cure which makes claims without any supporting evidence. Quackery was at its height at the end of the nineteenth century, when so-called men of medicine peddled fake remedies claiming that all manner of diseases and afflictions could be cured. The modern equivalent of these quack cures is complementary and alternative medicine (CAM), which is unable to substantiate the claims it makes. There are dozens of these treatments, though the best-known are perhaps acupuncture, homeopathy, osteopathy and reflexology. There is anecdotal evidence from patients that these treatments are effective but no scientific basis to support the evidence. Whilst recipients of complementary and alternative medicine (CAM) can find the treatment to be therapeutic, it is not possible to distinguish these benefits from the placebo effect. Consequently it is important not to turn to alternative therapies too early but to adhere to modern scientific treatments. Complementary therapies are by definition intended to be used alongside traditional medicine as an adjunct treatment to obtain, at the very least, a placebo effect. With either complementary or alternative therapies the patient may notice an improvement in their health and link it with the therapy, when in fact it is the psychological benefit derived from a bit of pampering in a relaxing environment that has led to feelings of improvement, or it could be nature taking its course. Patients enter into a clinical trial in the full knowledge that they have a 50/50 chance of receiving the new drug or the placebo. 
An ethical dilemma arises when a placebo is considered as a treatment in its own right; for example, in patients whose problems appear to be all in the mind. Whilst a placebo is by definition harmless and the placebo effect is normally therapeutic, the practice is ethically dubious because the patient is being deceived into believing that the treatment is authentic. The person prescribing the placebo may hold the view that the treatment can be justified as long as it leads to an improvement in the patient's health. However, benevolent efforts of this type are based on a deception that could, if it came to light, jeopardize the relationship between the physician and the patient. It is a small step between prescribing a placebo and believing that the physician always knows best, thereby denying patients the right to judge for themselves what is best for their own bodies. Whilst it is entirely proper for healthcare professionals to act at all times in patients' best interests, honesty is usually the best policy where medical treatments are concerned, in which case dummy pills have no place in modern medicine outside of clinical trials. On the other hand, complementary medicine, whilst lacking scientific foundations, should not be considered unethical if it is able to demonstrate therapeutic benefits, even if only a placebo effect, as long as patients are not given false hopes nor hold unrealistic expectations, and are aware that the treatment remains unproven. | There are personal accounts of complementary and alternative medicine being successful. | e
id_2119 | Dummy pills: There is an ongoing debate about the merits and the ethics of using placebos, sometimes called sugar pills. The placebo effect is well documented though not completely understood. It refers to the apparent benefits, both psychological and physiological, of taking a medication or receiving a treatment that you expect will improve your health, when in fact the tablet contains no active ingredients and the treatment has never been proven. Any benefit that arises from a placebo originates solely in the mind of the person taking it. The therapeutic effect can be either real and measurable or perceived and imagined. The placebo effect is a headache for drug manufacturers. Guinea pig patients, that is to say, those who volunteer for a new treatment, may show positive health gains from the placebo effect that masks the response to the treatment. This has led to the introduction of double-blind trials: experiments where neither the patient nor the healthcare professional observing the patient knows whether a placebo has been used or not. So, for example, in a randomized controlled trial (RCT), patients are selected at random and half the patients are given the new medication and half are given a placebo tablet that looks just the same. The observer is also blind to the treatment to avoid bias. If the observer knows which patients are receiving the real treatment they may be tempted to look harder for greater health improvements in these people in comparison with those on the placebo. Whilst the case for placebos in drug trials appears to be justified, there are ethical issues to consider when using placebos. In particular, the need to discontinue placebos in clinical trials in favour of real medication that is found to work, and whether a placebo should ever be prescribed in place of a real treatment without the patient ever knowing. 
In the first circumstance, it would be unethical to deny patients a new and effective treatment in a clinical trial and also unethical to stop patients from taking their existing tablets so that they can enter a trial. These two ethical perspectives are easy to understand. What is perhaps less clear is the distinction between a placebo that may have therapeutic value and a quack cure which makes claims without any supporting evidence. Quackery was at its height at the end of the nineteenth century, when so-called men of medicine peddled fake remedies claiming that all manner of diseases and afflictions could be cured. The modern equivalent of these quack cures is complementary and alternative medicine (CAM), which is unable to substantiate the claims it makes. There are dozens of these treatments, though the best-known are perhaps acupuncture, homeopathy, osteopathy and reflexology. There is anecdotal evidence from patients that these treatments are effective but no scientific basis to support the evidence. Whilst recipients of complementary and alternative medicine (CAM) can find the treatment to be therapeutic, it is not possible to distinguish these benefits from the placebo effect. Consequently it is important not to turn to alternative therapies too early but to adhere to modern scientific treatments. Complementary therapies are by definition intended to be used alongside traditional medicine as an adjunct treatment to obtain, at the very least, a placebo effect. With either complementary or alternative therapies the patient may notice an improvement in their health and link it with the therapy, when in fact it is the psychological benefit derived from a bit of pampering in a relaxing environment that has led to feelings of improvement, or it could be nature taking its course. Patients enter into a clinical trial in the full knowledge that they have a 50/50 chance of receiving the new drug or the placebo. 
An ethical dilemma arises when a placebo is considered as a treatment in its own right; for example, in patients whose problems appear to be all in the mind. Whilst a placebo is by definition harmless and the placebo effect is normally therapeutic, the practice is ethically dubious because the patient is being deceived into believing that the treatment is authentic. The person prescribing the placebo may hold the view that the treatment can be justified as long as it leads to an improvement in the patient's health. However, benevolent efforts of this type are based on a deception that could, if it came to light, jeopardize the relationship between the physician and the patient. It is a small step between prescribing a placebo and believing that the physician always knows best, thereby denying patients the right to judge for themselves what is best for their own bodies. Whilst it is entirely proper for healthcare professionals to act at all times in patients' best interests, honesty is usually the best policy where medical treatments are concerned, in which case dummy pills have no place in modern medicine outside of clinical trials. On the other hand, complementary medicine, whilst lacking scientific foundations, should not be considered unethical if it is able to demonstrate therapeutic benefits, even if only a placebo effect, as long as patients are not given false hopes nor hold unrealistic expectations, and are aware that the treatment remains unproven. | The author states that quack cures can be likened to complementary and alternative medicine (CAM). | e
id_2120 | Dummy pills: There is an ongoing debate about the merits and the ethics of using placebos, sometimes called sugar pills. The placebo effect is well documented though not completely understood. It refers to the apparent benefits, both psychological and physiological, of taking a medication or receiving a treatment that you expect will improve your health, when in fact the tablet contains no active ingredients and the treatment has never been proven. Any benefit that arises from a placebo originates solely in the mind of the person taking it. The therapeutic effect can be either real and measurable or perceived and imagined. The placebo effect is a headache for drug manufacturers. Guinea pig patients, that is to say, those who volunteer for a new treatment, may show positive health gains from the placebo effect that masks the response to the treatment. This has led to the introduction of double-blind trials: experiments where neither the patient nor the healthcare professional observing the patient knows whether a placebo has been used or not. So, for example, in a randomized controlled trial (RCT), patients are selected at random and half the patients are given the new medication and half are given a placebo tablet that looks just the same. The observer is also blind to the treatment to avoid bias. If the observer knows which patients are receiving the real treatment they may be tempted to look harder for greater health improvements in these people in comparison with those on the placebo. Whilst the case for placebos in drug trials appears to be justified, there are ethical issues to consider when using placebos. In particular, the need to discontinue placebos in clinical trials in favour of real medication that is found to work, and whether a placebo should ever be prescribed in place of a real treatment without the patient ever knowing. 
In the first circumstance, it would be unethical to deny patients a new and effective treatment in a clinical trial and also unethical to stop patients from taking their existing tablets so that they can enter a trial. These two ethical perspectives are easy to understand. What is perhaps less clear is the distinction between a placebo that may have therapeutic value and a quack cure which makes claims without any supporting evidence. Quackery was at its height at the end of the nineteenth century, when so-called men of medicine peddled fake remedies claiming that all manner of diseases and afflictions could be cured. The modern equivalent of these quack cures is complementary and alternative medicine (CAM), which is unable to substantiate the claims it makes. There are dozens of these treatments, though the best-known are perhaps acupuncture, homeopathy, osteopathy and reflexology. There is anecdotal evidence from patients that these treatments are effective but no scientific basis to support the evidence. Whilst recipients of complementary and alternative medicine (CAM) can find the treatment to be therapeutic, it is not possible to distinguish these benefits from the placebo effect. Consequently it is important not to turn to alternative therapies too early but to adhere to modern scientific treatments. Complementary therapies are by definition intended to be used alongside traditional medicine as an adjunct treatment to obtain, at the very least, a placebo effect. With either complementary or alternative therapies the patient may notice an improvement in their health and link it with the therapy, when in fact it is the psychological benefit derived from a bit of pampering in a relaxing environment that has led to feelings of improvement, or it could be nature taking its course. Patients enter into a clinical trial in the full knowledge that they have a 50/50 chance of receiving the new drug or the placebo. 
An ethical dilemma arises when a placebo is considered as a treatment in its own right; for example, in patients whose problems appear to be all in the mind. Whilst a placebo is by definition harmless and the placebo effect is normally therapeutic, the practice is ethically dubious because the patient is being deceived into believing that the treatment is authentic. The person prescribing the placebo may hold the view that the treatment can be justified as long as it leads to an improvement in the patient's health. However, benevolent efforts of this type are based on a deception that could, if it came to light, jeopardize the relationship between the physician and the patient. It is a small step between prescribing a placebo and believing that the physician always knows best, thereby denying patients the right to judge for themselves what is best for their own bodies. Whilst it is entirely proper for healthcare professionals to act at all times in patients' best interests, honesty is usually the best policy where medical treatments are concerned, in which case dummy pills have no place in modern medicine outside of clinical trials. On the other hand, complementary medicine, whilst lacking scientific foundations, should not be considered unethical if it is able to demonstrate therapeutic benefits, even if only a placebo effect, as long as patients are not given false hopes nor hold unrealistic expectations, and are aware that the treatment remains unproven. | There can be risks associated with alternative therapies. | n
id_2121 | During the 1960s and 70s, terrorism was a contemporary subject due to the conflict between the UK government and the IRA. The Troubles (the name given to the violence) originated in the 1920s and eventually resulted in bombings on the streets of Northern Ireland and occasionally in England. The government felt that in order to prevent mayhem, their actions needed to be swift and decisive. Thus, a series of temporary measures were initiated; policemen and soldiers all over Ulster were given the right to stop, question, search and arrest members of the public. In 2000, the Terrorism Act 2000 was passed as a definitive measure following twenty years of temporary measures. Policemen were given wider stop and search powers and enabled to detain suspects for up to 48 hours without charge. The Act was met with strong criticism as it outlawed certain Islamic fundamentalist groups and this was seen as a portrayal of Islam as a religion that fuels terrorism. This, in turn, made it likely that discrimination would occur in the form of the disproportionate stopping and searching of Asians who were thought to look Muslim. Although, prior to the September 2001 attacks on Washington D.C., government legislation in the UK had attempted to prevent the occurrence of terrorism, the counter-terrorism strategies had focused a lot of attention on the punishment of terrorists and the criminalisation of new offences following their occurrence. However, the Anti-Terrorism Crime and Security Act 2001 (ATCSA) marked a firmer move towards the management of anticipatory risk (Piazza, Walsh, 2010), which was to characterise the counter-terrorism legislation of the 21st century. | The act targets preventing rather than punishing terrorism acts. | e
id_2122 | During the 1960s and 70s, terrorism was a contemporary subject due to the conflict between the UK government and the IRA. The Troubles (the name given to the violence) originated in the 1920s and eventually resulted in bombings on the streets of Northern Ireland and occasionally in England. The government felt that in order to prevent mayhem, their actions needed to be swift and decisive. Thus, a series of temporary measures were initiated; policemen and soldiers all over Ulster were given the right to stop, question, search and arrest members of the public. In 2000, the Terrorism Act 2000 was passed as a definitive measure following twenty years of temporary measures. Policemen were given wider stop and search powers and enabled to detain suspects for up to 48 hours without charge. The Act was met with strong criticism as it outlawed certain Islamic fundamentalist groups and this was seen as a portrayal of Islam as a religion that fuels terrorism. This, in turn, made it likely that discrimination would occur in the form of the disproportionate stopping and searching of Asians who were thought to look Muslim. Although, prior to the September 2001 attacks on Washington D.C., government legislation in the UK had attempted to prevent the occurrence of terrorism, the counter-terrorism strategies had focused a lot of attention on the punishment of terrorists and the criminalisation of new offences following their occurrence. However, the Anti-Terrorism Crime and Security Act 2001 (ATCSA) marked a firmer move towards the management of anticipatory risk (Piazza, Walsh, 2010), which was to characterise the counter-terrorism legislation of the 21st century. | Terrorism management was a big problem between 1980 and 2000. | e
id_2123 | During the 1960s and 70s, terrorism was a contemporary subject due to the conflict between the UK government and the IRA. The Troubles (the name given to the violence) originated in the 1920s and eventually resulted in bombings on the streets of Northern Ireland and occasionally in England. The government felt that in order to prevent mayhem, their actions needed to be swift and decisive. Thus, a series of temporary measures were initiated; policemen and soldiers all over Ulster were given the right to stop, question, search and arrest members of the public. In 2000, the Terrorism Act 2000 was passed as a definitive measure following twenty years of temporary measures. Policemen were given wider stop and search powers and enabled to detain suspects for up to 48 hours without charge. The Act was met with strong criticism as it outlawed certain Islamic fundamentalist groups and this was seen as a portrayal of Islam as a religion that fuels terrorism. This, in turn, made it likely that discrimination would occur in the form of the disproportionate stopping and searching of Asians who were thought to look Muslim. Although, prior to the September 2001 attacks on Washington D.C., government legislation in the UK had attempted to prevent the occurrence of terrorism, the counter-terrorism strategies had focused a lot of attention on the punishment of terrorists and the criminalisation of new offences following their occurrence. However, the Anti-Terrorism Crime and Security Act 2001 (ATCSA) marked a firmer move towards the management of anticipatory risk (Piazza, Walsh, 2010), which was to characterise the counter-terrorism legislation of the 21st century. | In the 1960s, policemen and soldiers all over Ulster were given the right to stop, question, search and arrest members of the public. | c
id_2124 | During the 1960s and 70s, terrorism was a contemporary subject due to the conflict between the UK government and the IRA. The Troubles (the name given to the violence) originated in the 1920s and eventually resulted in bombings on the streets of Northern Ireland and occasionally in England. The government felt that in order to prevent mayhem, their actions needed to be swift and decisive. Thus, a series of temporary measures were initiated; policemen and soldiers all over Ulster were given the right to stop, question, search and arrest members of the public. In 2000, the Terrorism Act 2000 was passed as a definitive measure following twenty years of temporary measures. Policemen were given wider stop and search powers and enabled to detain suspects for up to 48 hours without charge. The Act was met with strong criticism as it outlawed certain Islamic fundamentalist groups and this was seen as a portrayal of Islam as a religion that fuels terrorism. This, in turn, made it likely that discrimination would occur in the form of the disproportionate stopping and searching of Asians who were thought to look Muslim. Although, prior to the September 2001 attacks on Washington D.C., government legislation in the UK had attempted to prevent the occurrence of terrorism, the counter-terrorism strategies had focused a lot of attention on the punishment of terrorists and the criminalisation of new offences following their occurrence. However, the Anti-Terrorism Crime and Security Act 2001 (ATCSA) marked a firmer move towards the management of anticipatory risk (Piazza, Walsh, 2010), which was to characterise the counter-terrorism legislation of the 21st century. | The Terrorism Act 2000 is still implemented today. | n
id_2125 | During the 1960s and 70s, terrorism was a contemporary subject due to the conflict between the UK government and the IRA. The Troubles (the name given to the violence) originated in the 1920s and eventually resulted in bombings on the streets of Northern Ireland and occasionally in England. The government felt that in order to prevent mayhem, their actions needed to be swift and decisive. Thus, a series of temporary measures were initiated; policemen and soldiers all over Ulster were given the right to stop, question, search and arrest members of the public. In 2000, the Terrorism Act 2000 was passed as a definitive measure following twenty years of temporary measures. Policemen were given wider stop and search powers and enabled to detain suspects for up to 48 hours without charge. The Act was met with strong criticism as it outlawed certain Islamic fundamentalist groups and this was seen as a portrayal of Islam as a religion that fuels terrorism. This, in turn, made it likely that discrimination would occur in the form of the disproportionate stopping and searching of Asians who were thought to look Muslim. Although, prior to the September 2001 attacks on Washington D.C., government legislation in the UK had attempted to prevent the occurrence of terrorism, the counter-terrorism strategies had focused a lot of attention on the punishment of terrorists and the criminalisation of new offences following their occurrence. However, the Anti-Terrorism Crime and Security Act 2001 (ATCSA) marked a firmer move towards the management of anticipatory risk (Piazza, Walsh, 2010), which was to characterise the counter-terrorism legislation of the 21st century. | The act in 2000 was passed in an effort to combat the threat of Islamic extremists. | n
id_2126 | During the past year, Zoe read more books than Jane. Jane read fewer books than Heather. | Heather read more books than Zoe. | n |
id_2127 | E-learning systems have recently been used by major corporations. These systems, in most cases, are used as an economical replacement for the long-established leadership development programs. With the leadership role becoming increasingly complex on the one hand and budgets for one-on-one or group training shrinking on the other, e-learning is a simple and cost-effective solution. By providing interactive courses using the Internet infrastructure directly to one's personal computer and supporting the learner at their own pace, e-learning can, at times, be more effective than other conventional forms of training. The content of such training varies significantly but is generally used for the adoption of new technologies, enhancing managerial capabilities and improving financial planning. | The internet infrastructure facilitates e-learning courses. | e
id_2128 | E-learning systems have recently been used by major corporations. These systems, in most cases, are used as an economical replacement for the long-established leadership development programs. With the leadership role becoming increasingly complex on the one hand and budgets for one-on-one or group training shrinking on the other, e-learning is a simple and cost-effective solution. By providing interactive courses using the Internet infrastructure directly to one's personal computer and supporting the learner at their own pace, e-learning can, at times, be more effective than other conventional forms of training. The content of such training varies significantly but is generally used for the adoption of new technologies, enhancing managerial capabilities and improving financial planning. | One reason to cut down conventional training is the escalating costs of manpower. | n
id_2129 | E-learning systems have recently been used by major corporations. These systems, in most cases, are used as an economical replacement for the long-established leadership development programs. With the leadership role becoming increasingly complex on the one hand and budgets for one-on-one or group training shrinking on the other, e-learning is a simple and cost-effective solution. By providing interactive courses using the Internet infrastructure directly to one's personal computer and supporting the learner at their own pace, e-learning can, at times, be more effective than other conventional forms of training. The content of such training varies significantly but is generally used for the adoption of new technologies, enhancing managerial capabilities and improving financial planning. | E-learning is better than other conventional training methods. | n
id_2130 | EASTERN ENERGY: We are here to help and provide you with personal advice on any matters connected with your bill or any other queries regarding your gas and electricity supply. Moving home: Please give as much notice as possible if you are moving home, but at least 48 hours is required for us to make the necessary arrangements for your gas and electricity supply. Please telephone our 24-hour line on 01316 753219 with details of your move. In most cases we are happy to accept your meter reading on the day you move. Tell the new occupant that Eastern Energy supply the household, to ensure the service is not interrupted. Remember we can now supply electricity and gas at your new address, anywhere in the UK. If you do not contact us, you may be held responsible for the payment for electricity used after you have moved. Meter reading: Eastern Energy uses various types of meter ranging from the traditional dial meter to new technology digital display meters. Always read the meter from left to right, ignoring any red dials. If you require assistance, contact our 24-hour line on 0600 7310 310. Energy Efficiency Line: If you would like advice on the efficient use of energy, please call our Energy Efficiency Line on 0995 7626 513. Please do not use this number for any other enquiries. Special services: Passwords. You can choose a password so that, whenever we visit you at home, you will know it is us. For more information, ring our helpline on 0995 7290 290. Help and advice: If you need help or advice with any issues, please contact us on 01316 440188. Complaints: We hope you will never have a problem or cause to complain, but, if you do, please contact our complaints handling team at PO Box 220, Stanfield, ST55 6GF or telephone us on 01316 753270. Supply failure: If you experience any problems with your electricity supply, please call free on 0600 7838 836, 24 hours a day, seven days a week. | Eastern Energy supplies energy to households throughout the country. | e
id_2131 | EASTERN ENERGY: We are here to help and provide you with personal advice on any matters connected with your bill or any other queries regarding your gas and electricity supply. Moving home: Please give as much notice as possible if you are moving home, but at least 48 hours is required for us to make the necessary arrangements for your gas and electricity supply. Please telephone our 24-hour line on 01316 753219 with details of your move. In most cases we are happy to accept your meter reading on the day you move. Tell the new occupant that Eastern Energy supply the household, to ensure the service is not interrupted. Remember we can now supply electricity and gas at your new address, anywhere in the UK. If you do not contact us, you may be held responsible for the payment for electricity used after you have moved. Meter reading: Eastern Energy uses various types of meter ranging from the traditional dial meter to new technology digital display meters. Always read the meter from left to right, ignoring any red dials. If you require assistance, contact our 24-hour line on 0600 7310 310. Energy Efficiency Line: If you would like advice on the efficient use of energy, please call our Energy Efficiency Line on 0995 7626 513. Please do not use this number for any other enquiries. Special services: Passwords. You can choose a password so that, whenever we visit you at home, you will know it is us. For more information, ring our helpline on 0995 7290 290. Help and advice: If you need help or advice with any issues, please contact us on 01316 440188. Complaints: We hope you will never have a problem or cause to complain, but, if you do, please contact our complaints handling team at PO Box 220, Stanfield, ST55 6GF or telephone us on 01316 753270. Supply failure: If you experience any problems with your electricity supply, please call free on 0600 7838 836, 24 hours a day, seven days a week. | The Energy Efficiency Line also handles queries about energy supply. | c
id_2132 | EASTERN ENERGY: We are here to help and provide you with personal advice on any matters connected with your bill or any other queries regarding your gas and electricity supply. Moving home: Please give as much notice as possible if you are moving home, but at least 48 hours is required for us to make the necessary arrangements for your gas and electricity supply. Please telephone our 24-hour line on 01316 753219 with details of your move. In most cases we are happy to accept your meter reading on the day you move. Tell the new occupant that Eastern Energy supply the household, to ensure the service is not interrupted. Remember we can now supply electricity and gas at your new address, anywhere in the UK. If you do not contact us, you may be held responsible for the payment for electricity used after you have moved. Meter reading: Eastern Energy uses various types of meter ranging from the traditional dial meter to new technology digital display meters. Always read the meter from left to right, ignoring any red dials. If you require assistance, contact our 24-hour line on 0600 7310 310. Energy Efficiency Line: If you would like advice on the efficient use of energy, please call our Energy Efficiency Line on 0995 7626 513. Please do not use this number for any other enquiries. Special services: Passwords. You can choose a password so that, whenever we visit you at home, you will know it is us. For more information, ring our helpline on 0995 7290 290. Help and advice: If you need help or advice with any issues, please contact us on 01316 440188. Complaints: We hope you will never have a problem or cause to complain, but, if you do, please contact our complaints handling team at PO Box 220, Stanfield, ST55 6GF or telephone us on 01316 753270. Supply failure: If you experience any problems with your electricity supply, please call free on 0600 7838 836, 24 hours a day, seven days a week. | Customers should inform Eastern Energy of a change of address on arrival at their new home. | c
id_2133 | EASTERN ENERGY: We are here to help and provide you with personal advice on any matters connected with your bill or any other queries regarding your gas and electricity supply. Moving home: Please give as much notice as possible if you are moving home, but at least 48 hours is required for us to make the necessary arrangements for your gas and electricity supply. Please telephone our 24-hour line on 01316 753219 with details of your move. In most cases we are happy to accept your meter reading on the day you move. Tell the new occupant that Eastern Energy supply the household, to ensure the service is not interrupted. Remember we can now supply electricity and gas at your new address, anywhere in the UK. If you do not contact us, you may be held responsible for the payment for electricity used after you have moved. Meter reading: Eastern Energy uses various types of meter ranging from the traditional dial meter to new technology digital display meters. Always read the meter from left to right, ignoring any red dials. If you require assistance, contact our 24-hour line on 0600 7310 310. Energy Efficiency Line: If you would like advice on the efficient use of energy, please call our Energy Efficiency Line on 0995 7626 513. Please do not use this number for any other enquiries. Special services: Passwords. You can choose a password so that, whenever we visit you at home, you will know it is us. For more information, ring our helpline on 0995 7290 290. Help and advice: If you need help or advice with any issues, please contact us on 01316 440188. Complaints: We hope you will never have a problem or cause to complain, but, if you do, please contact our complaints handling team at PO Box 220, Stanfield, ST55 6GF or telephone us on 01316 753270. Supply failure: If you experience any problems with your electricity supply, please call free on 0600 7838 836, 24 hours a day, seven days a week. | All complaints about energy supply should be made by phone. | c
id_2134 | EASTERN ENERGY: We are here to help and provide you with personal advice on any matters connected with your bill or any other queries regarding your gas and electricity supply. Moving home: Please give as much notice as possible if you are moving home, but at least 48 hours is required for us to make the necessary arrangements for your gas and electricity supply. Please telephone our 24-hour line on 01316 753219 with details of your move. In most cases we are happy to accept your meter reading on the day you move. Tell the new occupant that Eastern Energy supply the household, to ensure the service is not interrupted. Remember we can now supply electricity and gas at your new address, anywhere in the UK. If you do not contact us, you may be held responsible for the payment for electricity used after you have moved. Meter reading: Eastern Energy uses various types of meter ranging from the traditional dial meter to new technology digital display meters. Always read the meter from left to right, ignoring any red dials. If you require assistance, contact our 24-hour line on 0600 7310 310. Energy Efficiency Line: If you would like advice on the efficient use of energy, please call our Energy Efficiency Line on 0995 7626 513. Please do not use this number for any other enquiries. Special services: Passwords. You can choose a password so that, whenever we visit you at home, you will know it is us. For more information, ring our helpline on 0995 7290 290. Help and advice: If you need help or advice with any issues, please contact us on 01316 440188. Complaints: We hope you will never have a problem or cause to complain, but, if you do, please contact our complaints handling team at PO Box 220, Stanfield, ST55 6GF or telephone us on 01316 753270. Supply failure: If you experience any problems with your electricity supply, please call free on 0600 7838 836, 24 hours a day, seven days a week. | Customers are expected to read their own gas or electricity meters. | e
id_2135 | EASTERN ENERGY: We are here to help and provide you with personal advice on any matters connected with your bill or any other queries regarding your gas and electricity supply. Moving home: Please give as much notice as possible if you are moving home, but at least 48 hours is required for us to make the necessary arrangements for your gas and electricity supply. Please telephone our 24-hour line on 01316 753219 with details of your move. In most cases we are happy to accept your meter reading on the day you move. Tell the new occupant that Eastern Energy supply the household, to ensure the service is not interrupted. Remember we can now supply electricity and gas at your new address, anywhere in the UK. If you do not contact us, you may be held responsible for the payment for electricity used after you have moved. Meter reading: Eastern Energy uses various types of meter ranging from the traditional dial meter to new technology digital display meters. Always read the meter from left to right, ignoring any red dials. If you require assistance, contact our 24-hour line on 0600 7310 310. Energy Efficiency Line: If you would like advice on the efficient use of energy, please call our Energy Efficiency Line on 0995 7626 513. Please do not use this number for any other enquiries. Special services: Passwords. You can choose a password so that, whenever we visit you at home, you will know it is us. For more information, ring our helpline on 0995 7290 290. Help and advice: If you need help or advice with any issues, please contact us on 01316 440188. Complaints: We hope you will never have a problem or cause to complain, but, if you do, please contact our complaints handling team at PO Box 220, Stanfield, ST55 6GF or telephone us on 01316 753270. Supply failure: If you experience any problems with your electricity supply, please call free on 0600 7838 836, 24 hours a day, seven days a week. | Customers are not charged for the call when they report a fault in supply. | e
id_2136 | EASTERN ENERGY We are here to help and provide you with personal advice on any matters connected with your bill or any other queries regarding your gas and electricity supply. Moving home Please give as much notice as possible if you are moving home, but at least 48 hours is required for us to make the necessary arrangements for your gas and electricity supply. Please telephone our 24-hour line on 01316 753219 with details of your move. In most cases we are happy to accept your meter reading on the day you move. Tell the new occupant that Eastern Energy supply the household, to ensure the service is not interrupted. Remember we can now supply electricity and gas at your new address, anywhere in the UK. If you do not contact us, you may be held responsible for the payment for electricity used after you have moved. Meter reading Eastern Energy uses various types of meter ranging from the traditional dial meter to new technology digital display meters. Always read the meter from left to right, ignoring any red dials. If you require assistance, contact our 24-hour line on 0600 7310 310. Energy Efficiency Line If you would like advice on the efficient use of energy, please call our Energy Efficiency Line on 0995 7626 513. Please do not use this number for any other enquiries. Special services Passwords you can choose a password so that, whenever we visit you at home, you will know it is us. For more information, ring our helpline on 0995 7290 290. Help and advice If you need help or advice with any issues, please contact us on 01316 440188. Complaints We hope you will never have a problem or cause to complain, but, if you do, please contact our complaints handling team at PO Box 220, Stanfield, ST55 6GF or telephone us on 01316 753270. Supply failure If you experience any problems with your electricity supply, please call free on 0600 7838 836,24 hours a day, seven days a week. | It is now cheaper to use gas rather than electricity as a form of heating. | n |
id_2137 | EDUCATING PSYCHE. Educating Psyche by Bernie Neville is a book which looks at radical new approaches to learning, describing the effects of emotion, imagination and the unconscious on learning. One theory discussed in the book is that proposed by George Lozanov, which focuses on the power of suggestion. Lozanovs instructional technique is based on the evidence that the connections made in the brain through unconscious processing (which he calls non-specific mental reactivity) are more durable than those made through conscious processing. Besides the laboratory evidence for this, we know from our experience that we often remember what we have perceived peripherally, long after we have forgotten what we set out to learn. If we think of a book we studied months or years ago, we will find it easier to recall peripheral details the colour, the binding, the typeface, the table at the library where we sat while studying it than the content on which we were concentrating. If we think of a lecture we listened to with great concentration, we will recall the lecturers appearance and mannerisms, our place in the auditorium, the failure of the air-conditioning, much more easily than the ideas we went to learn. Even if these peripheral details are a bit elusive, they come back readily in hypnosis or when we relive the event imaginatively, as in psychodrama. The details of the content of the lecture, on the other hand, seem to have gone forever. This phenomenon can be partly attributed to the common counterproductive approach to study (making extreme efforts to memorise, tensing muscles, inducing fatigue), but it also simply reflects the way the brain functions. Lozanov therefore made indirect instruction (suggestion) central to his teaching system. In suggestopedia, as he called his method, consciousness is shifted away from the curriculum to focus on something peripheral. The curriculum then becomes peripheral and is dealt with by the reserve capacity of the brain. 
The suggestopedic approach to foreign language learning provides a good illustration. In its most recent variant (1980), it consists of the reading of vocabulary and text while the class is listening to music. The first session is in two parts. In the first part, the music is classical (Mozart, Beethoven, Brahms) and the teacher reads the text slowly and solemnly, with attention to the dynamics of the music. The students follow the text in their books. This is followed by several minutes of silence. In the second part, they listen to baroque music (Bach, Corelli, Handel) while the teacher reads the text in a normal speaking voice. During this time they have their books closed. During the whole of this session, their attention is passive; they listen to the music but make no attempt to learn the material. Beforehand, the students have been carefully prepared for the language learning experience. Through meeting with the staff and satisfied students they develop the expectation that learning will be easy and pleasant and that they will successfully learn several hundred words of the foreign language during the class. In a preliminary talk, the teacher introduces them to the material to be covered, but does not teach it. Likewise, the students are instructed not to try to learn it during this introduction. Some hours after the two-part session, there is a follow-up class at which the students are stimulated to recall the material presented. Once again the approach is indirect. The students do not focus their attention on trying to remember the vocabulary, but focus on using the language to communicate (e. g. through games or improvised dramatisations). Such methods are not unusual in language teaching. What is distinctive in the suggestopedic method is that they are devoted entirely to assisting recall. The learning of the material is assumed to be automatic and effortless, accomplished while listening to music. 
The teachers task is to assist the students to apply what they have learned paraconsciously, and in doing so to make it easily accessible to consciousness. Another difference from conventional teaching is the evidence that students can regularly learn 1000 new words of a foreign language during a suggestopedic session, as well as grammar and idiom. Lozanov experimented with teaching by direct suggestion during sleep, hypnosis and trance states, but found such procedures unnecessary. Hypnosis, yoga, Silva mind-control, religious ceremonies and faith healing are all associated with successful suggestion, but none of their techniques seem to be essential to it. Such rituals may be seen as placebos. Lozanov acknowledges that the ritual surrounding suggestion in his own system is also a placebo, but maintains that without such a placebo people are unable or afraid to tap the reserve capacity of their brains. Like any placebo, it must be dispensed with authority to be effective. Just as a doctor calls on the full power of autocratic suggestion by insisting that the patient take precisely this white capsule precisely three times a day before meals, Lozanov is categoric in insisting that the suggestopedic session be conducted exactly in the manner designated, by trained and accredited suggestopedic teachers. While suggestopedia has gained some notoriety through success in the teaching of modern languages, few teachers are able to emulate the spectacular results of Lozanov and his associates. We can, perhaps, attribute mediocre results to an inadequate placebo effect. The students have not developed the appropriate mind set. They are often not motivated to learn through this method. They do not have enough faith. They do not see it as real teaching, especially as it does not seem to involve the work they have learned to believe is essential to learning. | As an indirect benefit, students notice improvements in their memory. | n |
id_2138 | EDUCATING PSYCHE. Educating Psyche by Bernie Neville is a book which looks at radical new approaches to learning, describing the effects of emotion, imagination and the unconscious on learning. One theory discussed in the book is that proposed by George Lozanov, which focuses on the power of suggestion. Lozanovs instructional technique is based on the evidence that the connections made in the brain through unconscious processing (which he calls non-specific mental reactivity) are more durable than those made through conscious processing. Besides the laboratory evidence for this, we know from our experience that we often remember what we have perceived peripherally, long after we have forgotten what we set out to learn. If we think of a book we studied months or years ago, we will find it easier to recall peripheral details the colour, the binding, the typeface, the table at the library where we sat while studying it than the content on which we were concentrating. If we think of a lecture we listened to with great concentration, we will recall the lecturers appearance and mannerisms, our place in the auditorium, the failure of the air-conditioning, much more easily than the ideas we went to learn. Even if these peripheral details are a bit elusive, they come back readily in hypnosis or when we relive the event imaginatively, as in psychodrama. The details of the content of the lecture, on the other hand, seem to have gone forever. This phenomenon can be partly attributed to the common counterproductive approach to study (making extreme efforts to memorise, tensing muscles, inducing fatigue), but it also simply reflects the way the brain functions. Lozanov therefore made indirect instruction (suggestion) central to his teaching system. In suggestopedia, as he called his method, consciousness is shifted away from the curriculum to focus on something peripheral. The curriculum then becomes peripheral and is dealt with by the reserve capacity of the brain. 
The suggestopedic approach to foreign language learning provides a good illustration. In its most recent variant (1980), it consists of the reading of vocabulary and text while the class is listening to music. The first session is in two parts. In the first part, the music is classical (Mozart, Beethoven, Brahms) and the teacher reads the text slowly and solemnly, with attention to the dynamics of the music. The students follow the text in their books. This is followed by several minutes of silence. In the second part, they listen to baroque music (Bach, Corelli, Handel) while the teacher reads the text in a normal speaking voice. During this time they have their books closed. During the whole of this session, their attention is passive; they listen to the music but make no attempt to learn the material. Beforehand, the students have been carefully prepared for the language learning experience. Through meeting with the staff and satisfied students they develop the expectation that learning will be easy and pleasant and that they will successfully learn several hundred words of the foreign language during the class. In a preliminary talk, the teacher introduces them to the material to be covered, but does not teach it. Likewise, the students are instructed not to try to learn it during this introduction. Some hours after the two-part session, there is a follow-up class at which the students are stimulated to recall the material presented. Once again the approach is indirect. The students do not focus their attention on trying to remember the vocabulary, but focus on using the language to communicate (e. g. through games or improvised dramatisations). Such methods are not unusual in language teaching. What is distinctive in the suggestopedic method is that they are devoted entirely to assisting recall. The learning of the material is assumed to be automatic and effortless, accomplished while listening to music. 
The teachers task is to assist the students to apply what they have learned paraconsciously, and in doing so to make it easily accessible to consciousness. Another difference from conventional teaching is the evidence that students can regularly learn 1000 new words of a foreign language during a suggestopedic session, as well as grammar and idiom. Lozanov experimented with teaching by direct suggestion during sleep, hypnosis and trance states, but found such procedures unnecessary. Hypnosis, yoga, Silva mind-control, religious ceremonies and faith healing are all associated with successful suggestion, but none of their techniques seem to be essential to it. Such rituals may be seen as placebos. Lozanov acknowledges that the ritual surrounding suggestion in his own system is also a placebo, but maintains that without such a placebo people are unable or afraid to tap the reserve capacity of their brains. Like any placebo, it must be dispensed with authority to be effective. Just as a doctor calls on the full power of autocratic suggestion by insisting that the patient take precisely this white capsule precisely three times a day before meals, Lozanov is categoric in insisting that the suggestopedic session be conducted exactly in the manner designated, by trained and accredited suggestopedic teachers. While suggestopedia has gained some notoriety through success in the teaching of modern languages, few teachers are able to emulate the spectacular results of Lozanov and his associates. We can, perhaps, attribute mediocre results to an inadequate placebo effect. The students have not developed the appropriate mind set. They are often not motivated to learn through this method. They do not have enough faith. They do not see it as real teaching, especially as it does not seem to involve the work they have learned to believe is essential to learning. | In the follow-up class, the teaching activities are similar to those used in conventional classes. | e |
id_2139 | EDUCATING PSYCHE. Educating Psyche by Bernie Neville is a book which looks at radical new approaches to learning, describing the effects of emotion, imagination and the unconscious on learning. One theory discussed in the book is that proposed by George Lozanov, which focuses on the power of suggestion. Lozanovs instructional technique is based on the evidence that the connections made in the brain through unconscious processing (which he calls non-specific mental reactivity) are more durable than those made through conscious processing. Besides the laboratory evidence for this, we know from our experience that we often remember what we have perceived peripherally, long after we have forgotten what we set out to learn. If we think of a book we studied months or years ago, we will find it easier to recall peripheral details the colour, the binding, the typeface, the table at the library where we sat while studying it than the content on which we were concentrating. If we think of a lecture we listened to with great concentration, we will recall the lecturers appearance and mannerisms, our place in the auditorium, the failure of the air-conditioning, much more easily than the ideas we went to learn. Even if these peripheral details are a bit elusive, they come back readily in hypnosis or when we relive the event imaginatively, as in psychodrama. The details of the content of the lecture, on the other hand, seem to have gone forever. This phenomenon can be partly attributed to the common counterproductive approach to study (making extreme efforts to memorise, tensing muscles, inducing fatigue), but it also simply reflects the way the brain functions. Lozanov therefore made indirect instruction (suggestion) central to his teaching system. In suggestopedia, as he called his method, consciousness is shifted away from the curriculum to focus on something peripheral. The curriculum then becomes peripheral and is dealt with by the reserve capacity of the brain. 
The suggestopedic approach to foreign language learning provides a good illustration. In its most recent variant (1980), it consists of the reading of vocabulary and text while the class is listening to music. The first session is in two parts. In the first part, the music is classical (Mozart, Beethoven, Brahms) and the teacher reads the text slowly and solemnly, with attention to the dynamics of the music. The students follow the text in their books. This is followed by several minutes of silence. In the second part, they listen to baroque music (Bach, Corelli, Handel) while the teacher reads the text in a normal speaking voice. During this time they have their books closed. During the whole of this session, their attention is passive; they listen to the music but make no attempt to learn the material. Beforehand, the students have been carefully prepared for the language learning experience. Through meeting with the staff and satisfied students they develop the expectation that learning will be easy and pleasant and that they will successfully learn several hundred words of the foreign language during the class. In a preliminary talk, the teacher introduces them to the material to be covered, but does not teach it. Likewise, the students are instructed not to try to learn it during this introduction. Some hours after the two-part session, there is a follow-up class at which the students are stimulated to recall the material presented. Once again the approach is indirect. The students do not focus their attention on trying to remember the vocabulary, but focus on using the language to communicate (e. g. through games or improvised dramatisations). Such methods are not unusual in language teaching. What is distinctive in the suggestopedic method is that they are devoted entirely to assisting recall. The learning of the material is assumed to be automatic and effortless, accomplished while listening to music. 
The teachers task is to assist the students to apply what they have learned paraconsciously, and in doing so to make it easily accessible to consciousness. Another difference from conventional teaching is the evidence that students can regularly learn 1000 new words of a foreign language during a suggestopedic session, as well as grammar and idiom. Lozanov experimented with teaching by direct suggestion during sleep, hypnosis and trance states, but found such procedures unnecessary. Hypnosis, yoga, Silva mind-control, religious ceremonies and faith healing are all associated with successful suggestion, but none of their techniques seem to be essential to it. Such rituals may be seen as placebos. Lozanov acknowledges that the ritual surrounding suggestion in his own system is also a placebo, but maintains that without such a placebo people are unable or afraid to tap the reserve capacity of their brains. Like any placebo, it must be dispensed with authority to be effective. Just as a doctor calls on the full power of autocratic suggestion by insisting that the patient take precisely this white capsule precisely three times a day before meals, Lozanov is categoric in insisting that the suggestopedic session be conducted exactly in the manner designated, by trained and accredited suggestopedic teachers. While suggestopedia has gained some notoriety through success in the teaching of modern languages, few teachers are able to emulate the spectacular results of Lozanov and his associates. We can, perhaps, attribute mediocre results to an inadequate placebo effect. The students have not developed the appropriate mind set. They are often not motivated to learn through this method. They do not have enough faith. They do not see it as real teaching, especially as it does not seem to involve the work they have learned to believe is essential to learning. | Teachers say they prefer suggestopedia to traditional approaches to language teaching. | n |
id_2140 | EDUCATING PSYCHE. Educating Psyche by Bernie Neville is a book which looks at radical new approaches to learning, describing the effects of emotion, imagination and the unconscious on learning. One theory discussed in the book is that proposed by George Lozanov, which focuses on the power of suggestion. Lozanovs instructional technique is based on the evidence that the connections made in the brain through unconscious processing (which he calls non-specific mental reactivity) are more durable than those made through conscious processing. Besides the laboratory evidence for this, we know from our experience that we often remember what we have perceived peripherally, long after we have forgotten what we set out to learn. If we think of a book we studied months or years ago, we will find it easier to recall peripheral details the colour, the binding, the typeface, the table at the library where we sat while studying it than the content on which we were concentrating. If we think of a lecture we listened to with great concentration, we will recall the lecturers appearance and mannerisms, our place in the auditorium, the failure of the air-conditioning, much more easily than the ideas we went to learn. Even if these peripheral details are a bit elusive, they come back readily in hypnosis or when we relive the event imaginatively, as in psychodrama. The details of the content of the lecture, on the other hand, seem to have gone forever. This phenomenon can be partly attributed to the common counterproductive approach to study (making extreme efforts to memorise, tensing muscles, inducing fatigue), but it also simply reflects the way the brain functions. Lozanov therefore made indirect instruction (suggestion) central to his teaching system. In suggestopedia, as he called his method, consciousness is shifted away from the curriculum to focus on something peripheral. The curriculum then becomes peripheral and is dealt with by the reserve capacity of the brain. 
The suggestopedic approach to foreign language learning provides a good illustration. In its most recent variant (1980), it consists of the reading of vocabulary and text while the class is listening to music. The first session is in two parts. In the first part, the music is classical (Mozart, Beethoven, Brahms) and the teacher reads the text slowly and solemnly, with attention to the dynamics of the music. The students follow the text in their books. This is followed by several minutes of silence. In the second part, they listen to baroque music (Bach, Corelli, Handel) while the teacher reads the text in a normal speaking voice. During this time they have their books closed. During the whole of this session, their attention is passive; they listen to the music but make no attempt to learn the material. Beforehand, the students have been carefully prepared for the language learning experience. Through meeting with the staff and satisfied students they develop the expectation that learning will be easy and pleasant and that they will successfully learn several hundred words of the foreign language during the class. In a preliminary talk, the teacher introduces them to the material to be covered, but does not teach it. Likewise, the students are instructed not to try to learn it during this introduction. Some hours after the two-part session, there is a follow-up class at which the students are stimulated to recall the material presented. Once again the approach is indirect. The students do not focus their attention on trying to remember the vocabulary, but focus on using the language to communicate (e. g. through games or improvised dramatisations). Such methods are not unusual in language teaching. What is distinctive in the suggestopedic method is that they are devoted entirely to assisting recall. The learning of the material is assumed to be automatic and effortless, accomplished while listening to music. 
The teachers task is to assist the students to apply what they have learned paraconsciously, and in doing so to make it easily accessible to consciousness. Another difference from conventional teaching is the evidence that students can regularly learn 1000 new words of a foreign language during a suggestopedic session, as well as grammar and idiom. Lozanov experimented with teaching by direct suggestion during sleep, hypnosis and trance states, but found such procedures unnecessary. Hypnosis, yoga, Silva mind-control, religious ceremonies and faith healing are all associated with successful suggestion, but none of their techniques seem to be essential to it. Such rituals may be seen as placebos. Lozanov acknowledges that the ritual surrounding suggestion in his own system is also a placebo, but maintains that without such a placebo people are unable or afraid to tap the reserve capacity of their brains. Like any placebo, it must be dispensed with authority to be effective. Just as a doctor calls on the full power of autocratic suggestion by insisting that the patient take precisely this white capsule precisely three times a day before meals, Lozanov is categoric in insisting that the suggestopedic session be conducted exactly in the manner designated, by trained and accredited suggestopedic teachers. While suggestopedia has gained some notoriety through success in the teaching of modern languages, few teachers are able to emulate the spectacular results of Lozanov and his associates. We can, perhaps, attribute mediocre results to an inadequate placebo effect. The students have not developed the appropriate mind set. They are often not motivated to learn through this method. They do not have enough faith. They do not see it as real teaching, especially as it does not seem to involve the work they have learned to believe is essential to learning. | Students in a suggestopedia class retain more new vocabulary than those in ordinary classes. | e |
id_2141 | EDUCATING PSYCHE. Educating Psyche by Bernie Neville is a book which looks at radical new approaches to learning, describing the effects of emotion, imagination and the unconscious on learning. One theory discussed in the book is that proposed by George Lozanov, which focuses on the power of suggestion. Lozanov's instructional technique is based on the evidence that the connections made in the brain through unconscious processing (which he calls non-specific mental reactivity) are more durable than those made through conscious processing. Besides the laboratory evidence for this, we know from our experience that we often remember what we have perceived peripherally, long after we have forgotten what we set out to learn. If we think of a book we studied months or years ago, we will find it easier to recall peripheral details - the colour, the binding, the typeface, the table at the library where we sat while studying it - than the content on which we were concentrating. If we think of a lecture we listened to with great concentration, we will recall the lecturer's appearance and mannerisms, our place in the auditorium, the failure of the air-conditioning, much more easily than the ideas we went to learn. Even if these peripheral details are a bit elusive, they come back readily in hypnosis or when we relive the event imaginatively, as in psychodrama. The details of the content of the lecture, on the other hand, seem to have gone forever. This phenomenon can be partly attributed to the common counterproductive approach to study (making extreme efforts to memorise, tensing muscles, inducing fatigue), but it also simply reflects the way the brain functions. Lozanov therefore made indirect instruction (suggestion) central to his teaching system. In suggestopedia, as he called his method, consciousness is shifted away from the curriculum to focus on something peripheral. The curriculum then becomes peripheral and is dealt with by the reserve capacity of the brain. 
The suggestopedic approach to foreign language learning provides a good illustration. In its most recent variant (1980), it consists of the reading of vocabulary and text while the class is listening to music. The first session is in two parts. In the first part, the music is classical (Mozart, Beethoven, Brahms) and the teacher reads the text slowly and solemnly, with attention to the dynamics of the music. The students follow the text in their books. This is followed by several minutes of silence. In the second part, they listen to baroque music (Bach, Corelli, Handel) while the teacher reads the text in a normal speaking voice. During this time they have their books closed. During the whole of this session, their attention is passive; they listen to the music but make no attempt to learn the material. Beforehand, the students have been carefully prepared for the language learning experience. Through meeting with the staff and satisfied students they develop the expectation that learning will be easy and pleasant and that they will successfully learn several hundred words of the foreign language during the class. In a preliminary talk, the teacher introduces them to the material to be covered, but does not teach it. Likewise, the students are instructed not to try to learn it during this introduction. Some hours after the two-part session, there is a follow-up class at which the students are stimulated to recall the material presented. Once again the approach is indirect. The students do not focus their attention on trying to remember the vocabulary, but focus on using the language to communicate (e.g. through games or improvised dramatisations). Such methods are not unusual in language teaching. What is distinctive in the suggestopedic method is that they are devoted entirely to assisting recall. The learning of the material is assumed to be automatic and effortless, accomplished while listening to music. 
The teacher's task is to assist the students to apply what they have learned paraconsciously, and in doing so to make it easily accessible to consciousness. Another difference from conventional teaching is the evidence that students can regularly learn 1000 new words of a foreign language during a suggestopedic session, as well as grammar and idiom. Lozanov experimented with teaching by direct suggestion during sleep, hypnosis and trance states, but found such procedures unnecessary. Hypnosis, yoga, Silva mind-control, religious ceremonies and faith healing are all associated with successful suggestion, but none of their techniques seem to be essential to it. Such rituals may be seen as placebos. Lozanov acknowledges that the ritual surrounding suggestion in his own system is also a placebo, but maintains that without such a placebo people are unable or afraid to tap the reserve capacity of their brains. Like any placebo, it must be dispensed with authority to be effective. Just as a doctor calls on the full power of autocratic suggestion by insisting that the patient take precisely this white capsule precisely three times a day before meals, Lozanov is categoric in insisting that the suggestopedic session be conducted exactly in the manner designated, by trained and accredited suggestopedic teachers. While suggestopedia has gained some notoriety through success in the teaching of modern languages, few teachers are able to emulate the spectacular results of Lozanov and his associates. We can, perhaps, attribute mediocre results to an inadequate placebo effect. The students have not developed the appropriate mind set. They are often not motivated to learn through this method. They do not have enough faith. They do not see it as real teaching, especially as it does not seem to involve the work they have learned to believe is essential to learning. | Prior to the suggestopedia class, students are made aware that the language experience will be demanding. | c
id_2142 | EDUCATING PSYCHE. Educating Psyche by Bernie Neville is a book which looks at radical new approaches to learning, describing the effects of emotion, imagination and the unconscious on learning. One theory discussed in the book is that proposed by George Lozanov, which focuses on the power of suggestion. Lozanov's instructional technique is based on the evidence that the connections made in the brain through unconscious processing (which he calls non-specific mental reactivity) are more durable than those made through conscious processing. Besides the laboratory evidence for this, we know from our experience that we often remember what we have perceived peripherally, long after we have forgotten what we set out to learn. If we think of a book we studied months or years ago, we will find it easier to recall peripheral details - the colour, the binding, the typeface, the table at the library where we sat while studying it - than the content on which we were concentrating. If we think of a lecture we listened to with great concentration, we will recall the lecturer's appearance and mannerisms, our place in the auditorium, the failure of the air-conditioning, much more easily than the ideas we went to learn. Even if these peripheral details are a bit elusive, they come back readily in hypnosis or when we relive the event imaginatively, as in psychodrama. The details of the content of the lecture, on the other hand, seem to have gone forever. This phenomenon can be partly attributed to the common counterproductive approach to study (making extreme efforts to memorise, tensing muscles, inducing fatigue), but it also simply reflects the way the brain functions. Lozanov therefore made indirect instruction (suggestion) central to his teaching system. In suggestopedia, as he called his method, consciousness is shifted away from the curriculum to focus on something peripheral. The curriculum then becomes peripheral and is dealt with by the reserve capacity of the brain. 
The suggestopedic approach to foreign language learning provides a good illustration. In its most recent variant (1980), it consists of the reading of vocabulary and text while the class is listening to music. The first session is in two parts. In the first part, the music is classical (Mozart, Beethoven, Brahms) and the teacher reads the text slowly and solemnly, with attention to the dynamics of the music. The students follow the text in their books. This is followed by several minutes of silence. In the second part, they listen to baroque music (Bach, Corelli, Handel) while the teacher reads the text in a normal speaking voice. During this time they have their books closed. During the whole of this session, their attention is passive; they listen to the music but make no attempt to learn the material. Beforehand, the students have been carefully prepared for the language learning experience. Through meeting with the staff and satisfied students they develop the expectation that learning will be easy and pleasant and that they will successfully learn several hundred words of the foreign language during the class. In a preliminary talk, the teacher introduces them to the material to be covered, but does not teach it. Likewise, the students are instructed not to try to learn it during this introduction. Some hours after the two-part session, there is a follow-up class at which the students are stimulated to recall the material presented. Once again the approach is indirect. The students do not focus their attention on trying to remember the vocabulary, but focus on using the language to communicate (e.g. through games or improvised dramatisations). Such methods are not unusual in language teaching. What is distinctive in the suggestopedic method is that they are devoted entirely to assisting recall. The learning of the material is assumed to be automatic and effortless, accomplished while listening to music. 
The teacher's task is to assist the students to apply what they have learned paraconsciously, and in doing so to make it easily accessible to consciousness. Another difference from conventional teaching is the evidence that students can regularly learn 1000 new words of a foreign language during a suggestopedic session, as well as grammar and idiom. Lozanov experimented with teaching by direct suggestion during sleep, hypnosis and trance states, but found such procedures unnecessary. Hypnosis, yoga, Silva mind-control, religious ceremonies and faith healing are all associated with successful suggestion, but none of their techniques seem to be essential to it. Such rituals may be seen as placebos. Lozanov acknowledges that the ritual surrounding suggestion in his own system is also a placebo, but maintains that without such a placebo people are unable or afraid to tap the reserve capacity of their brains. Like any placebo, it must be dispensed with authority to be effective. Just as a doctor calls on the full power of autocratic suggestion by insisting that the patient take precisely this white capsule precisely three times a day before meals, Lozanov is categoric in insisting that the suggestopedic session be conducted exactly in the manner designated, by trained and accredited suggestopedic teachers. While suggestopedia has gained some notoriety through success in the teaching of modern languages, few teachers are able to emulate the spectacular results of Lozanov and his associates. We can, perhaps, attribute mediocre results to an inadequate placebo effect. The students have not developed the appropriate mind set. They are often not motivated to learn through this method. They do not have enough faith. They do not see it as real teaching, especially as it does not seem to involve the work they have learned to believe is essential to learning. | In the example of suggestopedic teaching in the fourth paragraph, the only variable that changes is the music. | c
id_2143 | ET. From a space exploration point of view, a satellite is a human-made object placed into orbit around a planet, for example the Earth, Saturn or Jupiter. In astronomy, a satellite is any celestial body orbiting around a planet or star, so the moon is a natural or non-artificial satellite of the Earth; the other planets encircling the sun are natural satellites of the sun. Mercury and Venus are the only planets to have no moons. Mars has two small asteroid-like moons called Phobos and Deimos. Saturn has at least 30 orbiting moons. The largest of Saturn's moons, Titan, is 1.5 times larger than the Earth's moon, making it the second-largest moon in the solar system. Titan is larger than the planets Mercury and Pluto. The four largest of Jupiter's 60 moons are Ganymede, Io, Callisto and Europa. These four Galilean satellites were discovered in 1610 and are all planet-sized. Europa has an icy surface at minus 170 C. However, heat linked to volcanic activity on Europa may be sufficient to maintain a layer of liquid water below the ice sheet, making it one of the few places in the solar system capable of sustaining life. This possibility featured in the 1984 science fiction film 2010, based on an Arthur C Clarke novel. NASA now plans to send a probe to Europa to see if it harbours life. | Jupiter has more moons than any other planet. | n
id_2144 | ET. From a space exploration point of view, a satellite is a human-made object placed into orbit around a planet, for example the Earth, Saturn or Jupiter. In astronomy, a satellite is any celestial body orbiting around a planet or star, so the moon is a natural or non-artificial satellite of the Earth; the other planets encircling the sun are natural satellites of the sun. Mercury and Venus are the only planets to have no moons. Mars has two small asteroid-like moons called Phobos and Deimos. Saturn has at least 30 orbiting moons. The largest of Saturn's moons, Titan, is 1.5 times larger than the Earth's moon, making it the second-largest moon in the solar system. Titan is larger than the planets Mercury and Pluto. The four largest of Jupiter's 60 moons are Ganymede, Io, Callisto and Europa. These four Galilean satellites were discovered in 1610 and are all planet-sized. Europa has an icy surface at minus 170 C. However, heat linked to volcanic activity on Europa may be sufficient to maintain a layer of liquid water below the ice sheet, making it one of the few places in the solar system capable of sustaining life. This possibility featured in the 1984 science fiction film 2010, based on an Arthur C Clarke novel. NASA now plans to send a probe to Europa to see if it harbours life. | The Earth's moon is the third-largest moon in the solar system. | n
id_2145 | ET. From a space exploration point of view, a satellite is a human-made object placed into orbit around a planet, for example the Earth, Saturn or Jupiter. In astronomy, a satellite is any celestial body orbiting around a planet or star, so the moon is a natural or non-artificial satellite of the Earth; the other planets encircling the sun are natural satellites of the sun. Mercury and Venus are the only planets to have no moons. Mars has two small asteroid-like moons called Phobos and Deimos. Saturn has at least 30 orbiting moons. The largest of Saturn's moons, Titan, is 1.5 times larger than the Earth's moon, making it the second-largest moon in the solar system. Titan is larger than the planets Mercury and Pluto. The four largest of Jupiter's 60 moons are Ganymede, Io, Callisto and Europa. These four Galilean satellites were discovered in 1610 and are all planet-sized. Europa has an icy surface at minus 170 C. However, heat linked to volcanic activity on Europa may be sufficient to maintain a layer of liquid water below the ice sheet, making it one of the few places in the solar system capable of sustaining life. This possibility featured in the 1984 science fiction film 2010, based on an Arthur C Clarke novel. NASA now plans to send a probe to Europa to see if it harbours life. | The atmosphere on Europa is capable of sustaining life. | n
id_2146 | ET. From a space exploration point of view, a satellite is a human-made object placed into orbit around a planet, for example the Earth, Saturn or Jupiter. In astronomy, a satellite is any celestial body orbiting around a planet or star, so the moon is a natural or non-artificial satellite of the Earth; the other planets encircling the sun are natural satellites of the sun. Mercury and Venus are the only planets to have no moons. Mars has two small asteroid-like moons called Phobos and Deimos. Saturn has at least 30 orbiting moons. The largest of Saturn's moons, Titan, is 1.5 times larger than the Earth's moon, making it the second-largest moon in the solar system. Titan is larger than the planets Mercury and Pluto. The four largest of Jupiter's 60 moons are Ganymede, Io, Callisto and Europa. These four Galilean satellites were discovered in 1610 and are all planet-sized. Europa has an icy surface at minus 170 C. However, heat linked to volcanic activity on Europa may be sufficient to maintain a layer of liquid water below the ice sheet, making it one of the few places in the solar system capable of sustaining life. This possibility featured in the 1984 science fiction film 2010, based on an Arthur C Clarke novel. NASA now plans to send a probe to Europa to see if it harbours life. | The low number of craters on the surface of Europa results from volcanic activity and lava flows. | n
id_2147 | EU national airlines will not make any money on their short-haul European operations this year despite heavy promotions of reduced fares and a campaign to cut costs. Most of these so-called flag carriers have reluctantly decided to pass on soaring fuel costs to short-haul passengers in the form of a surcharge on tickets. At the same time, they are encountering strong competition on most European routes from the budget carriers, who have pledged not to levy surcharges. Budget airlines have dramatically raised their short-haul market share, taking advantage of the financial difficulties at flag carriers. Short-haul accounts for almost one-fourth of the national carriers' business while the remainder is derived from long-haul flights, demand for which has experienced a strong recovery. | Losses on short-haul flights are adding to the financial difficulties experienced by national airlines. | c
id_2148 | EU national airlines will not make any money on their short-haul European operations this year despite heavy promotions of reduced fares and a campaign to cut costs. Most of these so-called flag carriers have reluctantly decided to pass on soaring fuel costs to short-haul passengers in the form of a surcharge on tickets. At the same time, they are encountering strong competition on most European routes from the budget carriers, who have pledged not to levy surcharges. Budget airlines have dramatically raised their short-haul market share, taking advantage of the financial difficulties at flag carriers. Short-haul accounts for almost one-fourth of the national carriers' business while the remainder is derived from long-haul flights, demand for which has experienced a strong recovery. | The cost of flying on European routes has fallen. | e
id_2149 | EU national airlines will not make any money on their short-haul European operations this year despite heavy promotions of reduced fares and a campaign to cut costs. Most of these so-called flag carriers have reluctantly decided to pass on soaring fuel costs to short-haul passengers in the form of a surcharge on tickets. At the same time, they are encountering strong competition on most European routes from the budget carriers, who have pledged not to levy surcharges. Budget airlines have dramatically raised their short-haul market share, taking advantage of the financial difficulties at flag carriers. Short-haul accounts for almost one-fourth of the national carriers' business while the remainder is derived from long-haul flights, demand for which has experienced a strong recovery. | National airlines have imposed fuel surcharges on long-haul passengers too. | n
id_2150 | EU officials will meet for what are expected to be the final talks on radical new rules cracking down on corruption between extractive companies and third world governments. Under the deal, all extractive companies will be required to declare all payments to and from governments over 100,000 on a country-by-country basis. These payments will have to be published for each individual project, while projects are defined as operational activities that are governed by a single contract, license, lease, concession or similar legal agreement, and form the basis for payment liabilities with a government. EU lawmakers emphasised the need to put an end to the so-called "resource curse", whereby countries have remained poor despite being rich in natural and mineral resources due, in part, to high-level corruption. That being said, governments are expected to block a proposal supported by Members of the European Parliament to extend the scope of the legislation to cover the communications sector and construction. | The EU is putting forward new rules that crack down on corruption because of their desire to eradicate corruption. | e
id_2151 | EU officials will meet for what are expected to be the final talks on radical new rules cracking down on corruption between extractive companies and third world governments. Under the deal, all extractive companies will be required to declare all payments to and from governments over 100,000 on a country-by-country basis. These payments will have to be published for each individual project, while projects are defined as operational activities that are governed by a single contract, license, lease, concession or similar legal agreement, and form the basis for payment liabilities with a government. EU lawmakers emphasised the need to put an end to the so-called "resource curse", whereby countries have remained poor despite being rich in natural and mineral resources due, in part, to high-level corruption. That being said, governments are expected to block a proposal supported by Members of the European Parliament to extend the scope of the legislation to cover the communications sector and construction. | An extractive company receiving several project payments from governments would not have to declare those payments, providing each payment exceeds 100,000. | c
id_2152 | EU officials will meet for what are expected to be the final talks on radical new rules cracking down on corruption between extractive companies and third world governments. Under the deal, all extractive companies will be required to declare all payments to and from governments over 100,000 on a country-by-country basis. These payments will have to be published for each individual project, while projects are defined as operational activities that are governed by a single contract, license, lease, concession or similar legal agreement, and form the basis for payment liabilities with a government. EU lawmakers emphasised the need to put an end to the so-called "resource curse", whereby countries have remained poor despite being rich in natural and mineral resources due, in part, to high-level corruption. That being said, governments are expected to block a proposal supported by Members of the European Parliament to extend the scope of the legislation to cover the communications sector and construction. | The "resource curse" is a term created to define the current situation of impoverished corrupt third-world countries. | n
id_2153 | EU officials will meet for what are expected to be the final talks on radical new rules cracking down on corruption between extractive companies and third world governments. Under the deal, all extractive companies will be required to declare all payments to and from governments over 100,000 on a country-by-country basis. These payments will have to be published for each individual project, while projects are defined as operational activities that are governed by a single contract, license, lease, concession or similar legal agreement, and form the basis for payment liabilities with a government. EU lawmakers emphasised the need to put an end to the so-called "resource curse", whereby countries have remained poor despite being rich in natural and mineral resources due, in part, to high-level corruption. That being said, governments are expected to block a proposal supported by Members of the European Parliament to extend the scope of the legislation to cover the communications sector and construction. | Governments are protective over the communications sector, due to their many joint endeavours and potential profits. | n
id_2154 | EUROPEAN TRANSPORT SYSTEMS 1990-2010 What have been the trends and what are the prospects for European transport systems? It is difficult to conceive of vigorous economic growth without an efficient transport system. Although modern information technologies can reduce the demand for physical transport by facilitating teleworking and teleservices, the requirement for transport continues to increase. There are two key factors behind this trend. For passenger transport, the determining factor is the spectacular growth in car use. The number of cars on European Union (EU) roads saw an increase of three million cars each year from 1990 to 2010, and in the next decade the EU will see a further substantial increase in its fleet. As far as goods transport is concerned, growth is due to a large extent to changes in the European economy and its system of production. In the last 20 years, as internal frontiers have been abolished, the EU has moved from a stock economy to a flow economy. This phenomenon has been emphasised by the relocation of some industries, particularly those which are labour-intensive, to reduce production costs, even though the production site is hundreds or even thousands of kilometres away from the final assembly plant or away from users. The strong economic growth expected in countries which are candidates for entry to the EU will also increase transport flows, in particular road haulage traffic. In 1998, some of these countries already exported more than twice their 1990 volumes and imported more than five times their 1990 volumes. And although many candidate countries inherited a transport system which encourages rail, the distribution between modes has tipped sharply in favour of road transport since the 1990s. 
Between 1990 and 1998, road haulage increased by 19.4%, while during the same period rail haulage decreased by 43.5%, although - and this could benefit the enlarged EU - it is still on average at a much higher level than in existing member states. However, a new imperative - sustainable development - offers an opportunity for adapting the EU's common transport policy. This objective, agreed by the Gothenburg European Council, has to be achieved by integrating environmental considerations into Community policies, and shifting the balance between modes of transport lies at the heart of its strategy. The ambitious objective can only be fully achieved by 2020, but proposed measures are nonetheless a first essential step towards a sustainable transport system which will ideally be in place in 30 years' time, that is by 2040. In 1998, energy consumption in the transport sector was to blame for 28% of emissions of CO2, the leading greenhouse gas. According to the latest estimates, if nothing is done to reverse the traffic growth trend, CO2 emissions from transport can be expected to increase by around 50% to 1,113 billion tonnes by 2020, compared with the 739 billion tonnes recorded in 1990. Once again, road transport is the main culprit since it alone accounts for 84% of the CO2 emissions attributable to transport. Using alternative fuels and improving energy efficiency is thus both an ecological necessity and a technological challenge. At the same time greater efforts must be made to achieve a modal shift. Such a change cannot be achieved overnight, all the less so after over half a century of constant deterioration in favour of road. This has reached such a pitch that today rail freight services are facing marginalisation, with just 8% of market share, and with international goods trains struggling along at an average speed of 18km/h. Three possible options have emerged. The first approach would consist of focusing on road transport solely through pricing. 
This option would not be accompanied by complementary measures in the other modes of transport. In the short term it might curb the growth in road transport through the better loading ratio of goods vehicles and occupancy rates of passenger vehicles expected as a result of the increase in the price of transport. However, the lack of measures available to revitalise other modes of transport would make it impossible for more sustainable modes of transport to take up the baton. The second approach also concentrates on road transport pricing but is accompanied by measures to increase the efficiency of the other modes (better quality of services, logistics, technology). However, this approach does not include investment in new infrastructure, nor does it guarantee better regional cohesion. It could help to achieve greater uncoupling than the first approach, but road transport would keep the lion's share of the market and continue to concentrate on saturated arteries, despite being the most polluting of the modes. It is therefore not enough to guarantee the necessary shift of the balance. The third approach, which is not new, comprises a series of measures ranging from pricing to revitalising alternative modes of transport and targeting investment in the trans-European network. This integrated approach would allow the market shares of the other modes to return to their 1998 levels and thus make a shift of balance. It is far more ambitious than it looks, bearing in mind the historical imbalance in favour of roads for the last fifty years, but would achieve a marked break in the link between road transport growth and economic growth, without placing restrictions on the mobility of people and goods. | Cars are prohibitively expensive in some EU candidate countries. | n
id_2155 | EUROPEAN TRANSPORT SYSTEMS 1990-2010 What have been the trends and what are the prospects for European transport systems? It is difficult to conceive of vigorous economic growth without an efficient transport system. Although modern information technologies can reduce the demand for physical transport by facilitating teleworking and teleservices, the requirement for transport continues to increase. There are two key factors behind this trend. For passenger transport, the determining factor is the spectacular growth in car use. The number of cars on European Union (EU) roads saw an increase of three million cars each year from 1990 to 2010, and in the next decade the EU will see a further substantial increase in its fleet. As far as goods transport is concerned, growth is due to a large extent to changes in the European economy and its system of production. In the last 20 years, as internal frontiers have been abolished, the EU has moved from a stock economy to a flow economy. This phenomenon has been emphasised by the relocation of some industries, particularly those which are labour-intensive, to reduce production costs, even though the production site is hundreds or even thousands of kilometres away from the final assembly plant or away from users. The strong economic growth expected in countries which are candidates for entry to the EU will also increase transport flows, in particular road haulage traffic. In 1998, some of these countries already exported more than twice their 1990 volumes and imported more than five times their 1990 volumes. And although many candidate countries inherited a transport system which encourages rail, the distribution between modes has tipped sharply in favour of road transport since the 1990s. 
Between 1990 and 1998, road haulage increased by 19.4%, while during the same period rail haulage decreased by 43.5%, although - and this could benefit the enlarged EU - it is still on average at a much higher level than in existing member states. However, a new imperative - sustainable development - offers an opportunity for adapting the EU's common transport policy. This objective, agreed by the Gothenburg European Council, has to be achieved by integrating environmental considerations into Community policies, and shifting the balance between modes of transport lies at the heart of its strategy. The ambitious objective can only be fully achieved by 2020, but proposed measures are nonetheless a first essential step towards a sustainable transport system which will ideally be in place in 30 years' time, that is by 2040. In 1998, energy consumption in the transport sector was to blame for 28% of emissions of CO2, the leading greenhouse gas. According to the latest estimates, if nothing is done to reverse the traffic growth trend, CO2 emissions from transport can be expected to increase by around 50% to 1,113 billion tonnes by 2020, compared with the 739 billion tonnes recorded in 1990. Once again, road transport is the main culprit since it alone accounts for 84% of the CO2 emissions attributable to transport. Using alternative fuels and improving energy efficiency is thus both an ecological necessity and a technological challenge. At the same time greater efforts must be made to achieve a modal shift. Such a change cannot be achieved overnight, all the less so after over half a century of constant deterioration in favour of road. This has reached such a pitch that today rail freight services are facing marginalisation, with just 8% of market share, and with international goods trains struggling along at an average speed of 18km/h. Three possible options have emerged. The first approach would consist of focusing on road transport solely through pricing. 
This option would not be accompanied by complementary measures in the other modes of transport. In the short term it might curb the growth in road transport through the better loading ratio of goods vehicles and occupancy rates of passenger vehicles expected as a result of the increase in the price of transport. However, the lack of measures available to revitalise other modes of transport would make it impossible for more sustainable modes of transport to take up the baton. The second approach also concentrates on road transport pricing but is accompanied by measures to increase the efficiency of the other modes (better quality of services, logistics, technology). However, this approach does not include investment in new infrastructure, nor does it guarantee better regional cohesion. It could help to achieve greater uncoupling than the first approach, but road transport would keep the lion's share of the market and continue to concentrate on saturated arteries, despite being the most polluting of the modes. It is therefore not enough to guarantee the necessary shift of the balance. The third approach, which is not new, comprises a series of measures ranging from pricing to revitalising alternative modes of transport and targeting investment in the trans-European network. This integrated approach would allow the market shares of the other modes to return to their 1998 levels and thus make a shift of balance. It is far more ambitious than it looks, bearing in mind the historical imbalance in favour of roads for the last fifty years, but would achieve a marked break in the link between road transport growth and economic growth, without placing restrictions on the mobility of people and goods. | The need for transport is growing, despite technological developments. | e
id_2156 | EUROPEAN TRANSPORT SYSTEMS 1990-2010 What have been the trends and what are the prospects for European transport systems? It is difficult to conceive of vigorous economic growth without an efficient transport system. Although modern information technologies can reduce the demand for physical transport by facilitating teleworking and teleservices, the requirement for transport continues to increase. There are two key factors behind this trend. For passenger transport, the determining factor is the spectacular growth in car use. The number of cars on European Union (EU) roads saw an increase of three million cars each year from 1990 to 2010, and in the next decade the EU will see a further substantial increase in its fleet. As far as goods transport is concerned, growth is due to a large extent to changes in the European economy and its system of production. In the last 20 years, as internal frontiers have been abolished, the EU has moved from a stock economy to a flow economy. This phenomenon has been emphasised by the relocation of some industries, particularly those which are labourintensive, to reduce production costs, even though the production site is hundreds or even thousands of kilometres away from the final assembly plant or away from users. The strong economic growth expected in countries which are candidates for entry to the EU will also increase transport flows, in particular road haulage traffic. In 1998, some of these countries already exported more than twice their 1990 volumes and imported more than five times their 1990 volumes. And although many candidate countries inherited a transport system which encourages rail, the distribution between modes has tipped sharply in favour of road transport since the 1990s. 
Between 1990 and 1998, road haulage increased by 19.4%, while during the same period rail haulage decreased by 43.5%, although (and this could benefit the enlarged EU) it is still on average at a much higher level than in existing member states. However, a new imperative, sustainable development, offers an opportunity for adapting the EU's common transport policy. This objective, agreed by the Gothenburg European Council, has to be achieved by integrating environmental considerations into Community policies, and shifting the balance between modes of transport lies at the heart of its strategy. The ambitious objective can only be fully achieved by 2020, but proposed measures are nonetheless a first essential step towards a sustainable transport system which will ideally be in place in 30 years' time, that is by 2040. In 1998, energy consumption in the transport sector was to blame for 28% of emissions of CO2, the leading greenhouse gas. According to the latest estimates, if nothing is done to reverse the traffic growth trend, CO2 emissions from transport can be expected to increase by around 50% to 1,113 billion tonnes by 2020, compared with the 739 billion tonnes recorded in 1990. Once again, road transport is the main culprit since it alone accounts for 84% of the CO2 emissions attributable to transport. Using alternative fuels and improving energy efficiency is thus both an ecological necessity and a technological challenge. At the same time greater efforts must be made to achieve a modal shift. Such a change cannot be achieved overnight, all the less so after over half a century of constant deterioration in favour of road. This has reached such a pitch that today rail freight services are facing marginalisation, with just 8% of market share, and with international goods trains struggling along at an average speed of 18km/h. Three possible options have emerged. The first approach would consist of focusing on road transport solely through pricing. 
This option would not be accompanied by complementary measures in the other modes of transport. In the short term it might curb the growth in road transport through the better loading ratio of goods vehicles and occupancy rates of passenger vehicles expected as a result of the increase in the price of transport. However, the lack of measures available to revitalise other modes of transport would make it impossible for more sustainable modes of transport to take up the baton. The second approach also concentrates on road transport pricing but is accompanied by measures to increase the efficiency of the other modes (better quality of services, logistics, technology). However, this approach does not include investment in new infrastructure, nor does it guarantee better regional cohesion. It could help to achieve greater uncoupling than the first approach, but road transport would keep the lion's share of the market and continue to concentrate on saturated arteries, despite being the most polluting of the modes. It is therefore not enough to guarantee the necessary shift of the balance. The third approach, which is not new, comprises a series of measures ranging from pricing to revitalising alternative modes of transport and targeting investment in the trans-European network. This integrated approach would allow the market shares of the other modes to return to their 1998 levels and thus make a shift of balance. It is far more ambitious than it looks, bearing in mind the historical imbalance in favour of roads for the last fifty years, but would achieve a marked break in the link between road transport growth and economic growth, without placing restrictions on the mobility of people and goods. | The Gothenburg European Council was set up 30 years ago. | n
id_2157 | EUROPEAN TRANSPORT SYSTEMS 1990-2010 What have been the trends and what are the prospects for European transport systems? It is difficult to conceive of vigorous economic growth without an efficient transport system. Although modern information technologies can reduce the demand for physical transport by facilitating teleworking and teleservices, the requirement for transport continues to increase. There are two key factors behind this trend. For passenger transport, the determining factor is the spectacular growth in car use. The number of cars on European Union (EU) roads saw an increase of three million cars each year from 1990 to 2010, and in the next decade the EU will see a further substantial increase in its fleet. As far as goods transport is concerned, growth is due to a large extent to changes in the European economy and its system of production. In the last 20 years, as internal frontiers have been abolished, the EU has moved from a stock economy to a flow economy. This phenomenon has been emphasised by the relocation of some industries, particularly those which are labour intensive, to reduce production costs, even though the production site is hundreds or even thousands of kilometres away from the final assembly plant or away from users. The strong economic growth expected in countries which are candidates for entry to the EU will also increase transport flows, in particular road haulage traffic. In 1998, some of these countries already exported more than twice their 1990 volumes and imported more than five times their 1990 volumes. And although many candidate countries inherited a transport system which encourages rail, the distribution between modes has tipped sharply in favour of road transport since the 1990s. 
Between 1990 and 1998, road haulage increased by 19.4%, while during the same period rail haulage decreased by 43.5%, although (and this could benefit the enlarged EU) it is still on average at a much higher level than in existing member states. However, a new imperative, sustainable development, offers an opportunity for adapting the EU's common transport policy. This objective, agreed by the Gothenburg European Council, has to be achieved by integrating environmental considerations into Community policies, and shifting the balance between modes of transport lies at the heart of its strategy. The ambitious objective can only be fully achieved by 2020, but proposed measures are nonetheless a first essential step towards a sustainable transport system which will ideally be in place in 30 years' time, that is by 2040. In 1998, energy consumption in the transport sector was to blame for 28% of emissions of CO2, the leading greenhouse gas. According to the latest estimates, if nothing is done to reverse the traffic growth trend, CO2 emissions from transport can be expected to increase by around 50% to 1,113 billion tonnes by 2020, compared with the 739 billion tonnes recorded in 1990. Once again, road transport is the main culprit since it alone accounts for 84% of the CO2 emissions attributable to transport. Using alternative fuels and improving energy efficiency is thus both an ecological necessity and a technological challenge. At the same time greater efforts must be made to achieve a modal shift. Such a change cannot be achieved overnight, all the less so after over half a century of constant deterioration in favour of road. This has reached such a pitch that today rail freight services are facing marginalisation, with just 8% of market share, and with international goods trains struggling along at an average speed of 18km/h. Three possible options have emerged. The first approach would consist of focusing on road transport solely through pricing. 
This option would not be accompanied by complementary measures in the other modes of transport. In the short term it might curb the growth in road transport through the better loading ratio of goods vehicles and occupancy rates of passenger vehicles expected as a result of the increase in the price of transport. However, the lack of measures available to revitalise other modes of transport would make it impossible for more sustainable modes of transport to take up the baton. The second approach also concentrates on road transport pricing but is accompanied by measures to increase the efficiency of the other modes (better quality of services, logistics, technology). However, this approach does not include investment in new infrastructure, nor does it guarantee better regional cohesion. It could help to achieve greater uncoupling than the first approach, but road transport would keep the lion's share of the market and continue to concentrate on saturated arteries, despite being the most polluting of the modes. It is therefore not enough to guarantee the necessary shift of the balance. The third approach, which is not new, comprises a series of measures ranging from pricing to revitalising alternative modes of transport and targeting investment in the trans-European network. This integrated approach would allow the market shares of the other modes to return to their 1998 levels and thus make a shift of balance. It is far more ambitious than it looks, bearing in mind the historical imbalance in favour of roads for the last fifty years, but would achieve a marked break in the link between road transport growth and economic growth, without placing restrictions on the mobility of people and goods. | To reduce production costs, some industries have been moved closer to their relevant consumers. | c
id_2158 | EUROPEAN TRANSPORT SYSTEMS 1990-2010 What have been the trends and what are the prospects for European transport systems? It is difficult to conceive of vigorous economic growth without an efficient transport system. Although modern information technologies can reduce the demand for physical transport by facilitating teleworking and teleservices, the requirement for transport continues to increase. There are two key factors behind this trend. For passenger transport, the determining factor is the spectacular growth in car use. The number of cars on European Union (EU) roads saw an increase of three million cars each year from 1990 to 2010, and in the next decade the EU will see a further substantial increase in its fleet. As far as goods transport is concerned, growth is due to a large extent to changes in the European economy and its system of production. In the last 20 years, as internal frontiers have been abolished, the EU has moved from a stock economy to a flow economy. This phenomenon has been emphasised by the relocation of some industries, particularly those which are labour intensive, to reduce production costs, even though the production site is hundreds or even thousands of kilometres away from the final assembly plant or away from users. The strong economic growth expected in countries which are candidates for entry to the EU will also increase transport flows, in particular road haulage traffic. In 1998, some of these countries already exported more than twice their 1990 volumes and imported more than five times their 1990 volumes. And although many candidate countries inherited a transport system which encourages rail, the distribution between modes has tipped sharply in favour of road transport since the 1990s. 
Between 1990 and 1998, road haulage increased by 19.4%, while during the same period rail haulage decreased by 43.5%, although (and this could benefit the enlarged EU) it is still on average at a much higher level than in existing member states. However, a new imperative, sustainable development, offers an opportunity for adapting the EU's common transport policy. This objective, agreed by the Gothenburg European Council, has to be achieved by integrating environmental considerations into Community policies, and shifting the balance between modes of transport lies at the heart of its strategy. The ambitious objective can only be fully achieved by 2020, but proposed measures are nonetheless a first essential step towards a sustainable transport system which will ideally be in place in 30 years' time, that is by 2040. In 1998, energy consumption in the transport sector was to blame for 28% of emissions of CO2, the leading greenhouse gas. According to the latest estimates, if nothing is done to reverse the traffic growth trend, CO2 emissions from transport can be expected to increase by around 50% to 1,113 billion tonnes by 2020, compared with the 739 billion tonnes recorded in 1990. Once again, road transport is the main culprit since it alone accounts for 84% of the CO2 emissions attributable to transport. Using alternative fuels and improving energy efficiency is thus both an ecological necessity and a technological challenge. At the same time greater efforts must be made to achieve a modal shift. Such a change cannot be achieved overnight, all the less so after over half a century of constant deterioration in favour of road. This has reached such a pitch that today rail freight services are facing marginalisation, with just 8% of market share, and with international goods trains struggling along at an average speed of 18km/h. Three possible options have emerged. The first approach would consist of focusing on road transport solely through pricing. 
This option would not be accompanied by complementary measures in the other modes of transport. In the short term it might curb the growth in road transport through the better loading ratio of goods vehicles and occupancy rates of passenger vehicles expected as a result of the increase in the price of transport. However, the lack of measures available to revitalise other modes of transport would make it impossible for more sustainable modes of transport to take up the baton. The second approach also concentrates on road transport pricing but is accompanied by measures to increase the efficiency of the other modes (better quality of services, logistics, technology). However, this approach does not include investment in new infrastructure, nor does it guarantee better regional cohesion. It could help to achieve greater uncoupling than the first approach, but road transport would keep the lion's share of the market and continue to concentrate on saturated arteries, despite being the most polluting of the modes. It is therefore not enough to guarantee the necessary shift of the balance. The third approach, which is not new, comprises a series of measures ranging from pricing to revitalising alternative modes of transport and targeting investment in the trans-European network. This integrated approach would allow the market shares of the other modes to return to their 1998 levels and thus make a shift of balance. It is far more ambitious than it looks, bearing in mind the historical imbalance in favour of roads for the last fifty years, but would achieve a marked break in the link between road transport growth and economic growth, without placing restrictions on the mobility of people and goods. | By the end of this decade, CO2 emissions from transport are predicted to reach 739 billion tonnes. | c
id_2159 | EUROPEAN TRANSPORT SYSTEMS. 1990-2010 What have been the trends and what are the prospects for European transport systems? It is difficult to conceive of vigorous economic growth without an efficient transport system. Although modern information technologies can reduce the demand for physical transport by facilitating teleworking and teleservices, the requirement for transport continues to increase. There are two key factors behind this trend. For passenger transport, the determining factor is the spectacular growth in car use. The number of cars on European Union (EU) roads saw an increase of three million cars each year from 1990 to 2010, and in the next decade the EU will see a further substantial increase in its fleet. As far as goods transport is concerned, growth is due to a large extent to changes in the European economy and its system of production. In the last 20 years, as internal frontiers have been abolished, the EU has moved from a stock economy to a flow economy. This phenomenon has been emphasised by the relocation of some industries, particularly those which are labour intensive, to reduce production costs, even though the production site is hundreds or even thousands of kilometres away from the final assembly plant or away from users. The strong economic growth expected in countries which are candidates for entry to the EU will also increase transport flows, in particular road haulage traffic. In 1998, some of these countries already exported more than twice their 1990 volumes and imported more than five times their 1990 volumes. And although many candidate countries inherited a transport system which encourages rail, the distribution between modes has tipped sharply in favour of road transport since the 1990s. 
Between 1990 and 1998, road haulage increased by 19.4%, while during the same period rail haulage decreased by 43.5%, although (and this could benefit the enlarged EU) it is still on average at a much higher level than in existing member states. However, a new imperative, sustainable development, offers an opportunity for adapting the EU's common transport policy. This objective, agreed by the Gothenburg European Council, has to be achieved by integrating environmental considerations into Community policies, and shifting the balance between modes of transport lies at the heart of its strategy. The ambitious objective can only be fully achieved by 2020, but proposed measures are nonetheless a first essential step towards a sustainable transport system which will ideally be in place in 30 years' time, that is by 2040. In 1998, energy consumption in the transport sector was to blame for 28% of emissions of CO2, the leading greenhouse gas. According to the latest estimates, if nothing is done to reverse the traffic growth trend, CO2 emissions from transport can be expected to increase by around 50% to 1,113 billion tonnes by 2020, compared with the 739 billion tonnes recorded in 1990. Once again, road transport is the main culprit since it alone accounts for 84% of the CO2 emissions attributable to transport. Using alternative fuels and improving energy efficiency is thus both an ecological necessity and a technological challenge. At the same time greater efforts must be made to achieve a modal shift. Such a change cannot be achieved overnight, all the less so after over half a century of constant deterioration in favour of road. This has reached such a pitch that today rail freight services are facing marginalisation, with just 8% of market share, and with international goods trains struggling along at an average speed of 18km/h. Three possible options have emerged. The first approach would consist of focusing on road transport solely through pricing. 
This option would not be accompanied by complementary measures in the other modes of transport. In the short term it might curb the growth in road transport through the better loading ratio of goods vehicles and occupancy rates of passenger vehicles expected as a result of the increase in the price of transport. However, the lack of measures available to revitalise other modes of transport would make it impossible for more sustainable modes of transport to take up the baton. The second approach also concentrates on road transport pricing but is accompanied by measures to increase the efficiency of the other modes (better quality of services, logistics, technology). However, this approach does not include investment in new infrastructure, nor does it guarantee better regional cohesion. It could help to achieve greater uncoupling than the first approach, but road transport would keep the lion's share of the market and continue to concentrate on saturated arteries, despite being the most polluting of the modes. It is therefore not enough to guarantee the necessary shift of the balance. The third approach, which is not new, comprises a series of measures ranging from pricing to revitalising alternative modes of transport and targeting investment in the trans-European network. This integrated approach would allow the market shares of the other modes to return to their 1998 levels and thus make a shift of balance. It is far more ambitious than it looks, bearing in mind the historical imbalance in favour of roads for the last fifty years, but would achieve a marked break in the link between road transport growth and economic growth, without placing restrictions on the mobility of people and goods. | To reduce production costs, some industries have been moved closer to their relevant consumers. | c
id_2160 | EUROPEAN TRANSPORT SYSTEMS. 1990-2010 What have been the trends and what are the prospects for European transport systems? It is difficult to conceive of vigorous economic growth without an efficient transport system. Although modern information technologies can reduce the demand for physical transport by facilitating teleworking and teleservices, the requirement for transport continues to increase. There are two key factors behind this trend. For passenger transport, the determining factor is the spectacular growth in car use. The number of cars on European Union (EU) roads saw an increase of three million cars each year from 1990 to 2010, and in the next decade the EU will see a further substantial increase in its fleet. As far as goods transport is concerned, growth is due to a large extent to changes in the European economy and its system of production. In the last 20 years, as internal frontiers have been abolished, the EU has moved from a stock economy to a flow economy. This phenomenon has been emphasised by the relocation of some industries, particularly those which are labour intensive, to reduce production costs, even though the production site is hundreds or even thousands of kilometres away from the final assembly plant or away from users. The strong economic growth expected in countries which are candidates for entry to the EU will also increase transport flows, in particular road haulage traffic. In 1998, some of these countries already exported more than twice their 1990 volumes and imported more than five times their 1990 volumes. And although many candidate countries inherited a transport system which encourages rail, the distribution between modes has tipped sharply in favour of road transport since the 1990s. 
Between 1990 and 1998, road haulage increased by 19.4%, while during the same period rail haulage decreased by 43.5%, although (and this could benefit the enlarged EU) it is still on average at a much higher level than in existing member states. However, a new imperative, sustainable development, offers an opportunity for adapting the EU's common transport policy. This objective, agreed by the Gothenburg European Council, has to be achieved by integrating environmental considerations into Community policies, and shifting the balance between modes of transport lies at the heart of its strategy. The ambitious objective can only be fully achieved by 2020, but proposed measures are nonetheless a first essential step towards a sustainable transport system which will ideally be in place in 30 years' time, that is by 2040. In 1998, energy consumption in the transport sector was to blame for 28% of emissions of CO2, the leading greenhouse gas. According to the latest estimates, if nothing is done to reverse the traffic growth trend, CO2 emissions from transport can be expected to increase by around 50% to 1,113 billion tonnes by 2020, compared with the 739 billion tonnes recorded in 1990. Once again, road transport is the main culprit since it alone accounts for 84% of the CO2 emissions attributable to transport. Using alternative fuels and improving energy efficiency is thus both an ecological necessity and a technological challenge. At the same time greater efforts must be made to achieve a modal shift. Such a change cannot be achieved overnight, all the less so after over half a century of constant deterioration in favour of road. This has reached such a pitch that today rail freight services are facing marginalisation, with just 8% of market share, and with international goods trains struggling along at an average speed of 18km/h. Three possible options have emerged. The first approach would consist of focusing on road transport solely through pricing. 
This option would not be accompanied by complementary measures in the other modes of transport. In the short term it might curb the growth in road transport through the better loading ratio of goods vehicles and occupancy rates of passenger vehicles expected as a result of the increase in the price of transport. However, the lack of measures available to revitalise other modes of transport would make it impossible for more sustainable modes of transport to take up the baton. The second approach also concentrates on road transport pricing but is accompanied by measures to increase the efficiency of the other modes (better quality of services, logistics, technology). However, this approach does not include investment in new infrastructure, nor does it guarantee better regional cohesion. It could help to achieve greater uncoupling than the first approach, but road transport would keep the lion's share of the market and continue to concentrate on saturated arteries, despite being the most polluting of the modes. It is therefore not enough to guarantee the necessary shift of the balance. The third approach, which is not new, comprises a series of measures ranging from pricing to revitalising alternative modes of transport and targeting investment in the trans-European network. This integrated approach would allow the market shares of the other modes to return to their 1998 levels and thus make a shift of balance. It is far more ambitious than it looks, bearing in mind the historical imbalance in favour of roads for the last fifty years, but would achieve a marked break in the link between road transport growth and economic growth, without placing restrictions on the mobility of people and goods. | Cars are prohibitively expensive in some EU candidate countries. | n
id_2161 | EUROPEAN TRANSPORT SYSTEMS. 1990-2010 What have been the trends and what are the prospects for European transport systems? It is difficult to conceive of vigorous economic growth without an efficient transport system. Although modern information technologies can reduce the demand for physical transport by facilitating teleworking and teleservices, the requirement for transport continues to increase. There are two key factors behind this trend. For passenger transport, the determining factor is the spectacular growth in car use. The number of cars on European Union (EU) roads saw an increase of three million cars each year from 1990 to 2010, and in the next decade the EU will see a further substantial increase in its fleet. As far as goods transport is concerned, growth is due to a large extent to changes in the European economy and its system of production. In the last 20 years, as internal frontiers have been abolished, the EU has moved from a stock economy to a flow economy. This phenomenon has been emphasised by the relocation of some industries, particularly those which are labour intensive, to reduce production costs, even though the production site is hundreds or even thousands of kilometres away from the final assembly plant or away from users. The strong economic growth expected in countries which are candidates for entry to the EU will also increase transport flows, in particular road haulage traffic. In 1998, some of these countries already exported more than twice their 1990 volumes and imported more than five times their 1990 volumes. And although many candidate countries inherited a transport system which encourages rail, the distribution between modes has tipped sharply in favour of road transport since the 1990s. 
Between 1990 and 1998, road haulage increased by 19.4%, while during the same period rail haulage decreased by 43.5%, although (and this could benefit the enlarged EU) it is still on average at a much higher level than in existing member states. However, a new imperative, sustainable development, offers an opportunity for adapting the EU's common transport policy. This objective, agreed by the Gothenburg European Council, has to be achieved by integrating environmental considerations into Community policies, and shifting the balance between modes of transport lies at the heart of its strategy. The ambitious objective can only be fully achieved by 2020, but proposed measures are nonetheless a first essential step towards a sustainable transport system which will ideally be in place in 30 years' time, that is by 2040. In 1998, energy consumption in the transport sector was to blame for 28% of emissions of CO2, the leading greenhouse gas. According to the latest estimates, if nothing is done to reverse the traffic growth trend, CO2 emissions from transport can be expected to increase by around 50% to 1,113 billion tonnes by 2020, compared with the 739 billion tonnes recorded in 1990. Once again, road transport is the main culprit since it alone accounts for 84% of the CO2 emissions attributable to transport. Using alternative fuels and improving energy efficiency is thus both an ecological necessity and a technological challenge. At the same time greater efforts must be made to achieve a modal shift. Such a change cannot be achieved overnight, all the less so after over half a century of constant deterioration in favour of road. This has reached such a pitch that today rail freight services are facing marginalisation, with just 8% of market share, and with international goods trains struggling along at an average speed of 18km/h. Three possible options have emerged. The first approach would consist of focusing on road transport solely through pricing. 
This option would not be accompanied by complementary measures in the other modes of transport. In the short term it might curb the growth in road transport through the better loading ratio of goods vehicles and occupancy rates of passenger vehicles expected as a result of the increase in the price of transport. However, the lack of measures available to revitalise other modes of transport would make it impossible for more sustainable modes of transport to take up the baton. The second approach also concentrates on road transport pricing but is accompanied by measures to increase the efficiency of the other modes (better quality of services, logistics, technology). However, this approach does not include investment in new infrastructure, nor does it guarantee better regional cohesion. It could help to achieve greater uncoupling than the first approach, but road transport would keep the lion's share of the market and continue to concentrate on saturated arteries, despite being the most polluting of the modes. It is therefore not enough to guarantee the necessary shift of the balance. The third approach, which is not new, comprises a series of measures ranging from pricing to revitalising alternative modes of transport and targeting investment in the trans-European network. This integrated approach would allow the market shares of the other modes to return to their 1998 levels and thus make a shift of balance. It is far more ambitious than it looks, bearing in mind the historical imbalance in favour of roads for the last fifty years, but would achieve a marked break in the link between road transport growth and economic growth, without placing restrictions on the mobility of people and goods. | By the end of this decade, CO2 emissions from transport are predicted to reach 739 billion tonnes. | c
id_2162 | EUROPEAN TRANSPORT SYSTEMS. 1990-2010 What have been the trends and what are the prospects for European transport systems? It is difficult to conceive of vigorous economic growth without an efficient transport system. Although modern information technologies can reduce the demand for physical transport by facilitating teleworking and teleservices, the requirement for transport continues to increase. There are two key factors behind this trend. For passenger transport, the determining factor is the spectacular growth in car use. The number of cars on European Union (EU) roads saw an increase of three million cars each year from 1990 to 2010, and in the next decade the EU will see a further substantial increase in its fleet. As far as goods transport is concerned, growth is due to a large extent to changes in the European economy and its system of production. In the last 20 years, as internal frontiers have been abolished, the EU has moved from a stock economy to a flow economy. This phenomenon has been emphasised by the relocation of some industries, particularly those which are labour intensive, to reduce production costs, even though the production site is hundreds or even thousands of kilometres away from the final assembly plant or away from users. The strong economic growth expected in countries which are candidates for entry to the EU will also increase transport flows, in particular road haulage traffic. In 1998, some of these countries already exported more than twice their 1990 volumes and imported more than five times their 1990 volumes. And although many candidate countries inherited a transport system which encourages rail, the distribution between modes has tipped sharply in favour of road transport since the 1990s. 
Between 1990 and 1998, road haulage increased by 19.4%, while during the same period rail haulage decreased by 43.5%, although (and this could benefit the enlarged EU) it is still on average at a much higher level than in existing member states. However, a new imperative, sustainable development, offers an opportunity for adapting the EU's common transport policy. This objective, agreed by the Gothenburg European Council, has to be achieved by integrating environmental considerations into Community policies, and shifting the balance between modes of transport lies at the heart of its strategy. The ambitious objective can only be fully achieved by 2020, but proposed measures are nonetheless a first essential step towards a sustainable transport system which will ideally be in place in 30 years' time, that is by 2040. In 1998, energy consumption in the transport sector was to blame for 28% of emissions of CO2, the leading greenhouse gas. According to the latest estimates, if nothing is done to reverse the traffic growth trend, CO2 emissions from transport can be expected to increase by around 50% to 1,113 billion tonnes by 2020, compared with the 739 billion tonnes recorded in 1990. Once again, road transport is the main culprit since it alone accounts for 84% of the CO2 emissions attributable to transport. Using alternative fuels and improving energy efficiency is thus both an ecological necessity and a technological challenge. At the same time greater efforts must be made to achieve a modal shift. Such a change cannot be achieved overnight, all the less so after over half a century of constant deterioration in favour of road. This has reached such a pitch that today rail freight services are facing marginalisation, with just 8% of market share, and with international goods trains struggling along at an average speed of 18km/h. Three possible options have emerged. The first approach would consist of focusing on road transport solely through pricing. 
This option would not be accompanied by complementary measures in the other modes of transport. In the short term it might curb the growth in road transport through the better loading ratio of goods vehicles and occupancy rates of passenger vehicles expected as a result of the increase in the price of transport. However, the lack of measures available to revitalise other modes of transport would make it impossible for more sustainable modes of transport to take up the baton. The second approach also concentrates on road transport pricing but is accompanied by measures to increase the efficiency of the other modes (better quality of services, logistics, technology). However, this approach does not include investment in new infrastructure, nor does it guarantee better regional cohesion. It could help to achieve greater uncoupling than the first approach, but road transport would keep the lion's share of the market and continue to concentrate on saturated arteries, despite being the most polluting of the modes. It is therefore not enough to guarantee the necessary shift of the balance. The third approach, which is not new, comprises a series of measures ranging from pricing to revitalising alternative modes of transport and targeting investment in the trans-European network. This integrated approach would allow the market shares of the other modes to return to their 1998 levels and thus make a shift of balance. It is far more ambitious than it looks, bearing in mind the historical imbalance in favour of roads for the last fifty years, but would achieve a marked break in the link between road transport growth and economic growth, without placing restrictions on the mobility of people and goods. | The Gothenburg European Council was set up 30 years ago. | n
id_2163 | EUROPEAN TRANSPORT SYSTEMS. 1990-2010 What have been the trends and what are the prospects for European transport systems? It is difficult to conceive of vigorous economic growth without an efficient transport system. Although modern information technologies can reduce the demand for physical transport by facilitating teleworking and teleservices, the requirement for transport continues to increase. There are two key factors behind this trend. For passenger transport, the determining factor is the spectacular growth in car use. The number of cars on European Union (EU) roads saw an increase of three million cars each year from 1990 to 2010, and in the next decade the EU will see a further substantial increase in its fleet. As far as goods transport is concerned, growth is due to a large extent to changes in the European economy and its system of production. In the last 20 years, as internal frontiers have been abolished, the EU has moved from a stock economy to a flow economy. This phenomenon has been emphasised by the relocation of some industries, particularly those which are labour intensive, to reduce production costs, even though the production site is hundreds or even thousands of kilometres away from the final assembly plant or away from users. The strong economic growth expected in countries which are candidates for entry to the EU will also increase transport flows, in particular road haulage traffic. In 1998, some of these countries already exported more than twice their 1990 volumes and imported more than five times their 1990 volumes. And although many candidate countries inherited a transport system which encourages rail, the distribution between modes has tipped sharply in favour of road transport since the 1990s. 
Between 1990 and 1998, road haulage increased by 19.4%, while during the same period rail haulage decreased by 43.5%, although (and this could benefit the enlarged EU) it is still on average at a much higher level than in existing member states. However, a new imperative, sustainable development, offers an opportunity for adapting the EU's common transport policy. This objective, agreed by the Gothenburg European Council, has to be achieved by integrating environmental considerations into Community policies, and shifting the balance between modes of transport lies at the heart of its strategy. The ambitious objective can only be fully achieved by 2020, but proposed measures are nonetheless a first essential step towards a sustainable transport system which will ideally be in place in 30 years' time, that is by 2040. In 1998, energy consumption in the transport sector was to blame for 28% of emissions of CO2, the leading greenhouse gas. According to the latest estimates, if nothing is done to reverse the traffic growth trend, CO2 emissions from transport can be expected to increase by around 50% to 1,113 billion tonnes by 2020, compared with the 739 billion tonnes recorded in 1990. Once again, road transport is the main culprit since it alone accounts for 84% of the CO2 emissions attributable to transport. Using alternative fuels and improving energy efficiency is thus both an ecological necessity and a technological challenge. At the same time greater efforts must be made to achieve a modal shift. Such a change cannot be achieved overnight, all the less so after over half a century of constant deterioration in favour of road. This has reached such a pitch that today rail freight services are facing marginalisation, with just 8% of market share, and with international goods trains struggling along at an average speed of 18km/h. Three possible options have emerged. The first approach would consist of focusing on road transport solely through pricing. 
This option would not be accompanied by complementary measures in the other modes of transport. In the short term it might curb the growth in road transport through the better loading ratio of goods vehicles and occupancy rates of passenger vehicles expected as a result of the increase in the price of transport. However, the lack of measures available to revitalise other modes of transport would make it impossible for more sustainable modes of transport to take up the baton. The second approach also concentrates on road transport pricing but is accompanied by measures to increase the efficiency of the other modes (better quality of services, logistics, technology). However, this approach does not include investment in new infrastructure, nor does it guarantee better regional cohesion. It could help to achieve greater uncoupling than the first approach, but road transport would keep the lion's share of the market and continue to concentrate on saturated arteries, despite being the most polluting of the modes. It is therefore not enough to guarantee the necessary shift of the balance. The third approach, which is not new, comprises a series of measures ranging from pricing to revitalising alternative modes of transport and targeting investment in the trans-European network. This integrated approach would allow the market shares of the other modes to return to their 1998 levels and thus make a shift of balance. It is far more ambitious than it looks, bearing in mind the historical imbalance in favour of roads for the last fifty years, but would achieve a marked break in the link between road transport growth and economic growth, without placing restrictions on the mobility of people and goods. | The need for transport is growing, despite technological developments. | e
id_2164 | Early Childhood Education. New Zealand's National Party spokesman on education, Dr Lockwood Smith, recently visited the US and Britain. Here he reports on the findings of his trip and what they could mean for New Zealand's education policy. Education To Be More was published last August. It was the report of the New Zealand Government's Early Childhood Care and Education Working Group. The report argued for enhanced equity of access and better funding for childcare and early childhood education institutions. Unquestionably, that's a real need; but since parents don't normally send children to pre-schools until the age of three, are we missing out on the most important years of all? A 13-year study of early childhood development at Harvard University has shown that, by the age of three, most children have the potential to understand about 1000 words - most of the language they will use in ordinary conversation for the rest of their lives. Furthermore, research has shown that while every child is born with a natural curiosity, it can be suppressed dramatically during the second and third years of life. Researchers claim that the human personality is formed during the first two years of life, and during the first three years children learn the basic skills they will use in all their later learning both at home and at school. Once over the age of three, children continue to expand on existing knowledge of the world. It is generally acknowledged that young people from poorer socio-economic backgrounds tend to do less well in our education system. That's observed not just in New Zealand, but also in Australia, Britain and America. In an attempt to overcome that educational under-achievement, a nationwide programme called Head-start was launched in the United States in 1965. A lot of money was poured into it. It took children into pre-school institutions at the age of three and was supposed to help the children of poorer families succeed in school. 
Despite substantial funding, results have been disappointing. It is thought that there are two explanations for this. First, the programme began too late. Many children who entered it at the age of three were already behind their peers in language and measurable intelligence. Second, the parents were not involved. At the end of each day, Head-start children returned to the same disadvantaged home environment. As a result of the growing research evidence of the importance of the first three years of a child's life and the disappointing results from Head-start, a pilot programme was launched in Missouri in the US that focused on parents as the child's first teachers. The Missouri programme was predicated on research showing that working with the family, rather than bypassing the parents, is the most effective way of helping children get off to the best possible start in life. The four-year pilot study included 380 families who were about to have their first child and who represented a cross-section of socio-economic status, age and family configurations. They included single-parent and two-parent families, families in which both parents worked, and families with either the mother or father at home. The programme involved trained parent educators visiting the parents' home and working with the parent, or parents, and the child. Information on child development, and guidance on things to look for and expect as the child grows were provided, plus guidance in fostering the child's intellectual, language, social and motor-skill development. Periodic check-ups of the child's educational and sensory development (hearing and vision) were made to detect possible handicaps that interfere with growth and development. Medical problems were referred to professionals. Parent-educators made personal visits to homes and monthly group meetings were held with other new parents to share experience and discuss topics of interest. 
Parent resource centres, located in school buildings, offered learning materials for families and facilitators for child care. At the age of three, the children who had been involved in the Missouri programme were evaluated alongside a cross-section of children selected from the same range of socio-economic backgrounds and family situations, and also a random sample of children that age. The results were phenomenal. By the age of three, the children in the programme were significantly more advanced in language development than their peers, had made greater strides in problem solving and other intellectual skills, and were further along in social development. In fact, the average child on the programme was performing at the level of the top 15 to 20 per cent of their peers in such things as auditory comprehension, verbal ability and language ability. Most important of all, the traditional measures of risk, such as parents' age and education, or whether they were a single parent, bore little or no relationship to the measures of achievement and language development. Children in the programme performed equally well regardless of socio-economic disadvantages. Child abuse was virtually eliminated. The one factor that was found to affect the child's development was family stress leading to a poor quality of parent-child interaction. That interaction was not necessarily bad in poorer families. These research findings are exciting. There is growing evidence in New Zealand that children from poorer socio-economic backgrounds are arriving at school less well developed and that our school system tends to perpetuate that disadvantage. The initiative outlined above could break that cycle of disadvantage. The concept of working with parents in their homes, or at their place of work, contrasts quite markedly with the report of the Early Childhood Care and Education Working Group. 
Their focus is on getting children and mothers access to childcare and institutionalised early childhood education. Education from the age of three to five is undoubtedly vital, but without a similar focus on parent education and on the vital importance of the first three years, some evidence indicates that it will not be enough to overcome educational inequity. | Missouri programme children of young, uneducated, single parents scored less highly on the tests. | c |
id_2165 | Early Childhood Education. New Zealand's National Party spokesman on education, Dr Lockwood Smith, recently visited the US and Britain. Here he reports on the findings of his trip and what they could mean for New Zealand's education policy. Education To Be More was published last August. It was the report of the New Zealand Government's Early Childhood Care and Education Working Group. The report argued for enhanced equity of access and better funding for childcare and early childhood education institutions. Unquestionably, that's a real need; but since parents don't normally send children to pre-schools until the age of three, are we missing out on the most important years of all? A 13-year study of early childhood development at Harvard University has shown that, by the age of three, most children have the potential to understand about 1000 words - most of the language they will use in ordinary conversation for the rest of their lives. Furthermore, research has shown that while every child is born with a natural curiosity, it can be suppressed dramatically during the second and third years of life. Researchers claim that the human personality is formed during the first two years of life, and during the first three years children learn the basic skills they will use in all their later learning both at home and at school. Once over the age of three, children continue to expand on existing knowledge of the world. It is generally acknowledged that young people from poorer socio-economic backgrounds tend to do less well in our education system. That's observed not just in New Zealand, but also in Australia, Britain and America. In an attempt to overcome that educational under-achievement, a nationwide programme called Head-start was launched in the United States in 1965. A lot of money was poured into it. It took children into pre-school institutions at the age of three and was supposed to help the children of poorer families succeed in school. 
Despite substantial funding, results have been disappointing. It is thought that there are two explanations for this. First, the programme began too late. Many children who entered it at the age of three were already behind their peers in language and measurable intelligence. Second, the parents were not involved. At the end of each day, Head-start children returned to the same disadvantaged home environment. As a result of the growing research evidence of the importance of the first three years of a child's life and the disappointing results from Head-start, a pilot programme was launched in Missouri in the US that focused on parents as the child's first teachers. The Missouri programme was predicated on research showing that working with the family, rather than bypassing the parents, is the most effective way of helping children get off to the best possible start in life. The four-year pilot study included 380 families who were about to have their first child and who represented a cross-section of socio-economic status, age and family configurations. They included single-parent and two-parent families, families in which both parents worked, and families with either the mother or father at home. The programme involved trained parent educators visiting the parents' home and working with the parent, or parents, and the child. Information on child development, and guidance on things to look for and expect as the child grows were provided, plus guidance in fostering the child's intellectual, language, social and motor-skill development. Periodic check-ups of the child's educational and sensory development (hearing and vision) were made to detect possible handicaps that interfere with growth and development. Medical problems were referred to professionals. Parent-educators made personal visits to homes and monthly group meetings were held with other new parents to share experience and discuss topics of interest. 
Parent resource centres, located in school buildings, offered learning materials for families and facilitators for child care. At the age of three, the children who had been involved in the Missouri programme were evaluated alongside a cross-section of children selected from the same range of socio-economic backgrounds and family situations, and also a random sample of children that age. The results were phenomenal. By the age of three, the children in the programme were significantly more advanced in language development than their peers, had made greater strides in problem solving and other intellectual skills, and were further along in social development. In fact, the average child on the programme was performing at the level of the top 15 to 20 per cent of their peers in such things as auditory comprehension, verbal ability and language ability. Most important of all, the traditional measures of risk, such as parents' age and education, or whether they were a single parent, bore little or no relationship to the measures of achievement and language development. Children in the programme performed equally well regardless of socio-economic disadvantages. Child abuse was virtually eliminated. The one factor that was found to affect the child's development was family stress leading to a poor quality of parent-child interaction. That interaction was not necessarily bad in poorer families. These research findings are exciting. There is growing evidence in New Zealand that children from poorer socio-economic backgrounds are arriving at school less well developed and that our school system tends to perpetuate that disadvantage. The initiative outlined above could break that cycle of disadvantage. The concept of working with parents in their homes, or at their place of work, contrasts quite markedly with the report of the Early Childhood Care and Education Working Group. 
Their focus is on getting children and mothers access to childcare and institutionalised early childhood education. Education from the age of three to five is undoubtedly vital, but without a similar focus on parent education and on the vital importance of the first three years, some evidence indicates that it will not be enough to overcome educational inequity. | Most Missouri programme three-year-olds scored highly in areas such as listening speaking, reasoning and interacting with others. | e |
id_2166 | Early Childhood Education. New Zealand's National Party spokesman on education, Dr Lockwood Smith, recently visited the US and Britain. Here he reports on the findings of his trip and what they could mean for New Zealand's education policy. Education To Be More was published last August. It was the report of the New Zealand Government's Early Childhood Care and Education Working Group. The report argued for enhanced equity of access and better funding for childcare and early childhood education institutions. Unquestionably, that's a real need; but since parents don't normally send children to pre-schools until the age of three, are we missing out on the most important years of all? A 13-year study of early childhood development at Harvard University has shown that, by the age of three, most children have the potential to understand about 1000 words - most of the language they will use in ordinary conversation for the rest of their lives. Furthermore, research has shown that while every child is born with a natural curiosity, it can be suppressed dramatically during the second and third years of life. Researchers claim that the human personality is formed during the first two years of life, and during the first three years children learn the basic skills they will use in all their later learning both at home and at school. Once over the age of three, children continue to expand on existing knowledge of the world. It is generally acknowledged that young people from poorer socio-economic backgrounds tend to do less well in our education system. That's observed not just in New Zealand, but also in Australia, Britain and America. In an attempt to overcome that educational under-achievement, a nationwide programme called Head-start was launched in the United States in 1965. A lot of money was poured into it. It took children into pre-school institutions at the age of three and was supposed to help the children of poorer families succeed in school. 
Despite substantial funding, results have been disappointing. It is thought that there are two explanations for this. First, the programme began too late. Many children who entered it at the age of three were already behind their peers in language and measurable intelligence. Second, the parents were not involved. At the end of each day, Head-start children returned to the same disadvantaged home environment. As a result of the growing research evidence of the importance of the first three years of a child's life and the disappointing results from Head-start, a pilot programme was launched in Missouri in the US that focused on parents as the child's first teachers. The Missouri programme was predicated on research showing that working with the family, rather than bypassing the parents, is the most effective way of helping children get off to the best possible start in life. The four-year pilot study included 380 families who were about to have their first child and who represented a cross-section of socio-economic status, age and family configurations. They included single-parent and two-parent families, families in which both parents worked, and families with either the mother or father at home. The programme involved trained parent educators visiting the parents' home and working with the parent, or parents, and the child. Information on child development, and guidance on things to look for and expect as the child grows were provided, plus guidance in fostering the child's intellectual, language, social and motor-skill development. Periodic check-ups of the child's educational and sensory development (hearing and vision) were made to detect possible handicaps that interfere with growth and development. Medical problems were referred to professionals. Parent-educators made personal visits to homes and monthly group meetings were held with other new parents to share experience and discuss topics of interest. 
Parent resource centres, located in school buildings, offered learning materials for families and facilitators for child care. At the age of three, the children who had been involved in the Missouri programme were evaluated alongside a cross-section of children selected from the same range of socio-economic backgrounds and family situations, and also a random sample of children that age. The results were phenomenal. By the age of three, the children in the programme were significantly more advanced in language development than their peers, had made greater strides in problem solving and other intellectual skills, and were further along in social development. In fact, the average child on the programme was performing at the level of the top 15 to 20 per cent of their peers in such things as auditory comprehension, verbal ability and language ability. Most important of all, the traditional measures of risk, such as parents' age and education, or whether they were a single parent, bore little or no relationship to the measures of achievement and language development. Children in the programme performed equally well regardless of socio-economic disadvantages. Child abuse was virtually eliminated. The one factor that was found to affect the child's development was family stress leading to a poor quality of parent-child interaction. That interaction was not necessarily bad in poorer families. These research findings are exciting. There is growing evidence in New Zealand that children from poorer socio-economic backgrounds are arriving at school less well developed and that our school system tends to perpetuate that disadvantage. The initiative outlined above could break that cycle of disadvantage. The concept of working with parents in their homes, or at their place of work, contrasts quite markedly with the report of the Early Childhood Care and Education Working Group. 
Their focus is on getting children and mothers access to childcare and institutionalised early childhood education. Education from the age of three to five is undoubtedly vital, but without a similar focus on parent education and on the vital importance of the first three years, some evidence indicates that it will not be enough to overcome educational inequity. | The richer families in the Missouri programme had higher stress levels. | n |
id_2167 | Early Writing Systems Scholars agree that writing originated somewhere in the Middle East, probably Mesopotamia, around the fourth millennium B. C. E. It is from the great libraries and word-hoards of these ancient lands that the first texts emerged. They were written on damp clay tablets with a wedged (or V-shaped) stick; since the Latin word for wedge is cunea, the texts are called cuneiform. The clay tablets usually were not fired; sun drying was probably reckoned enough to preserve the text for as long as it was being used. Fortunately, however, many tablets survived because they were accidentally fired when the buildings they were stored in burned. Cuneiform writing lasted for some 3,000 years, in a vast line of succession that ran through Sumer, Akkad, Assyria, Nineveh, and Babylon, and preserved for us fifteen languages in an area represented by modern-day Iraq, Syria, and western Iran. The oldest cuneiform texts recorded the transactions of tax collectors and merchants, the receipts and bills of sale of an urban society. They had to do with things like grain, goats, and real estate. Later, Babylonian scribes recorded the laws and kept other kinds of records. Knowledge conferred power. As a result, the scribes were assigned their own goddess, Nisaba, later replaced by the god Nabu of Borsippa, whose symbol is neither weapon nor dragon but something far more fearsome, the cuneiform stick. Cuneiform texts on science, astronomy, medicine, and mathematics abound, some offering astoundingly precise data. One tablet records the speed of the Moon over 248 days; another documents an early sighting of Halley's Comet, from September 22 to September 28, 164 B. C. E. More esoteric texts attempt to explain old Babylonian customs, such as the procedure for curing someone who is ill, which included rubbing tar and gypsum on the sick person's door and drawing a design at the foot of the person's bed. 
What is clear from the vast body of texts (some 20,000 tablets were found in King Ashurbanipal's library at Nineveh) is that scribes took pride in their writing and knowledge. The foremost cuneiform text, the Babylonian Epic of Gilgamesh, deals with humankind's attempts to conquer time. In it, Gilgamesh, king and warrior, is crushed by the death of his best friend and so sets out on adventures that prefigure mythical heroes of ancient Greek legends such as Hercules. His goal is not just to survive his ordeals but to make sense of this life. Remarkably, versions of Gilgamesh span 1,500 years, between 2100 B. C. E. and 600 B. C. E., making the story the epic of an entire civilization. The ancient Egyptians invented a different way of writing and a new substance to write on - papyrus, a precursor of paper, made from a wetland plant. The Greeks had a special name for this writing: hieroglyphic, literally "sacred writing." This, they thought, was language fit for the gods, which explains why it was carved on walls of pyramids and other religious structures. Perhaps hieroglyphics are Egypt's great contribution to the history of writing: hieroglyphic writing, in use from 3100 B. C. E. until 394 C. E., resulted in the creation of texts that were fine art as well as communication. Egypt gave us the tradition of the scribe not just as an educated person but as artist and calligrapher. Scholars have detected some 6,000 separate hieroglyphic characters in use over the history of Egyptian writing, but it appears that never more than a thousand were in use during any one period. It still seems a lot to recall, but what was lost in efficiency was more than made up for in the beauty and richness of the texts. Writing was meant to impress the eye with the vastness of creation itself. Each symbol or glyph - the flowering reed (pronounced like "i"), the owl ("m"), the quail chick ("w"), etcetera - was a tiny work of art. Manuscripts were compiled with an eye to the overall design. 
Egyptologists have noticed that the glyphs that constitute individual words were sometimes shuffled to make the text more pleasing to the eye with little regard for sound or sense. | Egyptian hieroglyphics were associated with buildings that had a religious function. | e |
id_2168 | Early Writing Systems Scholars agree that writing originated somewhere in the Middle East, probably Mesopotamia, around the fourth millennium B. C. E. It is from the great libraries and word-hoards of these ancient lands that the first texts emerged. They were written on damp clay tablets with a wedged (or V-shaped) stick; since the Latin word for wedge is cuneus, the texts are called cuneiform. The clay tablets usually were not fired; sun drying was probably reckoned enough to preserve the text for as long as it was being used. Fortunately, however, many tablets survived because they were accidentally fired when the buildings they were stored in burned. Cuneiform writing lasted for some 3,000 years, in a vast line of succession that ran through Sumer, Akkad, Assyria, Nineveh, and Babylon, and preserved for us fifteen languages in an area represented by modern-day Iraq, Syria, and western Iran. The oldest cuneiform texts recorded the transactions of tax collectors and merchants, the receipts and bills of sale of an urban society. They had to do with things like grain, goats, and real estate. Later, Babylonian scribes recorded the laws and kept other kinds of records. Knowledge conferred power. As a result, the scribes were assigned their own goddess, Nisaba, later replaced by the god Nabu of Borsippa, whose symbol is neither weapon nor dragon but something far more fearsome, the cuneiform stick. Cuneiform texts on science, astronomy, medicine, and mathematics abound, some offering astoundingly precise data. One tablet records the speed of the Moon over 248 days; another documents an early sighting of Halley's Comet, from September 22 to September 28, 164 B. C. E. More esoteric texts attempt to explain old Babylonian customs, such as the procedure for curing someone who is ill, which included rubbing tar and gypsum on the sick person's door and drawing a design at the foot of the person's bed.
What is clear from the vast body of texts (some 20,000 tablets were found in King Ashurbanipal's library at Nineveh) is that scribes took pride in their writing and knowledge. The foremost cuneiform text, the Babylonian Epic of Gilgamesh, deals with humankind's attempts to conquer time. In it, Gilgamesh, king and warrior, is crushed by the death of his best friend and so sets out on adventures that prefigure mythical heroes of ancient Greek legends such as Hercules. His goal is not just to survive his ordeals but to make sense of this life. Remarkably, versions of Gilgamesh span 1,500 years, between 2100 B. C. E. and 600 B. C. E. , making the story the epic of an entire civilization. The ancient Egyptians invented a different way of writing and a new substance to write on: papyrus, a precursor of paper, made from a wetland plant. The Greeks had a special name for this writing: hieroglyphic, literally "sacred writing." This, they thought, was language fit for the gods, which explains why it was carved on walls of pyramids and other religious structures. Perhaps hieroglyphics are Egypt's great contribution to the history of writing: hieroglyphic writing, in use from 3100 B. C. E. until 394 C. E. , resulted in the creation of texts that were fine art as well as communication. Egypt gave us the tradition of the scribe not just as an educated person but as artist and calligrapher. Scholars have detected some 6,000 separate hieroglyphic characters in use over the history of Egyptian writing, but it appears that never more than a thousand were in use during any one period. It still seems a lot to recall, but what was lost in efficiency was more than made up for in the beauty and richness of the texts. Writing was meant to impress the eye with the vastness of creation itself. Each symbol or glyph-the flowering reed (pronounced like "i"), the owl ("m"), the quail chick ("w"), etcetera-was a tiny work of art. Manuscripts were compiled with an eye to the overall design.
Egyptologists have noticed that the glyphs that constitute individual words were sometimes shuffled to make the text more pleasing to the eye with little regard for sound or sense. | Egyptian hieroglyphics were used in Egypt for many centuries. | e |
id_2169 | Early Writing Systems Scholars agree that writing originated somewhere in the Middle East, probably Mesopotamia, around the fourth millennium B. C. E. It is from the great libraries and word-hoards of these ancient lands that the first texts emerged. They were written on damp clay tablets with a wedged (or V-shaped) stick; since the Latin word for wedge is cuneus, the texts are called cuneiform. The clay tablets usually were not fired; sun drying was probably reckoned enough to preserve the text for as long as it was being used. Fortunately, however, many tablets survived because they were accidentally fired when the buildings they were stored in burned. Cuneiform writing lasted for some 3,000 years, in a vast line of succession that ran through Sumer, Akkad, Assyria, Nineveh, and Babylon, and preserved for us fifteen languages in an area represented by modern-day Iraq, Syria, and western Iran. The oldest cuneiform texts recorded the transactions of tax collectors and merchants, the receipts and bills of sale of an urban society. They had to do with things like grain, goats, and real estate. Later, Babylonian scribes recorded the laws and kept other kinds of records. Knowledge conferred power. As a result, the scribes were assigned their own goddess, Nisaba, later replaced by the god Nabu of Borsippa, whose symbol is neither weapon nor dragon but something far more fearsome, the cuneiform stick. Cuneiform texts on science, astronomy, medicine, and mathematics abound, some offering astoundingly precise data. One tablet records the speed of the Moon over 248 days; another documents an early sighting of Halley's Comet, from September 22 to September 28, 164 B. C. E. More esoteric texts attempt to explain old Babylonian customs, such as the procedure for curing someone who is ill, which included rubbing tar and gypsum on the sick person's door and drawing a design at the foot of the person's bed.
What is clear from the vast body of texts (some 20,000 tablets were found in King Ashurbanipal's library at Nineveh) is that scribes took pride in their writing and knowledge. The foremost cuneiform text, the Babylonian Epic of Gilgamesh, deals with humankind's attempts to conquer time. In it, Gilgamesh, king and warrior, is crushed by the death of his best friend and so sets out on adventures that prefigure mythical heroes of ancient Greek legends such as Hercules. His goal is not just to survive his ordeals but to make sense of this life. Remarkably, versions of Gilgamesh span 1,500 years, between 2100 B. C. E. and 600 B. C. E. , making the story the epic of an entire civilization. The ancient Egyptians invented a different way of writing and a new substance to write on: papyrus, a precursor of paper, made from a wetland plant. The Greeks had a special name for this writing: hieroglyphic, literally "sacred writing." This, they thought, was language fit for the gods, which explains why it was carved on walls of pyramids and other religious structures. Perhaps hieroglyphics are Egypt's great contribution to the history of writing: hieroglyphic writing, in use from 3100 B. C. E. until 394 C. E. , resulted in the creation of texts that were fine art as well as communication. Egypt gave us the tradition of the scribe not just as an educated person but as artist and calligrapher. Scholars have detected some 6,000 separate hieroglyphic characters in use over the history of Egyptian writing, but it appears that never more than a thousand were in use during any one period. It still seems a lot to recall, but what was lost in efficiency was more than made up for in the beauty and richness of the texts. Writing was meant to impress the eye with the vastness of creation itself. Each symbol or glyph-the flowering reed (pronounced like "i"), the owl ("m"), the quail chick ("w"), etcetera-was a tiny work of art. Manuscripts were compiled with an eye to the overall design.
Egyptologists have noticed that the glyphs that constitute individual words were sometimes shuffled to make the text more pleasing to the eye with little regard for sound or sense. | Egyptian hieroglyphics were sometimes written on material made from plants. | e |
id_2170 | Early Writing Systems Scholars agree that writing originated somewhere in the Middle East, probably Mesopotamia, around the fourth millennium B. C. E. It is from the great libraries and word-hoards of these ancient lands that the first texts emerged. They were written on damp clay tablets with a wedged (or V-shaped) stick; since the Latin word for wedge is cuneus, the texts are called cuneiform. The clay tablets usually were not fired; sun drying was probably reckoned enough to preserve the text for as long as it was being used. Fortunately, however, many tablets survived because they were accidentally fired when the buildings they were stored in burned. Cuneiform writing lasted for some 3,000 years, in a vast line of succession that ran through Sumer, Akkad, Assyria, Nineveh, and Babylon, and preserved for us fifteen languages in an area represented by modern-day Iraq, Syria, and western Iran. The oldest cuneiform texts recorded the transactions of tax collectors and merchants, the receipts and bills of sale of an urban society. They had to do with things like grain, goats, and real estate. Later, Babylonian scribes recorded the laws and kept other kinds of records. Knowledge conferred power. As a result, the scribes were assigned their own goddess, Nisaba, later replaced by the god Nabu of Borsippa, whose symbol is neither weapon nor dragon but something far more fearsome, the cuneiform stick. Cuneiform texts on science, astronomy, medicine, and mathematics abound, some offering astoundingly precise data. One tablet records the speed of the Moon over 248 days; another documents an early sighting of Halley's Comet, from September 22 to September 28, 164 B. C. E. More esoteric texts attempt to explain old Babylonian customs, such as the procedure for curing someone who is ill, which included rubbing tar and gypsum on the sick person's door and drawing a design at the foot of the person's bed.
What is clear from the vast body of texts (some 20,000 tablets were found in King Ashurbanipal's library at Nineveh) is that scribes took pride in their writing and knowledge. The foremost cuneiform text, the Babylonian Epic of Gilgamesh, deals with humankind's attempts to conquer time. In it, Gilgamesh, king and warrior, is crushed by the death of his best friend and so sets out on adventures that prefigure mythical heroes of ancient Greek legends such as Hercules. His goal is not just to survive his ordeals but to make sense of this life. Remarkably, versions of Gilgamesh span 1,500 years, between 2100 B. C. E. and 600 B. C. E. , making the story the epic of an entire civilization. The ancient Egyptians invented a different way of writing and a new substance to write on: papyrus, a precursor of paper, made from a wetland plant. The Greeks had a special name for this writing: hieroglyphic, literally "sacred writing." This, they thought, was language fit for the gods, which explains why it was carved on walls of pyramids and other religious structures. Perhaps hieroglyphics are Egypt's great contribution to the history of writing: hieroglyphic writing, in use from 3100 B. C. E. until 394 C. E. , resulted in the creation of texts that were fine art as well as communication. Egypt gave us the tradition of the scribe not just as an educated person but as artist and calligrapher. Scholars have detected some 6,000 separate hieroglyphic characters in use over the history of Egyptian writing, but it appears that never more than a thousand were in use during any one period. It still seems a lot to recall, but what was lost in efficiency was more than made up for in the beauty and richness of the texts. Writing was meant to impress the eye with the vastness of creation itself. Each symbol or glyph-the flowering reed (pronounced like "i"), the owl ("m"), the quail chick ("w"), etcetera-was a tiny work of art. Manuscripts were compiled with an eye to the overall design.
Egyptologists have noticed that the glyphs that constitute individual words were sometimes shuffled to make the text more pleasing to the eye with little regard for sound or sense. | Egyptian hieroglyphics were believed to be a gift to humans from the gods. | c |
id_2171 | Eating organic foods will not make you healthier, say researchers at Stanford University. A meta-analysis of over two hundred studies assessing the health gains of organic over non-organic foods has failed to identify any health benefits of eating organic foods over non-organic foods, even though organic foods were thirty percent less likely to contain pesticides. Organic and non-organic fruit and vegetables were shown to have similar amounts of vitamins and minerals; milk was shown to have the same amount of fat and protein. Critics, however, say that more research is required, and until then the effect of organic foods remains inconclusive. Similarly, it is noted that none of the studies ran for longer than 2 years. | Over two hundred studies were assessed | e
id_2172 | Eating organic foods will not make you healthier, say researchers at Stanford University. A meta-analysis of over two hundred studies assessing the health gains of organic over non-organic foods has failed to identify any health benefits of eating organic foods over non-organic foods, even though organic foods were thirty percent less likely to contain pesticides. Organic and non-organic fruit and vegetables were shown to have similar amounts of vitamins and minerals; milk was shown to have the same amount of fat and protein. Critics, however, say that more research is required, and until then the effect of organic foods remains inconclusive. Similarly, it is noted that none of the studies ran for longer than 2 years. | Organic and non-organic fruit has the same amount of vitamins. | e
id_2173 | Eating organic foods will not make you healthier, say researchers at Stanford University. A meta-analysis of over two hundred studies assessing the health gains of organic over non-organic foods has failed to identify any health benefits of eating organic foods over non-organic foods, even though organic foods were thirty percent less likely to contain pesticides. Organic and non-organic fruit and vegetables were shown to have similar amounts of vitamins and minerals; milk was shown to have the same amount of fat and protein. Critics, however, say that more research is required, and until then the effect of organic foods remains inconclusive. Similarly, it is noted that none of the studies ran for longer than 2 years. | Organic and non-organic meat had the same amount of protein. | c
id_2174 | Eating organic foods will not make you healthier, say researchers at Stanford University. A meta-analysis of over two hundred studies assessing the health gains of organic over non-organic foods has failed to identify any health benefits of eating organic foods over non-organic foods, even though organic foods were thirty percent less likely to contain pesticides. Organic and non-organic fruit and vegetables were shown to have similar amounts of vitamins and minerals; milk was shown to have the same amount of fat and protein. Critics, however, say that more research is required, and until then the effect of organic foods remains inconclusive. Similarly, it is noted that none of the studies ran for longer than 2 years. | Organic and non-organic milk had the same amount of fat. | e
id_2175 | Ecotourism can be defined as responsible travel to natural areas that aims to both conserve the environment and bring economic opportunities to local people. Ecotourism provides an alternative to environmentally damaging industries such as logging and mining, while also stimulating the local economy. However, its dependency on foreign investment leads to one of the main criticisms of the industry: that the profits generated from ecotourism do not benefit the local economy and work force. Furthermore, while the ideals behind ecotourism are unobjectionable, the industry is highly susceptible to greenwashing whereby a false impression of environmental friendliness is given. More radical opposition comes from those critics who believe that ecotourism is inherently flawed because travel that uses fossil fuels is damaging to the environment. Despite these voices of dissent, ecotourism has become the fastest-growing sector of the tourism industry, growing at an annual rate of twenty to thirty percent. Ironically, ecotourism's very success may counteract its environmental goals, as high levels of visitors, even careful ones, inevitably damage the ecosystem. | While ecotourism's environmental benefits are disputed, there is consensus that it benefits local people economically. | c
id_2176 | Ecotourism can be defined as responsible travel to natural areas that aims to both conserve the environment and bring economic opportunities to local people. Ecotourism provides an alternative to environmentally damaging industries such as logging and mining, while also stimulating the local economy. However, its dependency on foreign investment leads to one of the main criticisms of the industry: that the profits generated from ecotourism do not benefit the local economy and work force. Furthermore, while the ideals behind ecotourism are unobjectionable, the industry is highly susceptible to greenwashing whereby a false impression of environmental friendliness is given. More radical opposition comes from those critics who believe that ecotourism is inherently flawed because travel that uses fossil fuels is damaging to the environment. Despite these voices of dissent, ecotourism has become the fastest-growing sector of the tourism industry, growing at an annual rate of twenty to thirty percent. Ironically, ecotourism's very success may counteract its environmental goals, as high levels of visitors, even careful ones, inevitably damage the ecosystem. | The passage dismisses the ecotourism industry as an example of greenwashing. | c
id_2177 | Ecotourism can be defined as responsible travel to natural areas that aims to both conserve the environment and bring economic opportunities to local people. Ecotourism provides an alternative to environmentally damaging industries such as logging and mining, while also stimulating the local economy. However, its dependency on foreign investment leads to one of the main criticisms of the industry: that the profits generated from ecotourism do not benefit the local economy and work force. Furthermore, while the ideals behind ecotourism are unobjectionable, the industry is highly susceptible to greenwashing whereby a false impression of environmental friendliness is given. More radical opposition comes from those critics who believe that ecotourism is inherently flawed because travel that uses fossil fuels is damaging to the environment. Despite these voices of dissent, ecotourism has become the fastest-growing sector of the tourism industry, growing at an annual rate of twenty to thirty percent. Ironically, ecotourism's very success may counteract its environmental goals, as high levels of visitors, even careful ones, inevitably damage the ecosystem. | The long-term environmental credentials of ecotourism are debatable. | e
id_2178 | Ecotourism can be defined as responsible travel to natural areas that aims to both conserve the environment and bring economic opportunities to local people. Ecotourism provides an alternative to environmentally damaging industries such as logging and mining, while also stimulating the local economy. However, its dependency on foreign investment leads to one of the main criticisms of the industry: that the profits generated from ecotourism do not benefit the local economy and work force. Furthermore, while the ideals behind ecotourism are unobjectionable, the industry is highly susceptible to greenwashing whereby a false impression of environmental friendliness is given. More radical opposition comes from those critics who believe that ecotourism is inherently flawed because travel that uses fossil fuels is damaging to the environment. Despite these voices of dissent, ecotourism has become the fastest-growing sector of the tourism industry, growing at an annual rate of twenty to thirty percent. Ironically, ecotourism's very success may counteract its environmental goals, as high levels of visitors, even careful ones, inevitably damage the ecosystem. | Ecotourism strives to profit from a nation's natural resources. | n
id_2179 | Ecotourism can be defined as responsible travel to natural areas that aims to both conserve the environment and bring economic opportunities to local people. Ecotourism provides an alternative to environmentally damaging industries such as logging and mining, while also stimulating the local economy. However, its dependency on foreign investment leads to one of the main criticisms of the industry: that the profits generated from ecotourism do not benefit the local economy and work force. Furthermore, while the ideals behind ecotourism are unobjectionable, the industry is highly susceptible to greenwashing whereby a false impression of environmental friendliness is given. More radical opposition comes from those critics who believe that ecotourism is inherently flawed because travel that uses fossil fuels is damaging to the environment. Despite these voices of dissent, ecotourism has become the fastest-growing sector of the tourism industry, growing at an annual rate of twenty to thirty percent. Ironically, ecotourism's very success may counteract its environmental goals, as high levels of visitors, even careful ones, inevitably damage the ecosystem. | Ecotourism's critics believe that air travel contributes to global warming. | n
id_2180 | Edgar Allan Poe was born in Boston, Massachusetts, the son of actress Elizabeth Arnold Hopkins Poe and actor David Poe, Jr. His father abandoned the family in 1810, and his mother died of tuberculosis when he was only two, so Poe was taken into the home of John Allan, a successful tobacco merchant in Richmond, Virginia. Although his middle name is often misspelled as Allen, it is actually Allan after this family. After attending the Misses Duborg boarding school in London and Manor School in Stoke Newington, London, England, Poe moved back to Richmond, Virginia, with the Allans in 1820. Poe registered at the University of Virginia in 1826, but only stayed there for one year. | Poe's mother died before his father. | n
id_2181 | Edgar Allan Poe was born in Boston, Massachusetts, the son of actress Elizabeth Arnold Hopkins Poe and actor David Poe, Jr. His father abandoned the family in 1810, and his mother died of tuberculosis when he was only two, so Poe was taken into the home of John Allan, a successful tobacco merchant in Richmond, Virginia. Although his middle name is often misspelled as Allen, it is actually Allan after this family. After attending the Misses Duborg boarding school in London and Manor School in Stoke Newington, London, England, Poe moved back to Richmond, Virginia, with the Allans in 1820. Poe registered at the University of Virginia in 1826, but only stayed there for one year. | Poe never gained a university degree. | n |
id_2182 | Edgar Allan Poe was born in Boston, Massachusetts, the son of actress Elizabeth Arnold Hopkins Poe and actor David Poe, Jr. His father abandoned the family in 1810, and his mother died of tuberculosis when he was only two, so Poe was taken into the home of John Allan, a successful tobacco merchant in Richmond, Virginia. Although his middle name is often misspelled as Allen, it is actually Allan after this family. After attending the Misses Duborg boarding school in London and Manor School in Stoke Newington, London, England, Poe moved back to Richmond, Virginia, with the Allans in 1820. Poe registered at the University of Virginia in 1826, but only stayed there for one year. | Edgar Allan Poe was a famous American author and poet. | n
id_2183 | Edgar Allan Poe was born in Boston, Massachusetts, the son of actress Elizabeth Arnold Hopkins Poe and actor David Poe, Jr. His father abandoned the family in 1810, and his mother died of tuberculosis when he was only two, so Poe was taken into the home of John Allan, a successful tobacco merchant in Richmond, Virginia. Although his middle name is often misspelled as Allen, it is actually Allan after this family. After attending the Misses Duborg boarding school in London and Manor School in Stoke Newington, London, England, Poe moved back to Richmond, Virginia, with the Allans in 1820. Poe registered at the University of Virginia in 1826, but only stayed there for one year. | Poe was born in Richmond, Virginia. | e
id_2184 | Edgar Allan Poe was born in Boston, Massachusetts, the son of actress Elizabeth Arnold Hopkins Poe and actor David Poe, Jr. His father abandoned the family in 1810, and his mother died of tuberculosis when he was only two, so Poe was taken into the home of John Allan, a successful tobacco merchant in Richmond, Virginia. Although his middle name is often misspelled as Allen, it is actually Allan after this family. After attending the Misses Duborg boarding school in London and Manor School in Stoke Newington, London, England, Poe moved back to Richmond, Virginia, with the Allans in 1820. Poe registered at the University of Virginia in 1826, but only stayed there for one year. | Poe spent part of his life in England. | c |
id_2185 | Effort and Science to Win In Mexico, the Medicine Direction and Applied Sciences of the National Commission of Deporte analyses all aspects of sports science from the role of the auditory system in sporting achievement to the power of the mind and its role in the ability to win. Everything, it seems, is open to scrutiny. Recently, the focus has been on evaluating the visual acuity of cyclists and long distance runners, but they also focus on the more traditional areas of sports research, among them psychology, nutrition, anthropology, biochemistry and odontology. From budding child athletes as young as 9 to the more mature-aged sportsperson, the facility at Deporte has attracted some of Mexico's most famous sporting and Olympic hopefuls. The study of elite athletes is now more scientific than ever, says doctor Francisco Javier Squares; after each competition, athletes are exposed to vigorous medical examinations and follow-up training in order to help us arrive at a program that is tailor-made. The modern athlete has become big business; no longer is there a one-size-fits-all approach. For example, in the past two people both 1.70 meters tall and weighing 70 kilograms would have been given the same program of athletic conditioning; now this idea is obsolete. It may be that the first individual has 35 kgs of muscle and 15 kgs of fat and the other person, although the same height and weight, may have 30 kgs of muscle and 20 kgs of fat. Through detailed scientific evaluation here at our facility in Deporte, says Squares, ... we are able to construct a very specific training programme for each individual. Whereas many countries in the world focus on the elevation of the glorious champion, the Mexican Olympic team takes a slightly different approach. Psychologically speaking, an athlete must bring to his endeavour a healthy dose of humility.
As Squares said, When an athlete wins for Mexico, it is always as a result of a combined team effort with many people operating behind the scenes to realise the sporting achievement. When an athlete stands on the dais, it is because of great effort on the part of many. As is often the case in some poorer countries, sportsmen and women are stifled in their development due to budgetary constraints. However, this has not been a factor for consideration with the team in Mexico. The Mexican government has allocated a substantial sum of money for the provision of the latest equipment and laboratories for sports research. In fact, the quality of Mexico's facilities puts them on a par with countries like Italy and Germany in terms of access to resources. One example of sophisticated equipment used at the Mexican facility is the hyperbaric chamber. This apparatus is used to enhance oxygen recovery after a vigorous physical workout. Says Squares, When you breathe the air while inside a hyperbaric chamber, the natural state of the oxygen does not change. Green plants produced the oxygen; modern technology just increases the air pressure. This does not change the molecular composition of oxygen. Increased pressure just allows oxygen to get into tissues better. Due to our purchase of the hyperbaric chamber, athletes are able to recover from an intense workout in a much shorter space of time. We typically use the chamber for sessions of 45 to 60 minutes daily or three times per week. When pushed to the limit, the true indicator of fitness is not how hard the heart operates, but how quickly it can recover after an extreme workout. Therefore, another focus area of study for the team in Mexico has been the endurance of the heart. To measure this recovery rate, an electroencephalograph (EEG) is used. The EEG enables doctors to monitor the brainwave activity from sensors placed on the scalp.
Athletes exert intense effort for a sustained period, after which they are given time to rest and recover. During these periods between intense physical exertion and recovery, doctors are able to monitor any weaknesses in the way the heart responds. The EEG has had a big impact upon our ability to measure the muscular endurance of the heart. In 1796, the life expectancy of a human being was between 25 and 36 years; in 1886 that number basically doubled to between 45 and 50. In 1996, the life expectancy of an average Mexican stands at around 75 years. People are living longer and this is due in large part to the advances of modern science. It is not all sophisticated medical equipment that is playing a part; basic advances in engineering are also greatly assisting. Take, for example, a professional tennis player. In the past, most tennis players' shoes were constructed with fabric and a solid rubber sole. These shoes were of poor construction and resulted in hip and foot injuries. Today the technology of shoe construction has radically changed. Now some shoes are injected with silicone and made of a more comfortable, ergonomic construction. This has helped not only the elite but also the recreational sportsperson and thus helps in the preservation of the human body. | Mexican athletes have the support of their government. | e
id_2186 | Effort and Science to Win In Mexico, the Medicine Direction and Applied Sciences of the National Commission of Deporte analyses all aspects of sports science from the role of the auditory system in sporting achievement to the power of the mind and its role in the ability to win. Everything, it seems, is open to scrutiny. Recently, the focus has been on evaluating the visual acuity of cyclists and long distance runners, but they also focus on the more traditional areas of sports research, among them psychology, nutrition, anthropology, biochemistry and odontology. From budding child athletes as young as 9 to the more mature-aged sportsperson, the facility at Deporte has attracted some of Mexico's most famous sporting and Olympic hopefuls. The study of elite athletes is now more scientific than ever, says doctor Francisco Javier Squares; after each competition, athletes are exposed to vigorous medical examinations and follow-up training in order to help us arrive at a program that is tailor-made. The modern athlete has become big business; no longer is there a one-size-fits-all approach. For example, in the past two people both 1.70 meters tall and weighing 70 kilograms would have been given the same program of athletic conditioning; now this idea is obsolete. It may be that the first individual has 35 kgs of muscle and 15 kgs of fat and the other person, although the same height and weight, may have 30 kgs of muscle and 20 kgs of fat. Through detailed scientific evaluation here at our facility in Deporte, says Squares, ... we are able to construct a very specific training programme for each individual. Whereas many countries in the world focus on the elevation of the glorious champion, the Mexican Olympic team takes a slightly different approach. Psychologically speaking, an athlete must bring to his endeavour a healthy dose of humility.
As Squares said, When an athlete wins for Mexico, it is always as a result of a combined team effort with many people operating behind the scenes to realise the sporting achievement. When an athlete stands on the dais, it is because of great effort on the part of many. As is often the case in some poorer countries, sportsmen and women are stifled in their development due to budgetary constraints. However, this has not been a factor for consideration with the team in Mexico. The Mexican government has allocated a substantial sum of money for the provision of the latest equipment and laboratories for sports research. In fact, the quality of Mexico's facilities puts them on a par with countries like Italy and Germany in terms of access to resources. One example of sophisticated equipment used at the Mexican facility is the hyperbaric chamber. This apparatus is used to enhance oxygen recovery after a vigorous physical workout. Says Squares, When you breathe the air while inside a hyperbaric chamber the natural state of the oxygen does not change. Green plants produced the oxygen, modern technology just increases the air pressure. This does not change the molecular composition of oxygen. Increased pressure just allows oxygen to get into tissues better. Due to our purchase of the hyperbaric chamber, athletes are able to recover from an intense workout in a much shorter space of time. We typically use the chamber for sessions of 45 to 60 minutes daily or three times per week. When pushed to the limit, the true indicator of fitness is not how hard the heart operates, but how quickly it can recover after an extreme workout. Therefore, another focus area of study for the team in Mexico has been the endurance of the heart. To measure this recovery rate, an electroencephalograph (EEG) is used. The EEG enables doctors to monitor the brainwave activity from sensors placed on the scalp. 
Athletes exert intense effort for a sustained period after which they are given time to rest and recover. During these periods between intense physical exertion and recovery, doctors are able to monitor any weaknesses in the way the heart responds. The EEG has had a big impact upon our ability to measure the muscular endurance of the heart. In 1796, the life expectancy of a human being was between 25 and 36 years; in 1886 that number basically doubled to between 45 and 50. In 1996, the life expectancy of an average Mexican stood at around 75 years. People are living longer and this is due in large part to the advances of modern science. It is not all sophisticated medical equipment that is playing a part; basic advances in engineering are also greatly assisting. Take, for example, a professional tennis player. In the past, most tennis players' shoes were constructed with fabric and a solid rubber sole. These shoes were of poor construction and resulted in hip and foot injuries. Today the technology of shoe construction has radically changed. Now some shoes are injected with silicone and made of more comfortable, ergonomic construction. This has helped not only the elite but also the recreational sportsperson and thus helps in the preservation of the human body. | There are limits to the level of sporting enquiry. | c
id_2187 | Effort and Science to Win In Mexico, the Medicine Direction and Applied Sciences of the National Commission of Deporte analyses all aspects of sports science from the role of the auditory system in sporting achievement to the power of the mind and its role in the ability to win. Everything, it seems, is open to scrutiny. Recently, the focus has been evaluating the visual acuity of cyclists and long distance runners but they also focus on the more traditional areas of sports research, among them psychology, nutrition, anthropology, biochemistry and odontology. From budding child athletes as young as 9 to the more mature-aged sportsperson, the facility at Deporte has attracted some of Mexico's most famous sporting and Olympic hopefuls. The study of elite athletes is now more scientific than ever says doctor Francisco Javier Squares, after each competition, athletes are exposed to vigorous medical examinations and follow-up training in order to help us arrive at a program that is tailor-made. The modern athlete has become big business, no longer is there a one-size-fits-all approach. For example, in the past two people both 1.70 meters tall and weighing 70 kilograms would have been given the same program of athletic conditioning; now this idea is obsolete. It may be that the first individual has 35 kgs of muscle and 15 kgs of fat and the other person, although the same height and weight, may have 30 kgs of muscle and 20 kgs of fat. Through detailed scientific evaluation here at our facility in Deporte, says Squares, ... we are able to construct a very specific training programme for each individual. Whereas many countries in the world focus on the elevation of the glorious champion, the Mexican Olympic team takes a slightly different approach. Psychologically speaking, an athlete must bring to his endeavour a healthy dose of humility. 
As Squares said, When an athlete wins for Mexico, it is always as a result of a combined team effort with many people operating behind the scenes to realise the sporting achievement. When an athlete stands on the dais, it is because of great effort on the part of many. As is often the case in some poorer countries, sportsmen and women are stifled in their development due to budgetary constraints. However, this has not been a factor for consideration with the team in Mexico. The Mexican government has allocated a substantial sum of money for the provision of the latest equipment and laboratories for sports research. In fact, the quality of Mexico's facilities puts them on a par with countries like Italy and Germany in terms of access to resources. One example of sophisticated equipment used at the Mexican facility is the hyperbaric chamber. This apparatus is used to enhance oxygen recovery after a vigorous physical workout. Says Squares, When you breathe the air while inside a hyperbaric chamber the natural state of the oxygen does not change. Green plants produced the oxygen, modern technology just increases the air pressure. This does not change the molecular composition of oxygen. Increased pressure just allows oxygen to get into tissues better. Due to our purchase of the hyperbaric chamber, athletes are able to recover from an intense workout in a much shorter space of time. We typically use the chamber for sessions of 45 to 60 minutes daily or three times per week. When pushed to the limit, the true indicator of fitness is not how hard the heart operates, but how quickly it can recover after an extreme workout. Therefore, another focus area of study for the team in Mexico has been the endurance of the heart. To measure this recovery rate, an electroencephalograph (EEG) is used. The EEG enables doctors to monitor the brainwave activity from sensors placed on the scalp. 
Athletes exert intense effort for a sustained period after which they are given time to rest and recover. During these periods between intense physical exertion and recovery, doctors are able to monitor any weaknesses in the way the heart responds. The EEG has had a big impact upon our ability to measure the muscular endurance of the heart. In 1796, the life expectancy of a human being was between 25 and 36 years; in 1886 that number basically doubled to between 45 and 50. In 1996, the life expectancy of an average Mexican stood at around 75 years. People are living longer and this is due in large part to the advances of modern science. It is not all sophisticated medical equipment that is playing a part; basic advances in engineering are also greatly assisting. Take, for example, a professional tennis player. In the past, most tennis players' shoes were constructed with fabric and a solid rubber sole. These shoes were of poor construction and resulted in hip and foot injuries. Today the technology of shoe construction has radically changed. Now some shoes are injected with silicone and made of more comfortable, ergonomic construction. This has helped not only the elite but also the recreational sportsperson and thus helps in the preservation of the human body. | Wealthy countries enjoy greater athletic success. | n
id_2188 | Effort and Science to Win In Mexico, the Medicine Direction and Applied Sciences of the National Commission of Deporte analyses all aspects of sports science from the role of the auditory system in sporting achievement to the power of the mind and its role in the ability to win. Everything, it seems, is open to scrutiny. Recently, the focus has been evaluating the visual acuity of cyclists and long distance runners but they also focus on the more traditional areas of sports research, among them psychology, nutrition, anthropology, biochemistry and odontology. From budding child athletes as young as 9 to the more mature-aged sportsperson, the facility at Deporte has attracted some of Mexico's most famous sporting and Olympic hopefuls. The study of elite athletes is now more scientific than ever says doctor Francisco Javier Squares, after each competition, athletes are exposed to vigorous medical examinations and follow-up training in order to help us arrive at a program that is tailor-made. The modern athlete has become big business, no longer is there a one-size-fits-all approach. For example, in the past two people both 1.70 meters tall and weighing 70 kilograms would have been given the same program of athletic conditioning; now this idea is obsolete. It may be that the first individual has 35 kgs of muscle and 15 kgs of fat and the other person, although the same height and weight, may have 30 kgs of muscle and 20 kgs of fat. Through detailed scientific evaluation here at our facility in Deporte, says Squares, ... we are able to construct a very specific training programme for each individual. Whereas many countries in the world focus on the elevation of the glorious champion, the Mexican Olympic team takes a slightly different approach. Psychologically speaking, an athlete must bring to his endeavour a healthy dose of humility. 
As Squares said, When an athlete wins for Mexico, it is always as a result of a combined team effort with many people operating behind the scenes to realise the sporting achievement. When an athlete stands on the dais, it is because of great effort on the part of many. As is often the case in some poorer countries, sportsmen and women are stifled in their development due to budgetary constraints. However, this has not been a factor for consideration with the team in Mexico. The Mexican government has allocated a substantial sum of money for the provision of the latest equipment and laboratories for sports research. In fact, the quality of Mexico's facilities puts them on a par with countries like Italy and Germany in terms of access to resources. One example of sophisticated equipment used at the Mexican facility is the hyperbaric chamber. This apparatus is used to enhance oxygen recovery after a vigorous physical workout. Says Squares, When you breathe the air while inside a hyperbaric chamber the natural state of the oxygen does not change. Green plants produced the oxygen, modern technology just increases the air pressure. This does not change the molecular composition of oxygen. Increased pressure just allows oxygen to get into tissues better. Due to our purchase of the hyperbaric chamber, athletes are able to recover from an intense workout in a much shorter space of time. We typically use the chamber for sessions of 45 to 60 minutes daily or three times per week. When pushed to the limit, the true indicator of fitness is not how hard the heart operates, but how quickly it can recover after an extreme workout. Therefore, another focus area of study for the team in Mexico has been the endurance of the heart. To measure this recovery rate, an electroencephalograph (EEG) is used. The EEG enables doctors to monitor the brainwave activity from sensors placed on the scalp. 
Athletes exert intense effort for a sustained period after which they are given time to rest and recover. During these periods between intense physical exertion and recovery, doctors are able to monitor any weaknesses in the way the heart responds. The EEG has had a big impact upon our ability to measure the muscular endurance of the heart. In 1796, the life expectancy of a human being was between 25 and 36 years; in 1886 that number basically doubled to between 45 and 50. In 1996, the life expectancy of an average Mexican stood at around 75 years. People are living longer and this is due in large part to the advances of modern science. It is not all sophisticated medical equipment that is playing a part; basic advances in engineering are also greatly assisting. Take, for example, a professional tennis player. In the past, most tennis players' shoes were constructed with fabric and a solid rubber sole. These shoes were of poor construction and resulted in hip and foot injuries. Today the technology of shoe construction has radically changed. Now some shoes are injected with silicone and made of more comfortable, ergonomic construction. This has helped not only the elite but also the recreational sportsperson and thus helps in the preservation of the human body. | Lack of money is what stops athletic improvement in poor countries. | e
id_2189 | Effort and Science to Win In Mexico, the Medicine Direction and Applied Sciences of the National Commission of Deporte analyses all aspects of sports science from the role of the auditory system in sporting achievement to the power of the mind and its role in the ability to win. Everything, it seems, is open to scrutiny. Recently, the focus has been evaluating the visual acuity of cyclists and long distance runners but they also focus on the more traditional areas of sports research, among them psychology, nutrition, anthropology, biochemistry and odontology. From budding child athletes as young as 9 to the more mature-aged sportsperson, the facility at Deporte has attracted some of Mexico's most famous sporting and Olympic hopefuls. The study of elite athletes is now more scientific than ever says doctor Francisco Javier Squares, after each competition, athletes are exposed to vigorous medical examinations and follow-up training in order to help us arrive at a program that is tailor-made. The modern athlete has become big business, no longer is there a one-size-fits-all approach. For example, in the past two people both 1.70 meters tall and weighing 70 kilograms would have been given the same program of athletic conditioning; now this idea is obsolete. It may be that the first individual has 35 kgs of muscle and 15 kgs of fat and the other person, although the same height and weight, may have 30 kgs of muscle and 20 kgs of fat. Through detailed scientific evaluation here at our facility in Deporte, says Squares, ... we are able to construct a very specific training programme for each individual. Whereas many countries in the world focus on the elevation of the glorious champion, the Mexican Olympic team takes a slightly different approach. Psychologically speaking, an athlete must bring to his endeavour a healthy dose of humility. 
As Squares said, When an athlete wins for Mexico, it is always as a result of a combined team effort with many people operating behind the scenes to realise the sporting achievement. When an athlete stands on the dais, it is because of great effort on the part of many. As is often the case in some poorer countries, sportsmen and women are stifled in their development due to budgetary constraints. However, this has not been a factor for consideration with the team in Mexico. The Mexican government has allocated a substantial sum of money for the provision of the latest equipment and laboratories for sports research. In fact, the quality of Mexico's facilities puts them on a par with countries like Italy and Germany in terms of access to resources. One example of sophisticated equipment used at the Mexican facility is the hyperbaric chamber. This apparatus is used to enhance oxygen recovery after a vigorous physical workout. Says Squares, When you breathe the air while inside a hyperbaric chamber the natural state of the oxygen does not change. Green plants produced the oxygen, modern technology just increases the air pressure. This does not change the molecular composition of oxygen. Increased pressure just allows oxygen to get into tissues better. Due to our purchase of the hyperbaric chamber, athletes are able to recover from an intense workout in a much shorter space of time. We typically use the chamber for sessions of 45 to 60 minutes daily or three times per week. When pushed to the limit, the true indicator of fitness is not how hard the heart operates, but how quickly it can recover after an extreme workout. Therefore, another focus area of study for the team in Mexico has been the endurance of the heart. To measure this recovery rate, an electroencephalograph (EEG) is used. The EEG enables doctors to monitor the brainwave activity from sensors placed on the scalp. 
Athletes exert intense effort for a sustained period after which they are given time to rest and recover. During these periods between intense physical exertion and recovery, doctors are able to monitor any weaknesses in the way the heart responds. The EEG has had a big impact upon our ability to measure the muscular endurance of the heart. In 1796, the life expectancy of a human being was between 25 and 36 years; in 1886 that number basically doubled to between 45 and 50. In 1996, the life expectancy of an average Mexican stood at around 75 years. People are living longer and this is due in large part to the advances of modern science. It is not all sophisticated medical equipment that is playing a part; basic advances in engineering are also greatly assisting. Take, for example, a professional tennis player. In the past, most tennis players' shoes were constructed with fabric and a solid rubber sole. These shoes were of poor construction and resulted in hip and foot injuries. Today the technology of shoe construction has radically changed. Now some shoes are injected with silicone and made of more comfortable, ergonomic construction. This has helped not only the elite but also the recreational sportsperson and thus helps in the preservation of the human body. | Mexico and Germany have similar sporting resources. | e
id_2190 | Effort and Science to Win In Mexico, the Medicine Direction and Applied Sciences of the National Commission of Deporte analyses all aspects of sports science from the role of the auditory system in sporting achievement to the power of the mind and its role in the ability to win. Everything, it seems, is open to scrutiny. Recently, the focus has been evaluating the visual acuity of cyclists and long distance runners but they also focus on the more traditional areas of sports research, among them psychology, nutrition, anthropology, biochemistry and odontology. From budding child athletes as young as 9 to the more mature-aged sportsperson, the facility at Deporte has attracted some of Mexico's most famous sporting and Olympic hopefuls. The study of elite athletes is now more scientific than ever says doctor Francisco Javier Squares, after each competition, athletes are exposed to vigorous medical examinations and follow-up training in order to help us arrive at a program that is tailor-made. The modern athlete has become big business, no longer is there a one-size-fits-all approach. For example, in the past two people both 1.70 meters tall and weighing 70 kilograms would have been given the same program of athletic conditioning; now this idea is obsolete. It may be that the first individual has 35 kgs of muscle and 15 kgs of fat and the other person, although the same height and weight, may have 30 kgs of muscle and 20 kgs of fat. Through detailed scientific evaluation here at our facility in Deporte, says Squares, ... we are able to construct a very specific training programme for each individual. Whereas many countries in the world focus on the elevation of the glorious champion, the Mexican Olympic team takes a slightly different approach. Psychologically speaking, an athlete must bring to his endeavour a healthy dose of humility. 
As Squares said, When an athlete wins for Mexico, it is always as a result of a combined team effort with many people operating behind the scenes to realise the sporting achievement. When an athlete stands on the dais, it is because of great effort on the part of many. As is often the case in some poorer countries, sportsmen and women are stifled in their development due to budgetary constraints. However, this has not been a factor for consideration with the team in Mexico. The Mexican government has allocated a substantial sum of money for the provision of the latest equipment and laboratories for sports research. In fact, the quality of Mexico's facilities puts them on a par with countries like Italy and Germany in terms of access to resources. One example of sophisticated equipment used at the Mexican facility is the hyperbaric chamber. This apparatus is used to enhance oxygen recovery after a vigorous physical workout. Says Squares, When you breathe the air while inside a hyperbaric chamber the natural state of the oxygen does not change. Green plants produced the oxygen, modern technology just increases the air pressure. This does not change the molecular composition of oxygen. Increased pressure just allows oxygen to get into tissues better. Due to our purchase of the hyperbaric chamber, athletes are able to recover from an intense workout in a much shorter space of time. We typically use the chamber for sessions of 45 to 60 minutes daily or three times per week. When pushed to the limit, the true indicator of fitness is not how hard the heart operates, but how quickly it can recover after an extreme workout. Therefore, another focus area of study for the team in Mexico has been the endurance of the heart. To measure this recovery rate, an electroencephalograph (EEG) is used. The EEG enables doctors to monitor the brainwave activity from sensors placed on the scalp. 
Athletes exert intense effort for a sustained period after which they are given time to rest and recover. During these periods between intense physical exertion and recovery, doctors are able to monitor any weaknesses in the way the heart responds. The EEG has had a big impact upon our ability to measure the muscular endurance of the heart. In 1796, the life expectancy of a human being was between 25 and 36 years; in 1886 that number basically doubled to between 45 and 50. In 1996, the life expectancy of an average Mexican stood at around 75 years. People are living longer and this is due in large part to the advances of modern science. It is not all sophisticated medical equipment that is playing a part; basic advances in engineering are also greatly assisting. Take, for example, a professional tennis player. In the past, most tennis players' shoes were constructed with fabric and a solid rubber sole. These shoes were of poor construction and resulted in hip and foot injuries. Today the technology of shoe construction has radically changed. Now some shoes are injected with silicone and made of more comfortable, ergonomic construction. This has helped not only the elite but also the recreational sportsperson and thus helps in the preservation of the human body. | Specific athletic programs differ mostly between men and women. | n
id_2191 | El Nino The cold Humboldt Current of the Pacific Ocean flows toward the equator along the coasts of Ecuador and Peru in South America. When the current approaches the equator, the westward-flowing trade winds cause nutrient-rich cold water along the coast to rise from deeper depths to more shallow ones. This upwelling of water has economic repercussions. Fishing, especially for anchovies, is a major local industry. Every year during the months of December and January, a weak, warm countercurrent replaces the normally cold coastal waters. Without the upwelling of nutrients from below to feed the fish, fishing comes to a standstill. Fishers in this region have known the phenomenon for hundreds of years. In fact, this is the time of year they traditionally set aside to tend to their equipment and await the return of cold water. The residents of the region have given this phenomenon the name of El Nino, which is Spanish for "the child", because it occurs at about the time of the celebration of birth of the Christ child. While the warm-water countercurrent usually lasts for two months or less, there are occasions when the disruption to the normal flow lasts for many months. In these situations, water temperatures are raised not just along the coast, but for thousands of kilometers offshore. Over the last few decades, the term El Nino has come to be used to describe these exceptionally strong episodes and not the annual event. During the past 60 years, at least ten El Ninos have been observed. Not only do El Ninos affect the temperature of the equatorial Pacific, but the strongest of them impact global weather. The processes that interact to produce an El Nino involve conditions all across the Pacific, not just in the waters off South America. Over 60 years ago, Sir Gilbert Walker, a British scientist, discovered a connection between surface pressure readings at weather stations on the eastern and western sides of the Pacific. 
He noted that a rise in atmospheric pressure in the eastern Pacific is usually accompanied by a fall in pressure in the western Pacific and vice versa. He called this seesaw pattern the Southern Oscillation. It was later realized that there is a close link between El Nino and the Southern Oscillation. In fact, the link between the two is so great that they are often referred to jointly as ENSO (El Nino-Southern Oscillation). During a typical year, the eastern Pacific has a higher pressure than the western Pacific does. This east-to-west pressure gradient enhances the trade winds over the equatorial waters. This results in a warm surface current that moves east to west at the equator. The western Pacific develops a thick, warm layer of water while the eastern Pacific has the cold Humboldt Current enhanced by upwelling. However, in other years the Southern Oscillation, for unknown reasons, swings in the opposite direction, dramatically changing the usual conditions described above, with pressure increasing in the western Pacific and decreasing in the eastern Pacific. This change in the pressure gradient causes the trade winds to weaken or, in some cases, to reverse. This then causes the warm water in the western Pacific to flow eastward, increasing sea-surface temperatures in the central and eastern Pacific. The eastward shift signals the beginning of an El Nino. Scientists try to document as many past El Nino events as possible by piecing together bits of historical evidence, such as sea-surface temperature records, daily observations of atmospheric pressure and rainfall, fisheries' records from South America, and the writings of Spanish colonists dating back to the fifteenth century. From such historical evidence we know that El Ninos have occurred as far back as records go. It would seem that they are becoming more frequent. Records indicate that during the sixteenth century, an El Nino occurred on average every six years. 
Evidence gathered over the past few decades indicates that El Ninos are now occurring on average a little over every two years. Even more alarming is the fact that they appear to be getting stronger. The 1997-1998 El Nino brought copious and damaging rainfall to the southern United States, from California to Florida. Snowstorms in the northeast portion of the United States were more frequent and intense than in most years. | Surface temperatures increase in the central and eastern Pacific. | e |
id_2192 | El Nino The cold Humboldt Current of the Pacific Ocean flows toward the equator along the coasts of Ecuador and Peru in South America. When the current approaches the equator, the westward-flowing trade winds cause nutrient-rich cold water along the coast to rise from deeper depths to more shallow ones. This upwelling of water has economic repercussions. Fishing, especially for anchovies, is a major local industry. Every year during the months of December and January, a weak, warm countercurrent replaces the normally cold coastal waters. Without the upwelling of nutrients from below to feed the fish, fishing comes to a standstill. Fishers in this region have known the phenomenon for hundreds of years. In fact, this is the time of year they traditionally set aside to tend to their equipment and await the return of cold water. The residents of the region have given this phenomenon the name of El Nino, which is Spanish for "the child", because it occurs at about the time of the celebration of birth of the Christ child. While the warm-water countercurrent usually lasts for two months or less, there are occasions when the disruption to the normal flow lasts for many months. In these situations, water temperatures are raised not just along the coast, but for thousands of kilometers offshore. Over the last few decades, the term El Nino has come to be used to describe these exceptionally strong episodes and not the annual event. During the past 60 years, at least ten El Ninos have been observed. Not only do El Ninos affect the temperature of the equatorial Pacific, but the strongest of them impact global weather. The processes that interact to produce an El Nino involve conditions all across the Pacific, not just in the waters off South America. Over 60 years ago, Sir Gilbert Walker, a British scientist, discovered a connection between surface pressure readings at weather stations on the eastern and western sides of the Pacific. 
He noted that a rise in atmospheric pressure in the eastern Pacific is usually accompanied by a fall in pressure in the western Pacific and vice versa. He called this seesaw pattern the Southern Oscillation. It was later realized that there is a close link between El Nino and the Southern Oscillation. In fact, the link between the two is so great that they are often referred to jointly as ENSO (El Nino-Southern Oscillation). During a typical year, the eastern Pacific has a higher pressure than the western Pacific does. This east-to-west pressure gradient enhances the trade winds over the equatorial waters. This results in a warm surface current that moves east to west at the equator. The western Pacific develops a thick, warm layer of water while the eastern Pacific has the cold Humboldt Current enhanced by upwelling. However, in other years the Southern Oscillation, for unknown reasons, swings in the opposite direction, dramatically changing the usual conditions described above, with pressure increasing in the western Pacific and decreasing in the eastern Pacific. This change in the pressure gradient causes the trade winds to weaken or, in some cases, to reverse. This then causes the warm water in the western Pacific to flow eastward, increasing sea-surface temperatures in the central and eastern Pacific. The eastward shift signals the beginning of an El Nino. Scientists try to document as many past El Nino events as possible by piecing together bits of historical evidence, such as sea-surface temperature records, daily observations of atmospheric pressure and rainfall, fisheries' records from South America, and the writings of Spanish colonists dating back to the fifteenth century. From such historical evidence we know that El Ninos have occurred as far back as records go. It would seem that they are becoming more frequent. Records indicate that during the sixteenth century, an El Nino occurred on average every six years. 
Evidence gathered over the past few decades indicates that El Ninos are now occurring on average a little over every two years. Even more alarming is the fact that they appear to be getting stronger. The 1997-1998 El Nino brought copious and damaging rainfall to the southern United States, from California to Florida. Snowstorms in the northeast portion of the United States were more frequent and intense than in most years. | Ocean currents speed up as they move eastward. | n |
id_2193 | El Nino The cold Humboldt Current of the Pacific Ocean flows toward the equator along the coasts of Ecuador and Peru in South America. When the current approaches the equator, the westward-flowing trade winds cause nutrient-rich cold water along the coast to rise from deeper depths to more shallow ones. This upwelling of water has economic repercussions. Fishing, especially for anchovies, is a major local industry. Every year during the months of December and January, a weak, warm countercurrent replaces the normally cold coastal waters. Without the upwelling of nutrients from below to feed the fish, fishing comes to a standstill. Fishers in this region have known the phenomenon for hundreds of years. In fact, this is the time of year they traditionally set aside to tend to their equipment and await the return of cold water. The residents of the region have given this phenomenon the name of El Nino, which is Spanish for "the child", because it occurs at about the time of the celebration of birth of the Christ child. While the warm-water countercurrent usually lasts for two months or less, there are occasions when the disruption to the normal flow lasts for many months. In these situations, water temperatures are raised not just along the coast, but for thousands of kilometers offshore. Over the last few decades, the term El Nino has come to be used to describe these exceptionally strong episodes and not the annual event. During the past 60 years, at least ten El Ninos have been observed. Not only do El Ninos affect the temperature of the equatorial Pacific, but the strongest of them impact global weather. The processes that interact to produce an El Nino involve conditions all across the Pacific, not just in the waters off South America. Over 60 years ago, Sir Gilbert Walker, a British scientist, discovered a connection between surface pressure readings at weather stations on the eastern and western sides of the Pacific. 
He noted that a rise in atmospheric pressure in the eastern Pacific is usually accompanied by a fall in pressure in the western Pacific and vice versa. He called this seesaw pattern the Southern Oscillation. It was later realized that there is a close link between El Nino and the Southern Oscillation. In fact, the link between the two is so great that they are often referred to jointly as ENSO (El Nino-Southern Oscillation). During a typical year, the eastern Pacific has a higher pressure than the western Pacific does. This east-to-west pressure gradient enhances the trade winds over the equatorial waters. This results in a warm surface current that moves east to west at the equator. The western Pacific develops a thick, warm layer of water while the eastern Pacific has the cold Humboldt Current enhanced by upwelling. However, in other years the Southern Oscillation, for unknown reasons, swings in the opposite direction, dramatically changing the usual conditions described above, with pressure increasing in the western Pacific and decreasing in the eastern Pacific. This change in the pressure gradient causes the trade winds to weaken or, in some cases, to reverse. This then causes the warm water in the western Pacific to flow eastward, increasing sea-surface temperatures in the central and eastern Pacific. The eastward shift signals the beginning of an El Nino. Scientists try to document as many past El Nino events as possible by piecing together bits of historical evidence, such as sea-surface temperature records, daily observations of atmospheric pressure and rainfall, fisheries' records from South America, and the writings of Spanish colonists dating back to the fifteenth century. From such historical evidence we know that El Ninos have occurred as far back as records go. It would seem that they are becoming more frequent. Records indicate that during the sixteenth century, an El Nino occurred on average every six years. 
Evidence gathered over the past few decades indicates that El Ninos are now occurring on average a little over every two years. Even more alarming is the fact that they appear to be getting stronger. The 1997-1998 El Nino brought copious and damaging rainfall to the southern United States, from California to Florida. Snowstorms in the northeast portion of the United States were more frequent and intense than in most years. | Pressure increases in the western Pacific and decreases in the eastern Pacific. | e |
id_2194 | El Nino The cold Humboldt Current of the Pacific Ocean flows toward the equator along the coasts of Ecuador and Peru in South America. When the current approaches the equator, the westward-flowing trade winds cause nutrient-rich cold water along the coast to rise from deeper depths to more shallow ones. This upwelling of water has economic repercussions. Fishing, especially for anchovies, is a major local industry. Every year during the months of December and January, a weak, warm countercurrent replaces the normally cold coastal waters. Without the upwelling of nutrients from below to feed the fish, fishing comes to a standstill. Fishers in this region have known the phenomenon for hundreds of years. In fact, this is the time of year they traditionally set aside to tend to their equipment and await the return of cold water. The residents of the region have given this phenomenon the name of El Nino, which is Spanish for "the child", because it occurs at about the time of the celebration of birth of the Christ child. While the warm-water countercurrent usually lasts for two months or less, there are occasions when the disruption to the normal flow lasts for many months. In these situations, water temperatures are raised not just along the coast, but for thousands of kilometers offshore. Over the last few decades, the term El Nino has come to be used to describe these exceptionally strong episodes and not the annual event. During the past 60 years, at least ten El Ninos have been observed. Not only do El Ninos affect the temperature of the equatorial Pacific, but the strongest of them impact global weather. The processes that interact to produce an El Nino involve conditions all across the Pacific, not just in the waters off South America. Over 60 years ago, Sir Gilbert Walker, a British scientist, discovered a connection between surface pressure readings at weather stations on the eastern and western sides of the Pacific. 
He noted that a rise in atmospheric pressure in the eastern Pacific is usually accompanied by a fall in pressure in the western Pacific and vice versa. He called this seesaw pattern the Southern Oscillation. It was later realized that there is a close link between El Nino and the Southern Oscillation. In fact, the link between the two is so great that they are often referred to jointly as ENSO (El Nino-Southern Oscillation). During a typical year, the eastern Pacific has a higher pressure than the western Pacific does. This east-to-west pressure gradient enhances the trade winds over the equatorial waters. This results in a warm surface current that moves east to west at the equator. The western Pacific develops a thick, warm layer of water while the eastern Pacific has the cold Humboldt Current enhanced by upwelling. However, in other years the Southern Oscillation, for unknown reasons, swings in the opposite direction, dramatically changing the usual conditions described above, with pressure increasing in the western Pacific and decreasing in the eastern Pacific. This change in the pressure gradient causes the trade winds to weaken or, in some cases, to reverse. This then causes the warm water in the western Pacific to flow eastward, increasing sea-surface temperatures in the central and eastern Pacific. The eastward shift signals the beginning of an El Nino. Scientists try to document as many past El Nino events as possible by piecing together bits of historical evidence, such as sea-surface temperature records, daily observations of atmospheric pressure and rainfall, fisheries' records from South America, and the writings of Spanish colonists dating back to the fifteenth century. From such historical evidence we know that El Ninos have occurred as far back as records go. It would seem that they are becoming more frequent. Records indicate that during the sixteenth century, an El Nino occurred on average every six years. 
Evidence gathered over the past few decades indicates that El Ninos are now occurring on average a little over every two years. Even more alarming is the fact that they appear to be getting stronger. The 1997-1998 El Nino brought copious and damaging rainfall to the southern United States, from California to Florida. Snowstorms in the northeast portion of the United States were more frequent and intense than in most years. | The trade winds decrease in intensity or reverse in the direction. | e |
id_2195 | El Nino, the cyclic warming of the Pacific Ocean, is largely responsible for the recent worldwide period of higher than average temperatures. February was the sixth warmest since records began in 1880, but January's record high means that the two-month period was the warmest worldwide. The averages were obtained by combining land and ocean surface temperatures. The only exceptions were areas of the Middle East and central areas of the United States, which did not experience record temperatures. Some of the largest temperature increases occurred in high latitudes around the Arctic Circle, where wildlife has responded to the early spring-like weather. Should March not follow the trend and a wintry spell return, some of the species that have woken early from hibernation or started breeding prematurely may experience problems. | China did not experience record temperatures. | c
id_2196 | El Nino, the cyclic warming of the Pacific Ocean, is largely responsible for the recent worldwide period of higher than average temperatures. February was the sixth warmest since records began in 1880, but January's record high means that the two-month period was the warmest worldwide. The averages were obtained by combining land and ocean surface temperatures. The only exceptions were areas of the Middle East and central areas of the United States, which did not experience record temperatures. Some of the largest temperature increases occurred in high latitudes around the Arctic Circle, where wildlife has responded to the early spring-like weather. Should March not follow the trend and a wintry spell return, some of the species that have woken early from hibernation or started breeding prematurely may experience problems. | The passage can be correctly summarized as describing the world's two warmest winter months since records began. | c
id_2197 | El Nino, the cyclic warming of the Pacific Ocean, is largely responsible for the recent worldwide period of higher than average temperatures. February was the sixth warmest since records began in 1880, but January's record high means that the two-month period was the warmest worldwide. The averages were obtained by combining land and ocean surface temperatures. The only exceptions were areas of the Middle East and central areas of the United States, which did not experience record temperatures. Some of the largest temperature increases occurred in high latitudes around the Arctic Circle, where wildlife has responded to the early spring-like weather. Should March not follow the trend and a wintry spell return, some of the species that have woken early from hibernation or started breeding prematurely may experience problems. | If the average had been based only on land temperatures rather than land and ocean temperatures, the result would have been cooler. | n
id_2198 | Elements of Life The creation of life requires a set of chemical elements for making the components of cells. Life on Earth uses about 25 of the 92 naturally occurring chemical elements, although just 4 of these elements (oxygen, carbon, hydrogen, and nitrogen) make up about 96 percent of the mass of living organisms. Thus, a first requirement for life might be the presence of most or all of the elements used by life. Interestingly, this requirement can probably be met by almost any world. Scientists have determined that all chemical elements in the universe besides hydrogen and helium (and a trace amount of lithium) were produced by stars. These are known as heavy elements because they are heavier than hydrogen and helium. Although all of these heavy elements are quite rare compared to hydrogen and helium, they are found just about everywhere. Heavy elements are continually being manufactured by stars and released into space by stellar deaths, so their amount compared to hydrogen and helium gradually rises with time. Heavy elements make up about 2 percent of the chemical content (by mass) of our solar system; the other 98 percent is hydrogen and helium. In some very old star systems, which formed before many heavy elements were produced, the heavy-element share may be less than 0.1 percent. Nevertheless, every star system studied has at least some amount of all the elements used by life. Moreover, when planetesimals (small, solid objects formed in the early solar system that may accumulate to become planets) condense within a forming star system, they are inevitably made from heavy elements because the more common hydrogen and helium remain gaseous. Thus, planetesimals everywhere should contain the elements needed for life, which means that objects built from planetesimals (planets, moons, asteroids, and comets) also contain these elements.
The nature of solar-system formation explains why Earth contains all the elements needed for life, and it is why we expect these elements to be present on other worlds throughout our solar system, galaxy, and universe. Note that this argument does not change, even if we allow for life very different from life on Earth. Life on Earth is carbon based, and most biologists believe that life elsewhere is likely to be carbon based as well. However, we cannot absolutely rule out the possibility of life with another chemical basis, such as silicon or nitrogen. The set of elements (or their relative proportions) used by life based on some other element might be somewhat different from that used by carbon-based life on Earth. But the elements are still products of stars and would still be present in planetesimals everywhere. No matter what kinds of life we are looking for, we are likely to find the necessary elements on almost every planet, moon, asteroid, and comet in the universe. A somewhat stricter requirement is the presence of these elements in molecules that can be used as ready-made building blocks for life, just as early Earth probably had an organic soup of amino acids and other complex molecules. Earth's organic molecules likely came from some combination of three sources: chemical reactions in the atmosphere, chemical reactions near deep-sea vents in the oceans, and molecules carried to Earth by asteroids and comets. The first two sources can occur only on worlds with atmospheres or oceans, respectively. But the third source should have brought similar molecules to nearly all worlds in our solar system. Studies of meteorites and comets suggest that organic molecules are widespread among both asteroids and comets. Because each body in the solar system was repeatedly struck by asteroids and comets during the period known as the heavy bombardment (about 4 billion years ago), each body should have received at least some organic molecules. 
However, these molecules tend to be destroyed by solar radiation on surfaces unprotected by atmospheres. Moreover, while these molecules might stay intact beneath the surface (as they evidently do on asteroids and comets), they probably cannot react with each other unless some kind of liquid or gas is available to move them about. Thus, if we limit our search to worlds on which organic molecules are likely to be involved in chemical reactions, we can probably rule out any world that lacks both an atmosphere and a surface or subsurface liquid medium, such as water. | Some of them probably formed in the atmosphere and oceans. | e |
id_2199 | Elements of Life The creation of life requires a set of chemical elements for making the components of cells. Life on Earth uses about 25 of the 92 naturally occurring chemical elements, although just 4 of these elements (oxygen, carbon, hydrogen, and nitrogen) make up about 96 percent of the mass of living organisms. Thus, a first requirement for life might be the presence of most or all of the elements used by life. Interestingly, this requirement can probably be met by almost any world. Scientists have determined that all chemical elements in the universe besides hydrogen and helium (and a trace amount of lithium) were produced by stars. These are known as heavy elements because they are heavier than hydrogen and helium. Although all of these heavy elements are quite rare compared to hydrogen and helium, they are found just about everywhere. Heavy elements are continually being manufactured by stars and released into space by stellar deaths, so their amount compared to hydrogen and helium gradually rises with time. Heavy elements make up about 2 percent of the chemical content (by mass) of our solar system; the other 98 percent is hydrogen and helium. In some very old star systems, which formed before many heavy elements were produced, the heavy-element share may be less than 0.1 percent. Nevertheless, every star system studied has at least some amount of all the elements used by life. Moreover, when planetesimals (small, solid objects formed in the early solar system that may accumulate to become planets) condense within a forming star system, they are inevitably made from heavy elements because the more common hydrogen and helium remain gaseous. Thus, planetesimals everywhere should contain the elements needed for life, which means that objects built from planetesimals (planets, moons, asteroids, and comets) also contain these elements.
The nature of solar-system formation explains why Earth contains all the elements needed for life, and it is why we expect these elements to be present on other worlds throughout our solar system, galaxy, and universe. Note that this argument does not change, even if we allow for life very different from life on Earth. Life on Earth is carbon based, and most biologists believe that life elsewhere is likely to be carbon based as well. However, we cannot absolutely rule out the possibility of life with another chemical basis, such as silicon or nitrogen. The set of elements (or their relative proportions) used by life based on some other element might be somewhat different from that used by carbon-based life on Earth. But the elements are still products of stars and would still be present in planetesimals everywhere. No matter what kinds of life we are looking for, we are likely to find the necessary elements on almost every planet, moon, asteroid, and comet in the universe. A somewhat stricter requirement is the presence of these elements in molecules that can be used as ready-made building blocks for life, just as early Earth probably had an organic soup of amino acids and other complex molecules. Earth's organic molecules likely came from some combination of three sources: chemical reactions in the atmosphere, chemical reactions near deep-sea vents in the oceans, and molecules carried to Earth by asteroids and comets. The first two sources can occur only on worlds with atmospheres or oceans, respectively. But the third source should have brought similar molecules to nearly all worlds in our solar system. Studies of meteorites and comets suggest that organic molecules are widespread among both asteroids and comets. Because each body in the solar system was repeatedly struck by asteroids and comets during the period known as the heavy bombardment (about 4 billion years ago), each body should have received at least some organic molecules. 
However, these molecules tend to be destroyed by solar radiation on surfaces unprotected by atmospheres. Moreover, while these molecules might stay intact beneath the surface (as they evidently do on asteroids and comets), they probably cannot react with each other unless some kind of liquid or gas is available to move them about. Thus, if we limit our search to worlds on which organic molecules are likely to be involved in chemical reactions, we can probably rule out any world that lacks both an atmosphere and a surface or subsurface liquid medium, such as water. | Some of them were probably brought to Earth by asteroids or comets. | e |