Columns: uid (string, 4-7 chars), premise (string, 19 to 9.21k chars), hypothesis (string, 13-488 chars), label (3 classes).
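The dump below flattens each table row into four consecutive cells (uid, premise, hypothesis, label). As a minimal sketch of how such rows could be represented and regrouped, assuming the common NLI reading of the label codes (e = entailment, n = neutral, c = contradiction, which the dump itself does not spell out); the Row class and parse_rows helper are illustrative names, not part of the dataset:

```python
from dataclasses import dataclass

# Assumed expansion of the three label classes; the dump only shows "e", "n", "c".
LABELS = {"e": "entailment", "n": "neutral", "c": "contradiction"}

@dataclass
class Row:
    uid: str         # e.g. "id_400" (4-7 characters per the column stats)
    premise: str     # reading passage, up to ~9.21k characters
    hypothesis: str  # short claim about the premise (13-488 characters)
    label: str       # one of "e", "n", "c"

    def label_name(self) -> str:
        return LABELS[self.label]

def parse_rows(cells: list[str]) -> list[Row]:
    """Regroup a flat list of cell values into Rows, four cells per row.

    Any trailing incomplete row (fewer than four remaining cells) is dropped.
    """
    return [Row(*cells[i:i + 4]) for i in range(0, len(cells) - 3, 4)]

# Usage sketch with an abbreviated first row:
rows = parse_rows(["id_400", "AIMS AND OBJECTIVES OF HOSPITAL-WATCH ...",
                   "It's advisable for women to keep an attack alarm in their handbags.",
                   "n"])
print(rows[0].uid, rows[0].label_name())  # -> id_400 neutral
```

Dropping a trailing incomplete row matters for this particular dump, whose final row (id_411) is cut off before its hypothesis and label cells.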
id_400
AIMS AND OBJECTIVES OF HOSPITAL-WATCH To create security awareness To remove or reduce the risk of crime To prevent criminal injury or distress to staff and patients To protect property against theft or criminal damage To maintain the working relationship between the hospital and the police. SECURITY IN THE HOSPITAL ASK strangers to identify themselves ALL visitors to wards or departments should identify themselves and state the nature of their business DON'T allow the removal of ANY equipment without proper authorisation CHECK that there is no-one left in the office or department ENSURE that portable items are locked away when not in use. Make sure they cannot be seen from outside ENSURE that all equipment is security-marked by the Estates Department REPORT vandals immediately DON'T remove NHS property from the hospital - this is theft DO report anything suspicious. REPORTING SECURITY INCIDENTS All incidents/attempted incidents must be reported When an incident has occurred a Trust Incident Report form must be completed If you or a colleague are involved in a serious physical attack/threat and require immediate assistance, use the panic alarm where fitted or ring Switchboard on 2222 In the case of theft or other serious crime it is the responsibility of the individual involved to report to the Police and then complete an Incident Report form Minor incidents should be reported on an Incident form In either case the Site Manager/Line Manager must be informed. PROTECT YOUR PROPERTY DON'T leave your handbag where it invites theft. Lock it away DON'T leave your purse in a shopping basket, in an office or empty room. Lock it away DON'T leave money or other valuables in your coat or jacket pocket. If you take your jacket off, take your wallet with you DO use clothes lockers in cloakrooms, where they are provided. Otherwise use a lockable drawer or cupboard. PROTECT YOURSELF DO avoid ill-lit streets and car parks, wasteland and unoccupied compartments on trains DO consider keeping a personal attack alarm in your hand or pocket DON'T leave house or car keys in your handbag - put them in your pocket DO check your car - an unnecessary breakdown could put you at risk. YOUR CAR DO make sure your car is locked, windows shut and valuables kept out of sight DO display your permit/parking ticket in the windscreen Watch out for prowlers Inform the police immediately Keep all ground floor windows closed or locked
It's advisable for women to keep an attack alarm in their handbags.
n
id_401
AIR TRAFFIC CONTROL IN THE USA. An accident that occurred in the skies over the Grand Canyon in 1956 resulted in the establishment of the Federal Aviation Administration (FAA) to regulate and oversee the operation of aircraft in the skies over the United States, which were becoming quite congested. The resulting structure of air traffic control has greatly increased the safety of flight in the United States, and similar air traffic control procedures are also in place over much of the rest of the world. Rudimentary air traffic control (ATC) existed well before the Grand Canyon disaster. As early as the 1920s, the earliest air traffic controllers manually guided aircraft in the vicinity of the airports, using lights and flags, while beacons and flashing lights were placed along cross-country routes to establish the earliest airways. However, this purely visual system was useless in bad weather, and, by the 1930s, radio communication was coming into use for ATC. The first region to have something approximating today's ATC was New York City, with other major metropolitan areas following soon after. In the 1940s, ATC centres could and did take advantage of the newly developed radar and improved radio communication brought about by the Second World War, but the system remained rudimentary. It was only after the creation of the FAA that full-scale regulation of America's airspace took place, and this was fortuitous, for the advent of the jet engine suddenly resulted in a large number of very fast planes, reducing pilots' margin of error and practically demanding some set of rules to keep everyone well separated and operating safely in the air. Many people think that ATC consists of a row of controllers sitting in front of their radar screens at the nation's airports, telling arriving and departing traffic what to do. This is a very incomplete part of the picture. The FAA realised that the airspace over the United States would at any time have many different kinds of planes, flying for many different purposes, in a variety of weather conditions, and the same kind of structure was needed to accommodate all of them. To meet this challenge, the following elements were put into effect. First, ATC extends over virtually the entire United States. In general, from 365m above the ground and higher, the entire country is blanketed by controlled airspace. In certain areas, mainly near airports, controlled airspace extends down to 215m above the ground, and, in the immediate vicinity of an airport, all the way down to the surface. Controlled airspace is that airspace in which FAA regulations apply. Elsewhere, in uncontrolled airspace, pilots are bound by fewer regulations. In this way, the recreational pilot who simply wishes to go flying for a while without all the restrictions imposed by the FAA has only to stay in uncontrolled airspace, below 365m, while the pilot who does want the protection afforded by ATC can easily enter the controlled airspace. The FAA then recognised two types of operating environments. In good meteorological conditions, flying would be permitted under Visual Flight Rules (VFR), which suggests a strong reliance on visual cues to maintain an acceptable level of safety. Poor visibility necessitated a set of Instrumental Flight Rules (IFR), under which the pilot relied on altitude and navigational information provided by the plane's instrument panel to fly safely.
On a clear day, a pilot in controlled airspace can choose a VFR or IFR flight plan, and the FAA regulations were devised in a way which accommodates both VFR and IFR operations in the same airspace. However, a pilot can only choose to fly IFR if they possess an instrument rating which is above and beyond the basic pilot's license that must also be held. Controlled airspace is divided into several different types, designated by letters of the alphabet. Uncontrolled airspace is designated Class F, while controlled airspace below 5,490m above sea level and not in the vicinity of an airport is Class E. All airspace above 5,490m is designated Class A. The reason for the division of Class E and Class A airspace stems from the type of planes operating in them. Generally, Class E airspace is where one finds general aviation aircraft (few of which can climb above 5,490m anyway), and commercial turboprop aircraft. Above 5,490m is the realm of the heavy jets, since jet engines operate more efficiently at higher altitudes. The difference between Class E and A airspace is that in Class A, all operations are IFR, and pilots must be instrument-rated, that is, skilled and licensed in aircraft instrumentation. This is because ATC control of the entire space is essential. Three other types of airspace, Classes D, C and B, govern the vicinity of airports. These correspond roughly to small municipal, medium-sized metropolitan and major metropolitan airports respectively, and encompass an increasingly rigorous set of regulations. For example, all a VFR pilot has to do to enter Class C airspace is establish two-way radio contact with ATC. No explicit permission from ATC to enter is needed, although the pilot must continue to obey all regulations governing VFR flight. To enter Class B airspace, such as on approach to a major metropolitan airport, an explicit ATC clearance is required. The private pilot who cruises without permission into this airspace risks losing their license.
All aircraft in Class E airspace must use IFR.
c
id_402
AIR TRAFFIC CONTROL IN THE USA. An accident that occurred in the skies over the Grand Canyon in 1956 resulted in the establishment of the Federal Aviation Administration (FAA) to regulate and oversee the operation of aircraft in the skies over the United States, which were becoming quite congested. The resulting structure of air traffic control has greatly increased the safety of flight in the United States, and similar air traffic control procedures are also in place over much of the rest of the world. Rudimentary air traffic control (ATC) existed well before the Grand Canyon disaster. As early as the 1920s, the earliest air traffic controllers manually guided aircraft in the vicinity of the airports, using lights and flags, while beacons and flashing lights were placed along cross-country routes to establish the earliest airways. However, this purely visual system was useless in bad weather, and, by the 1930s, radio communication was coming into use for ATC. The first region to have something approximating today's ATC was New York City, with other major metropolitan areas following soon after. In the 1940s, ATC centres could and did take advantage of the newly developed radar and improved radio communication brought about by the Second World War, but the system remained rudimentary. It was only after the creation of the FAA that full-scale regulation of America's airspace took place, and this was fortuitous, for the advent of the jet engine suddenly resulted in a large number of very fast planes, reducing pilots' margin of error and practically demanding some set of rules to keep everyone well separated and operating safely in the air. Many people think that ATC consists of a row of controllers sitting in front of their radar screens at the nation's airports, telling arriving and departing traffic what to do. This is a very incomplete part of the picture. The FAA realised that the airspace over the United States would at any time have many different kinds of planes, flying for many different purposes, in a variety of weather conditions, and the same kind of structure was needed to accommodate all of them. To meet this challenge, the following elements were put into effect. First, ATC extends over virtually the entire United States. In general, from 365m above the ground and higher, the entire country is blanketed by controlled airspace. In certain areas, mainly near airports, controlled airspace extends down to 215m above the ground, and, in the immediate vicinity of an airport, all the way down to the surface. Controlled airspace is that airspace in which FAA regulations apply. Elsewhere, in uncontrolled airspace, pilots are bound by fewer regulations. In this way, the recreational pilot who simply wishes to go flying for a while without all the restrictions imposed by the FAA has only to stay in uncontrolled airspace, below 365m, while the pilot who does want the protection afforded by ATC can easily enter the controlled airspace. The FAA then recognised two types of operating environments. In good meteorological conditions, flying would be permitted under Visual Flight Rules (VFR), which suggests a strong reliance on visual cues to maintain an acceptable level of safety. Poor visibility necessitated a set of Instrumental Flight Rules (IFR), under which the pilot relied on altitude and navigational information provided by the plane's instrument panel to fly safely.
On a clear day, a pilot in controlled airspace can choose a VFR or IFR flight plan, and the FAA regulations were devised in a way which accommodates both VFR and IFR operations in the same airspace. However, a pilot can only choose to fly IFR if they possess an instrument rating which is above and beyond the basic pilot's license that must also be held. Controlled airspace is divided into several different types, designated by letters of the alphabet. Uncontrolled airspace is designated Class F, while controlled airspace below 5,490m above sea level and not in the vicinity of an airport is Class E. All airspace above 5,490m is designated Class A. The reason for the division of Class E and Class A airspace stems from the type of planes operating in them. Generally, Class E airspace is where one finds general aviation aircraft (few of which can climb above 5,490m anyway), and commercial turboprop aircraft. Above 5,490m is the realm of the heavy jets, since jet engines operate more efficiently at higher altitudes. The difference between Class E and A airspace is that in Class A, all operations are IFR, and pilots must be instrument-rated, that is, skilled and licensed in aircraft instrumentation. This is because ATC control of the entire space is essential. Three other types of airspace, Classes D, C and B, govern the vicinity of airports. These correspond roughly to small municipal, medium-sized metropolitan and major metropolitan airports respectively, and encompass an increasingly rigorous set of regulations. For example, all a VFR pilot has to do to enter Class C airspace is establish two-way radio contact with ATC. No explicit permission from ATC to enter is needed, although the pilot must continue to obey all regulations governing VFR flight. To enter Class B airspace, such as on approach to a major metropolitan airport, an explicit ATC clearance is required. The private pilot who cruises without permission into this airspace risks losing their license.
Class F airspace is airspace which is below 365m and not near airports.
e
id_403
AIR TRAFFIC CONTROL IN THE USA. An accident that occurred in the skies over the Grand Canyon in 1956 resulted in the establishment of the Federal Aviation Administration (FAA) to regulate and oversee the operation of aircraft in the skies over the United States, which were becoming quite congested. The resulting structure of air traffic control has greatly increased the safety of flight in the United States, and similar air traffic control procedures are also in place over much of the rest of the world. Rudimentary air traffic control (ATC) existed well before the Grand Canyon disaster. As early as the 1920s, the earliest air traffic controllers manually guided aircraft in the vicinity of the airports, using lights and flags, while beacons and flashing lights were placed along cross-country routes to establish the earliest airways. However, this purely visual system was useless in bad weather, and, by the 1930s, radio communication was coming into use for ATC. The first region to have something approximating today's ATC was New York City, with other major metropolitan areas following soon after. In the 1940s, ATC centres could and did take advantage of the newly developed radar and improved radio communication brought about by the Second World War, but the system remained rudimentary. It was only after the creation of the FAA that full-scale regulation of America's airspace took place, and this was fortuitous, for the advent of the jet engine suddenly resulted in a large number of very fast planes, reducing pilots' margin of error and practically demanding some set of rules to keep everyone well separated and operating safely in the air. Many people think that ATC consists of a row of controllers sitting in front of their radar screens at the nation's airports, telling arriving and departing traffic what to do. This is a very incomplete part of the picture. The FAA realised that the airspace over the United States would at any time have many different kinds of planes, flying for many different purposes, in a variety of weather conditions, and the same kind of structure was needed to accommodate all of them. To meet this challenge, the following elements were put into effect. First, ATC extends over virtually the entire United States. In general, from 365m above the ground and higher, the entire country is blanketed by controlled airspace. In certain areas, mainly near airports, controlled airspace extends down to 215m above the ground, and, in the immediate vicinity of an airport, all the way down to the surface. Controlled airspace is that airspace in which FAA regulations apply. Elsewhere, in uncontrolled airspace, pilots are bound by fewer regulations. In this way, the recreational pilot who simply wishes to go flying for a while without all the restrictions imposed by the FAA has only to stay in uncontrolled airspace, below 365m, while the pilot who does want the protection afforded by ATC can easily enter the controlled airspace. The FAA then recognised two types of operating environments. In good meteorological conditions, flying would be permitted under Visual Flight Rules (VFR), which suggests a strong reliance on visual cues to maintain an acceptable level of safety. Poor visibility necessitated a set of Instrumental Flight Rules (IFR), under which the pilot relied on altitude and navigational information provided by the plane's instrument panel to fly safely.
On a clear day, a pilot in controlled airspace can choose a VFR or IFR flight plan, and the FAA regulations were devised in a way which accommodates both VFR and IFR operations in the same airspace. However, a pilot can only choose to fly IFR if they possess an instrument rating which is above and beyond the basic pilot's license that must also be held. Controlled airspace is divided into several different types, designated by letters of the alphabet. Uncontrolled airspace is designated Class F, while controlled airspace below 5,490m above sea level and not in the vicinity of an airport is Class E. All airspace above 5,490m is designated Class A. The reason for the division of Class E and Class A airspace stems from the type of planes operating in them. Generally, Class E airspace is where one finds general aviation aircraft (few of which can climb above 5,490m anyway), and commercial turboprop aircraft. Above 5,490m is the realm of the heavy jets, since jet engines operate more efficiently at higher altitudes. The difference between Class E and A airspace is that in Class A, all operations are IFR, and pilots must be instrument-rated, that is, skilled and licensed in aircraft instrumentation. This is because ATC control of the entire space is essential. Three other types of airspace, Classes D, C and B, govern the vicinity of airports. These correspond roughly to small municipal, medium-sized metropolitan and major metropolitan airports respectively, and encompass an increasingly rigorous set of regulations. For example, all a VFR pilot has to do to enter Class C airspace is establish two-way radio contact with ATC. No explicit permission from ATC to enter is needed, although the pilot must continue to obey all regulations governing VFR flight. To enter Class B airspace, such as on approach to a major metropolitan airport, an explicit ATC clearance is required. The private pilot who cruises without permission into this airspace risks losing their license.
Some improvements were made in radio communication during World War II.
e
id_404
AIR TRAFFIC CONTROL IN THE USA. An accident that occurred in the skies over the Grand Canyon in 1956 resulted in the establishment of the Federal Aviation Administration (FAA) to regulate and oversee the operation of aircraft in the skies over the United States, which were becoming quite congested. The resulting structure of air traffic control has greatly increased the safety of flight in the United States, and similar air traffic control procedures are also in place over much of the rest of the world. Rudimentary air traffic control (ATC) existed well before the Grand Canyon disaster. As early as the 1920s, the earliest air traffic controllers manually guided aircraft in the vicinity of the airports, using lights and flags, while beacons and flashing lights were placed along cross-country routes to establish the earliest airways. However, this purely visual system was useless in bad weather, and, by the 1930s, radio communication was coming into use for ATC. The first region to have something approximating today's ATC was New York City, with other major metropolitan areas following soon after. In the 1940s, ATC centres could and did take advantage of the newly developed radar and improved radio communication brought about by the Second World War, but the system remained rudimentary. It was only after the creation of the FAA that full-scale regulation of America's airspace took place, and this was fortuitous, for the advent of the jet engine suddenly resulted in a large number of very fast planes, reducing pilots' margin of error and practically demanding some set of rules to keep everyone well separated and operating safely in the air. Many people think that ATC consists of a row of controllers sitting in front of their radar screens at the nation's airports, telling arriving and departing traffic what to do. This is a very incomplete part of the picture. The FAA realised that the airspace over the United States would at any time have many different kinds of planes, flying for many different purposes, in a variety of weather conditions, and the same kind of structure was needed to accommodate all of them. To meet this challenge, the following elements were put into effect. First, ATC extends over virtually the entire United States. In general, from 365m above the ground and higher, the entire country is blanketed by controlled airspace. In certain areas, mainly near airports, controlled airspace extends down to 215m above the ground, and, in the immediate vicinity of an airport, all the way down to the surface. Controlled airspace is that airspace in which FAA regulations apply. Elsewhere, in uncontrolled airspace, pilots are bound by fewer regulations. In this way, the recreational pilot who simply wishes to go flying for a while without all the restrictions imposed by the FAA has only to stay in uncontrolled airspace, below 365m, while the pilot who does want the protection afforded by ATC can easily enter the controlled airspace. The FAA then recognised two types of operating environments. In good meteorological conditions, flying would be permitted under Visual Flight Rules (VFR), which suggests a strong reliance on visual cues to maintain an acceptable level of safety. Poor visibility necessitated a set of Instrumental Flight Rules (IFR), under which the pilot relied on altitude and navigational information provided by the plane's instrument panel to fly safely.
On a clear day, a pilot in controlled airspace can choose a VFR or IFR flight plan, and the FAA regulations were devised in a way which accommodates both VFR and IFR operations in the same airspace. However, a pilot can only choose to fly IFR if they possess an instrument rating which is above and beyond the basic pilot's license that must also be held. Controlled airspace is divided into several different types, designated by letters of the alphabet. Uncontrolled airspace is designated Class F, while controlled airspace below 5,490m above sea level and not in the vicinity of an airport is Class E. All airspace above 5,490m is designated Class A. The reason for the division of Class E and Class A airspace stems from the type of planes operating in them. Generally, Class E airspace is where one finds general aviation aircraft (few of which can climb above 5,490m anyway), and commercial turboprop aircraft. Above 5,490m is the realm of the heavy jets, since jet engines operate more efficiently at higher altitudes. The difference between Class E and A airspace is that in Class A, all operations are IFR, and pilots must be instrument-rated, that is, skilled and licensed in aircraft instrumentation. This is because ATC control of the entire space is essential. Three other types of airspace, Classes D, C and B, govern the vicinity of airports. These correspond roughly to small municipal, medium-sized metropolitan and major metropolitan airports respectively, and encompass an increasingly rigorous set of regulations. For example, all a VFR pilot has to do to enter Class C airspace is establish two-way radio contact with ATC. No explicit permission from ATC to enter is needed, although the pilot must continue to obey all regulations governing VFR flight. To enter Class B airspace, such as on approach to a major metropolitan airport, an explicit ATC clearance is required. The private pilot who cruises without permission into this airspace risks losing their license.
Beacons and flashing lights are still used by ATC today.
n
id_405
AIR TRAFFIC CONTROL IN THE USA. An accident that occurred in the skies over the Grand Canyon in 1956 resulted in the establishment of the Federal Aviation Administration (FAA) to regulate and oversee the operation of aircraft in the skies over the United States, which were becoming quite congested. The resulting structure of air traffic control has greatly increased the safety of flight in the United States, and similar air traffic control procedures are also in place over much of the rest of the world. Rudimentary air traffic control (ATC) existed well before the Grand Canyon disaster. As early as the 1920s, the earliest air traffic controllers manually guided aircraft in the vicinity of the airports, using lights and flags, while beacons and flashing lights were placed along cross-country routes to establish the earliest airways. However, this purely visual system was useless in bad weather, and, by the 1930s, radio communication was coming into use for ATC. The first region to have something approximating today's ATC was New York City, with other major metropolitan areas following soon after. In the 1940s, ATC centres could and did take advantage of the newly developed radar and improved radio communication brought about by the Second World War, but the system remained rudimentary. It was only after the creation of the FAA that full-scale regulation of America's airspace took place, and this was fortuitous, for the advent of the jet engine suddenly resulted in a large number of very fast planes, reducing pilots' margin of error and practically demanding some set of rules to keep everyone well separated and operating safely in the air. Many people think that ATC consists of a row of controllers sitting in front of their radar screens at the nation's airports, telling arriving and departing traffic what to do. This is a very incomplete part of the picture. The FAA realised that the airspace over the United States would at any time have many different kinds of planes, flying for many different purposes, in a variety of weather conditions, and the same kind of structure was needed to accommodate all of them. To meet this challenge, the following elements were put into effect. First, ATC extends over virtually the entire United States. In general, from 365m above the ground and higher, the entire country is blanketed by controlled airspace. In certain areas, mainly near airports, controlled airspace extends down to 215m above the ground, and, in the immediate vicinity of an airport, all the way down to the surface. Controlled airspace is that airspace in which FAA regulations apply. Elsewhere, in uncontrolled airspace, pilots are bound by fewer regulations. In this way, the recreational pilot who simply wishes to go flying for a while without all the restrictions imposed by the FAA has only to stay in uncontrolled airspace, below 365m, while the pilot who does want the protection afforded by ATC can easily enter the controlled airspace. The FAA then recognised two types of operating environments. In good meteorological conditions, flying would be permitted under Visual Flight Rules (VFR), which suggests a strong reliance on visual cues to maintain an acceptable level of safety. Poor visibility necessitated a set of Instrumental Flight Rules (IFR), under which the pilot relied on altitude and navigational information provided by the plane's instrument panel to fly safely.
On a clear day, a pilot in controlled airspace can choose a VFR or IFR flight plan, and the FAA regulations were devised in a way which accommodates both VFR and IFR operations in the same airspace. However, a pilot can only choose to fly IFR if they possess an instrument rating which is above and beyond the basic pilot's license that must also be held. Controlled airspace is divided into several different types, designated by letters of the alphabet. Uncontrolled airspace is designated Class F, while controlled airspace below 5,490m above sea level and not in the vicinity of an airport is Class E. All airspace above 5,490m is designated Class A. The reason for the division of Class E and Class A airspace stems from the type of planes operating in them. Generally, Class E airspace is where one finds general aviation aircraft (few of which can climb above 5,490m anyway), and commercial turboprop aircraft. Above 5,490m is the realm of the heavy jets, since jet engines operate more efficiently at higher altitudes. The difference between Class E and A airspace is that in Class A, all operations are IFR, and pilots must be instrument-rated, that is, skilled and licensed in aircraft instrumentation. This is because ATC control of the entire space is essential. Three other types of airspace, Classes D, C and B, govern the vicinity of airports. These correspond roughly to small municipal, medium-sized metropolitan and major metropolitan airports respectively, and encompass an increasingly rigorous set of regulations. For example, all a VFR pilot has to do to enter Class C airspace is establish two-way radio contact with ATC. No explicit permission from ATC to enter is needed, although the pilot must continue to obey all regulations governing VFR flight. To enter Class B airspace, such as on approach to a major metropolitan airport, an explicit ATC clearance is required. The private pilot who cruises without permission into this airspace risks losing their license.
Air Traffic Control started after the Grand Canyon crash in 1956.
c
id_406
AIR TRAFFIC CONTROL IN THE USA. An accident that occurred in the skies over the Grand Canyon in 1956 resulted in the establishment of the Federal Aviation Administration (FAA) to regulate and oversee the operation of aircraft in the skies over the United States, which were becoming quite congested. The resulting structure of air traffic control has greatly increased the safety of flight in the United States, and similar air traffic control procedures are also in place over much of the rest of the world. Rudimentary air traffic control (ATC) existed well before the Grand Canyon disaster. As early as the 1920s, the earliest air traffic controllers manually guided aircraft in the vicinity of the airports, using lights and flags, while beacons and flashing lights were placed along cross-country routes to establish the earliest airways. However, this purely visual system was useless in bad weather, and, by the 1930s, radio communication was coming into use for ATC. The first region to have something approximating today's ATC was New York City, with other major metropolitan areas following soon after. In the 1940s, ATC centres could and did take advantage of the newly developed radar and improved radio communication brought about by the Second World War, but the system remained rudimentary. It was only after the creation of the FAA that full-scale regulation of America's airspace took place, and this was fortuitous, for the advent of the jet engine suddenly resulted in a large number of very fast planes, reducing pilots' margin of error and practically demanding some set of rules to keep everyone well separated and operating safely in the air. Many people think that ATC consists of a row of controllers sitting in front of their radar screens at the nation's airports, telling arriving and departing traffic what to do. This is a very incomplete part of the picture. The FAA realised that the airspace over the United States would at any time have many different kinds of planes, flying for many different purposes, in a variety of weather conditions, and the same kind of structure was needed to accommodate all of them. To meet this challenge, the following elements were put into effect. First, ATC extends over virtually the entire United States. In general, from 365m above the ground and higher, the entire country is blanketed by controlled airspace. In certain areas, mainly near airports, controlled airspace extends down to 215m above the ground, and, in the immediate vicinity of an airport, all the way down to the surface. Controlled airspace is that airspace in which FAA regulations apply. Elsewhere, in uncontrolled airspace, pilots are bound by fewer regulations. In this way, the recreational pilot who simply wishes to go flying for a while without all the restrictions imposed by the FAA has only to stay in uncontrolled airspace, below 365m, while the pilot who does want the protection afforded by ATC can easily enter the controlled airspace. The FAA then recognised two types of operating environments. In good meteorological conditions, flying would be permitted under Visual Flight Rules (VFR), which suggests a strong reliance on visual cues to maintain an acceptable level of safety. Poor visibility necessitated a set of Instrumental Flight Rules (IFR), under which the pilot relied on altitude and navigational information provided by the plane's instrument panel to fly safely.
On a clear day, a pilot in controlled airspace can choose a VFR or IFR flight plan, and the FAA regulations were devised in a way which accommodates both VFR and IFR operations in the same airspace. However, a pilot can only choose to fly IFR if they possess an instrument rating which is above and beyond the basic pilot's license that must also be held. Controlled airspace is divided into several different types, designated by letters of the alphabet. Uncontrolled airspace is designated Class F, while controlled airspace below 5,490m above sea level and not in the vicinity of an airport is Class E. All airspace above 5,490m is designated Class A. The reason for the division of Class E and Class A airspace stems from the type of planes operating in them. Generally, Class E airspace is where one finds general aviation aircraft (few of which can climb above 5,490m anyway), and commercial turboprop aircraft. Above 5,490m is the realm of the heavy jets, since jet engines operate more efficiently at higher altitudes. The difference between Class E and A airspace is that in Class A, all operations are IFR, and pilots must be instrument-rated, that is, skilled and licensed in aircraft instrumentation. This is because ATC control of the entire space is essential. Three other types of airspace, Classes D, C and B, govern the vicinity of airports. These correspond roughly to small municipal, medium-sized metropolitan and major metropolitan airports respectively, and encompass an increasingly rigorous set of regulations. For example, all a VFR pilot has to do to enter Class C airspace is establish two-way radio contact with ATC. No explicit permission from ATC to enter is needed, although the pilot must continue to obey all regulations governing VFR flight. To enter Class B airspace, such as on approach to a major metropolitan airport, an explicit ATC clearance is required. The private pilot who cruises without permission into this airspace risks losing their license.
The FAA was created as a result of the introduction of the jet engine.
e
id_407
AIR TRAFFIC CONTROL IN THE USA. An accident that occurred in the skies over the Grand Canyon in 1956 resulted in the establishment of the Federal Aviation Administration (FAA) to regulate and oversee the operation of aircraft in the skies over the United States, which were becoming quite congested. The resulting structure of air traffic control has greatly increased the safety of flight in the United States, and similar air traffic control procedures are also in place over much of the rest of the world. Rudimentary air traffic control (ATC) existed well before the Grand Canyon disaster. As early as the 1920s, the earliest air traffic controllers manually guided aircraft in the vicinity of the airports, using lights and flags, while beacons and flashing lights were placed along cross-country routes to establish the earliest airways. However, this purely visual system was useless in bad weather, and, by the 1930s, radio communication was coming into use for ATC. The first region to have something approximating today's ATC was New York City, with other major metropolitan areas following soon after. In the 1940s, ATC centres could and did take advantage of the newly developed radar and improved radio communication brought about by the Second World War, but the system remained rudimentary. It was only after the creation of the FAA that full-scale regulation of America's airspace took place, and this was fortuitous, for the advent of the jet engine suddenly resulted in a large number of very fast planes, reducing pilots' margin of error and practically demanding some set of rules to keep everyone well separated and operating safely in the air. Many people think that ATC consists of a row of controllers sitting in front of their radar screens at the nation's airports, telling arriving and departing traffic what to do. This is a very incomplete part of the picture. The FAA realised that the airspace over the United States would at any time have many different kinds of planes, flying for many different purposes, in a variety of weather conditions, and the same kind of structure was needed to accommodate all of them. To meet this challenge, the following elements were put into effect. First, ATC extends over virtually the entire United States. In general, from 365m above the ground and higher, the entire country is blanketed by controlled airspace. In certain areas, mainly near airports, controlled airspace extends down to 215m above the ground, and, in the immediate vicinity of an airport, all the way down to the surface. Controlled airspace is that airspace in which FAA regulations apply. Elsewhere, in uncontrolled airspace, pilots are bound by fewer regulations. In this way, the recreational pilot who simply wishes to go flying for a while without all the restrictions imposed by the FAA has only to stay in uncontrolled airspace, below 365m, while the pilot who does want the protection afforded by ATC can easily enter the controlled airspace. The FAA then recognised two types of operating environments. In good meteorological conditions, flying would be permitted under Visual Flight Rules (VFR), which suggests a strong reliance on visual cues to maintain an acceptable level of safety. Poor visibility necessitated a set of Instrumental Flight Rules (IFR), under which the pilot relied on altitude and navigational information provided by the plane's instrument panel to fly safely.
On a clear day, a pilot in controlled airspace can choose a VFR or IFR flight plan, and the FAA regulations were devised in a way which accommodates both VFR and IFR operations in the same airspace. However, a pilot can only choose to fly IFR if they possess an instrument rating which is above and beyond the basic pilot's license that must also be held. Controlled airspace is divided into several different types, designated by letters of the alphabet. Uncontrolled airspace is designated Class F, while controlled airspace below 5,490m above sea level and not in the vicinity of an airport is Class E. All airspace above 5,490m is designated Class A. The reason for the division of Class E and Class A airspace stems from the type of planes operating in them. Generally, Class E airspace is where one finds general aviation aircraft (few of which can climb above 5,490m anyway), and commercial turboprop aircraft. Above 5,490m is the realm of the heavy jets, since jet engines operate more efficiently at higher altitudes. The difference between Class E and A airspace is that in Class A, all operations are IFR, and pilots must be instrument-rated, that is, skilled and licensed in aircraft instrumentation. This is because ATC control of the entire space is essential. Three other types of airspace, Classes D, C and B, govern the vicinity of airports. These correspond roughly to small municipal, medium-sized metropolitan and major metropolitan airports respectively, and encompass an increasingly rigorous set of regulations. For example, all a VFR pilot has to do to enter Class C airspace is establish two-way radio contact with ATC. No explicit permission from ATC to enter is needed, although the pilot must continue to obey all regulations governing VFR flight. To enter Class B airspace, such as on approach to a major metropolitan airport, an explicit ATC clearance is required. The private pilot who cruises without permission into this airspace risks losing their license.
A pilot entering Class C airspace is flying over an average-sized city.
e
id_408
AIR TRAFFIC CONTROL IN THE USA. An accident that occurred in the skies over the Grand Canyon in 1956 resulted in the establishment of the Federal Aviation Administration (FAA) to regulate and oversee the operation of aircraft in the skies over the United States, which were becoming quite congested. The resulting structure of air traffic control has greatly increased the safety of flight in the United States, and similar air traffic control procedures are also in place over much of the rest of the world. Rudimentary air traffic control (ATC) existed well before the Grand Canyon disaster. As early as the 1920s, the earliest air traffic controllers manually guided aircraft in the vicinity of the airports, using lights and flags, while beacons and flashing lights were placed along cross-country routes to establish the earliest airways. However, this purely visual system was useless in bad weather, and, by the 1930s, radio communication was coming into use for ATC. The first region to have something approximating today's ATC was New York City, with other major metropolitan areas following soon after. In the 1940s, ATC centres could and did take advantage of the newly developed radar and improved radio communication brought about by the Second World War, but the system remained rudimentary. It was only after the creation of the FAA that full-scale regulation of America's airspace took place, and this was fortuitous, for the advent of the jet engine suddenly resulted in a large number of very fast planes, reducing pilots' margin of error and practically demanding some set of rules to keep everyone well separated and operating safely in the air. Many people think that ATC consists of a row of controllers sitting in front of their radar screens at the nation's airports, telling arriving and departing traffic what to do. This is a very incomplete part of the picture. The FAA realised that the airspace over the United States would at any time have many different kinds of planes, flying for many different purposes, in a variety of weather conditions, and the same kind of structure was needed to accommodate all of them. To meet this challenge, the following elements were put into effect. First, ATC extends over virtually the entire United States. In general, from 365m above the ground and higher, the entire country is blanketed by controlled airspace. In certain areas, mainly near airports, controlled airspace extends down to 215m above the ground, and, in the immediate vicinity of an airport, all the way down to the surface. Controlled airspace is that airspace in which FAA regulations apply. Elsewhere, in uncontrolled airspace, pilots are bound by fewer regulations. In this way, the recreational pilot who simply wishes to go flying for a while without all the restrictions imposed by the FAA has only to stay in uncontrolled airspace, below 365m, while the pilot who does want the protection afforded by ATC can easily enter the controlled airspace. The FAA then recognised two types of operating environments. In good meteorological conditions, flying would be permitted under Visual Flight Rules (VFR), which suggests a strong reliance on visual cues to maintain an acceptable level of safety. Poor visibility necessitated a set of Instrumental Flight Rules (IFR), under which the pilot relied on altitude and navigational information provided by the plane's instrument panel to fly safely. 
On a clear day, a pilot in controlled airspace can choose a VFR or IFR flight plan, and the FAA regulations were devised in a way which accommodates both VFR and IFR operations in the same airspace. However, a pilot can only choose to fly IFR if they possess an instrument rating which is above and beyond the basic pilot's license that must also be held. Controlled airspace is divided into several different types, designated by letters of the alphabet. Uncontrolled airspace is designated Class F, while controlled airspace below 5,490m above sea level and not in the vicinity of an airport is Class E. All airspace above 5,490m is designated Class A. The reason for the division of Class E and Class A airspace stems from the type of planes operating in them. Generally, Class E airspace is where one finds general aviation aircraft (few of which can climb above 5,490m anyway), and commercial turboprop aircraft. Above 5,490m is the realm of the heavy jets, since jet engines operate more efficiently at higher altitudes. The difference between Class E and A airspace is that in Class A, all operations are IFR, and pilots must be instrument-rated, that is, skilled and licensed in aircraft instrumentation. This is because ATC control of the entire space is essential. Three other types of airspace, Classes D, C and B, govern the vicinity of airports. These correspond roughly to small municipal, medium-sized metropolitan and major metropolitan airports respectively, and encompass an increasingly rigorous set of regulations. For example, all a VFR pilot has to do to enter Class C airspace is establish two-way radio contact with ATC. No explicit permission from ATC to enter is needed, although the pilot must continue to obey all regulations governing VFR flight. To enter Class B airspace, such as on approach to a major metropolitan airport, an explicit ATC clearance is required. The private pilot who cruises without permission into this airspace risks losing their license.
Some improvements were made in radio communication during World War II.
e
id_409
AIR TRAFFIC CONTROL IN THE USA. An accident that occurred in the skies over the Grand Canyon in 1956 resulted in the establishment of the Federal Aviation Administration (FAA) to regulate and oversee the operation of aircraft in the skies over the United States, which were becoming quite congested. The resulting structure of air traffic control has greatly increased the safety of flight in the United States, and similar air traffic control procedures are also in place over much of the rest of the world. Rudimentary air traffic control (ATC) existed well before the Grand Canyon disaster. As early as the 1920s, the earliest air traffic controllers manually guided aircraft in the vicinity of the airports, using lights and flags, while beacons and flashing lights were placed along cross-country routes to establish the earliest airways. However, this purely visual system was useless in bad weather, and, by the 1930s, radio communication was coming into use for ATC. The first region to have something approximating today's ATC was New York City, with other major metropolitan areas following soon after. In the 1940s, ATC centres could and did take advantage of the newly developed radar and improved radio communication brought about by the Second World War, but the system remained rudimentary. It was only after the creation of the FAA that full-scale regulation of America's airspace took place, and this was fortuitous, for the advent of the jet engine suddenly resulted in a large number of very fast planes, reducing pilots' margin of error and practically demanding some set of rules to keep everyone well separated and operating safely in the air. Many people think that ATC consists of a row of controllers sitting in front of their radar screens at the nation's airports, telling arriving and departing traffic what to do. This is a very incomplete part of the picture. The FAA realised that the airspace over the United States would at any time have many different kinds of planes, flying for many different purposes, in a variety of weather conditions, and the same kind of structure was needed to accommodate all of them. To meet this challenge, the following elements were put into effect. First, ATC extends over virtually the entire United States. In general, from 365m above the ground and higher, the entire country is blanketed by controlled airspace. In certain areas, mainly near airports, controlled airspace extends down to 215m above the ground, and, in the immediate vicinity of an airport, all the way down to the surface. Controlled airspace is that airspace in which FAA regulations apply. Elsewhere, in uncontrolled airspace, pilots are bound by fewer regulations. In this way, the recreational pilot who simply wishes to go flying for a while without all the restrictions imposed by the FAA has only to stay in uncontrolled airspace, below 365m, while the pilot who does want the protection afforded by ATC can easily enter the controlled airspace. The FAA then recognised two types of operating environments. In good meteorological conditions, flying would be permitted under Visual Flight Rules (VFR), which suggests a strong reliance on visual cues to maintain an acceptable level of safety. Poor visibility necessitated a set of Instrumental Flight Rules (IFR), under which the pilot relied on altitude and navigational information provided by the plane's instrument panel to fly safely. 
On a clear day, a pilot in controlled airspace can choose a VFR or IFR flight plan, and the FAA regulations were devised in a way which accommodates both VFR and IFR operations in the same airspace. However, a pilot can only choose to fly IFR if they possess an instrument rating which is above and beyond the basic pilot's license that must also be held. Controlled airspace is divided into several different types, designated by letters of the alphabet. Uncontrolled airspace is designated Class F, while controlled airspace below 5,490m above sea level and not in the vicinity of an airport is Class E. All airspace above 5,490m is designated Class A. The reason for the division of Class E and Class A airspace stems from the type of planes operating in them. Generally, Class E airspace is where one finds general aviation aircraft (few of which can climb above 5,490m anyway), and commercial turboprop aircraft. Above 5,490m is the realm of the heavy jets, since jet engines operate more efficiently at higher altitudes. The difference between Class E and A airspace is that in Class A, all operations are IFR, and pilots must be instrument-rated, that is, skilled and licensed in aircraft instrumentation. This is because ATC control of the entire space is essential. Three other types of airspace, Classes D, C and B, govern the vicinity of airports. These correspond roughly to small municipal, medium-sized metropolitan and major metropolitan airports respectively, and encompass an increasingly rigorous set of regulations. For example, all a VFR pilot has to do to enter Class C airspace is establish two-way radio contact with ATC. No explicit permission from ATC to enter is needed, although the pilot must continue to obey all regulations governing VFR flight. To enter Class B airspace, such as on approach to a major metropolitan airport, an explicit ATC clearance is required. The private pilot who cruises without permission into this airspace risks losing their license.
Class F airspace is airspace which is below 365m and not near airports.
e
id_410
AIR TRAFFIC CONTROL IN THE USA. An accident that occurred in the skies over the Grand Canyon in 1956 resulted in the establishment of the Federal Aviation Administration (FAA) to regulate and oversee the operation of aircraft in the skies over the United States, which were becoming quite congested. The resulting structure of air traffic control has greatly increased the safety of flight in the United States, and similar air traffic control procedures are also in place over much of the rest of the world. Rudimentary air traffic control (ATC) existed well before the Grand Canyon disaster. As early as the 1920s, the earliest air traffic controllers manually guided aircraft in the vicinity of the airports, using lights and flags, while beacons and flashing lights were placed along cross-country routes to establish the earliest airways. However, this purely visual system was useless in bad weather, and, by the 1930s, radio communication was coming into use for ATC. The first region to have something approximating today's ATC was New York City, with other major metropolitan areas following soon after. In the 1940s, ATC centres could and did take advantage of the newly developed radar and improved radio communication brought about by the Second World War, but the system remained rudimentary. It was only after the creation of the FAA that full-scale regulation of America's airspace took place, and this was fortuitous, for the advent of the jet engine suddenly resulted in a large number of very fast planes, reducing pilots' margin of error and practically demanding some set of rules to keep everyone well separated and operating safely in the air. Many people think that ATC consists of a row of controllers sitting in front of their radar screens at the nation's airports, telling arriving and departing traffic what to do. This is a very incomplete part of the picture. The FAA realised that the airspace over the United States would at any time have many different kinds of planes, flying for many different purposes, in a variety of weather conditions, and the same kind of structure was needed to accommodate all of them. To meet this challenge, the following elements were put into effect. First, ATC extends over virtually the entire United States. In general, from 365m above the ground and higher, the entire country is blanketed by controlled airspace. In certain areas, mainly near airports, controlled airspace extends down to 215m above the ground, and, in the immediate vicinity of an airport, all the way down to the surface. Controlled airspace is that airspace in which FAA regulations apply. Elsewhere, in uncontrolled airspace, pilots are bound by fewer regulations. In this way, the recreational pilot who simply wishes to go flying for a while without all the restrictions imposed by the FAA has only to stay in uncontrolled airspace, below 365m, while the pilot who does want the protection afforded by ATC can easily enter the controlled airspace. The FAA then recognised two types of operating environments. In good meteorological conditions, flying would be permitted under Visual Flight Rules (VFR), which suggests a strong reliance on visual cues to maintain an acceptable level of safety. Poor visibility necessitated a set of Instrumental Flight Rules (IFR), under which the pilot relied on altitude and navigational information provided by the plane's instrument panel to fly safely. 
On a clear day, a pilot in controlled airspace can choose a VFR or IFR flight plan, and the FAA regulations were devised in a way which accommodates both VFR and IFR operations in the same airspace. However, a pilot can only choose to fly IFR if they possess an instrument rating which is above and beyond the basic pilot's license that must also be held. Controlled airspace is divided into several different types, designated by letters of the alphabet. Uncontrolled airspace is designated Class F, while controlled airspace below 5,490m above sea level and not in the vicinity of an airport is Class E. All airspace above 5,490m is designated Class A. The reason for the division of Class E and Class A airspace stems from the type of planes operating in them. Generally, Class E airspace is where one finds general aviation aircraft (few of which can climb above 5,490m anyway), and commercial turboprop aircraft. Above 5,490m is the realm of the heavy jets, since jet engines operate more efficiently at higher altitudes. The difference between Class E and A airspace is that in Class A, all operations are IFR, and pilots must be instrument-rated, that is, skilled and licensed in aircraft instrumentation. This is because ATC control of the entire space is essential. Three other types of airspace, Classes D, C and B, govern the vicinity of airports. These correspond roughly to small municipal, medium-sized metropolitan and major metropolitan airports respectively, and encompass an increasingly rigorous set of regulations. For example, all a VFR pilot has to do to enter Class C airspace is establish two-way radio contact with ATC. No explicit permission from ATC to enter is needed, although the pilot must continue to obey all regulations governing VFR flight. To enter Class B airspace, such as on approach to a major metropolitan airport, an explicit ATC clearance is required. The private pilot who cruises without permission into this airspace risks losing their license.
All aircraft in Class E airspace must use IFR.
c
id_411
AIR TRAFFIC CONTROL IN THE USA. An accident that occurred in the skies over the Grand Canyon in 1956 resulted in the establishment of the Federal Aviation Administration (FAA) to regulate and oversee the operation of aircraft in the skies over the United States, which were becoming quite congested. The resulting structure of air traffic control has greatly increased the safety of flight in the United States, and similar air traffic control procedures are also in place over much of the rest of the world. Rudimentary air traffic control (ATC) existed well before the Grand Canyon disaster. As early as the 1920s, the earliest air traffic controllers manually guided aircraft in the vicinity of the airports, using lights and flags, while beacons and flashing lights were placed along cross-country routes to establish the earliest airways. However, this purely visual system was useless in bad weather, and, by the 1930s, radio communication was coming into use for ATC. The first region to have something approximating today's ATC was New York City, with other major metropolitan areas following soon after. In the 1940s, ATC centres could and did take advantage of the newly developed radar and improved radio communication brought about by the Second World War, but the system remained rudimentary. It was only after the creation of the FAA that full-scale regulation of America's airspace took place, and this was fortuitous, for the advent of the jet engine suddenly resulted in a large number of very fast planes, reducing pilots' margin of error and practically demanding some set of rules to keep everyone well separated and operating safely in the air. Many people think that ATC consists of a row of controllers sitting in front of their radar screens at the nation's airports, telling arriving and departing traffic what to do. This is a very incomplete part of the picture. The FAA realised that the airspace over the United States would at any time have many different kinds of planes, flying for many different purposes, in a variety of weather conditions, and the same kind of structure was needed to accommodate all of them. To meet this challenge, the following elements were put into effect. First, ATC extends over virtually the entire United States. In general, from 365m above the ground and higher, the entire country is blanketed by controlled airspace. In certain areas, mainly near airports, controlled airspace extends down to 215m above the ground, and, in the immediate vicinity of an airport, all the way down to the surface. Controlled airspace is that airspace in which FAA regulations apply. Elsewhere, in uncontrolled airspace, pilots are bound by fewer regulations. In this way, the recreational pilot who simply wishes to go flying for a while without all the restrictions imposed by the FAA has only to stay in uncontrolled airspace, below 365m, while the pilot who does want the protection afforded by ATC can easily enter the controlled airspace. The FAA then recognised two types of operating environments. In good meteorological conditions, flying would be permitted under Visual Flight Rules (VFR), which suggests a strong reliance on visual cues to maintain an acceptable level of safety. Poor visibility necessitated a set of Instrument Flight Rules (IFR), under which the pilot relied on altitude and navigational information provided by the plane's instrument panel to fly safely.
On a clear day, a pilot in controlled airspace can choose a VFR or IFR flight plan, and the FAA regulations were devised in a way which accommodates both VFR and IFR operations in the same airspace. However, a pilot can only choose to fly IFR if they possess an instrument rating which is above and beyond the basic pilot's license that must also be held. Controlled airspace is divided into several different types, designated by letters of the alphabet. Uncontrolled airspace is designated Class F, while controlled airspace below 5,490m above sea level and not in the vicinity of an airport is Class E. All airspace above 5,490m is designated Class A. The reason for the division of Class E and Class A airspace stems from the type of planes operating in them. Generally, Class E airspace is where one finds general aviation aircraft (few of which can climb above 5,490m anyway), and commercial turboprop aircraft. Above 5,490m is the realm of the heavy jets, since jet engines operate more efficiently at higher altitudes. The difference between Class E and A airspace is that in Class A, all operations are IFR, and pilots must be instrument-rated, that is, skilled and licensed in aircraft instrumentation. This is because ATC control of the entire space is essential. Three other types of airspace, Classes D, C and B, govern the vicinity of airports. These correspond roughly to small municipal, medium-sized metropolitan and major metropolitan airports respectively, and encompass an increasingly rigorous set of regulations. For example, all a VFR pilot has to do to enter Class C airspace is establish two-way radio contact with ATC. No explicit permission from ATC to enter is needed, although the pilot must continue to obey all regulations governing VFR flight. To enter Class B airspace, such as on approach to a major metropolitan airport, an explicit ATC clearance is required. The private pilot who cruises without permission into this airspace risks losing their license.
A pilot entering Class C airspace is flying over an average-sized city.
e
id_412
AIR TRAFFIC CONTROL IN THE USA. An accident that occurred in the skies over the Grand Canyon in 1956 resulted in the establishment of the Federal Aviation Administration (FAA) to regulate and oversee the operation of aircraft in the skies over the United States, which were becoming quite congested. The resulting structure of air traffic control has greatly increased the safety of flight in the United States, and similar air traffic control procedures are also in place over much of the rest of the world. Rudimentary air traffic control (ATC) existed well before the Grand Canyon disaster. As early as the 1920s, the earliest air traffic controllers manually guided aircraft in the vicinity of the airports, using lights and flags, while beacons and flashing lights were placed along cross-country routes to establish the earliest airways. However, this purely visual system was useless in bad weather, and, by the 1930s, radio communication was coming into use for ATC. The first region to have something approximating today's ATC was New York City, with other major metropolitan areas following soon after. In the 1940s, ATC centres could and did take advantage of the newly developed radar and improved radio communication brought about by the Second World War, but the system remained rudimentary. It was only after the creation of the FAA that full-scale regulation of America's airspace took place, and this was fortuitous, for the advent of the jet engine suddenly resulted in a large number of very fast planes, reducing pilots' margin of error and practically demanding some set of rules to keep everyone well separated and operating safely in the air. Many people think that ATC consists of a row of controllers sitting in front of their radar screens at the nation's airports, telling arriving and departing traffic what to do. This is a very incomplete part of the picture. The FAA realised that the airspace over the United States would at any time have many different kinds of planes, flying for many different purposes, in a variety of weather conditions, and the same kind of structure was needed to accommodate all of them. To meet this challenge, the following elements were put into effect. First, ATC extends over virtually the entire United States. In general, from 365m above the ground and higher, the entire country is blanketed by controlled airspace. In certain areas, mainly near airports, controlled airspace extends down to 215m above the ground, and, in the immediate vicinity of an airport, all the way down to the surface. Controlled airspace is that airspace in which FAA regulations apply. Elsewhere, in uncontrolled airspace, pilots are bound by fewer regulations. In this way, the recreational pilot who simply wishes to go flying for a while without all the restrictions imposed by the FAA has only to stay in uncontrolled airspace, below 365m, while the pilot who does want the protection afforded by ATC can easily enter the controlled airspace. The FAA then recognised two types of operating environments. In good meteorological conditions, flying would be permitted under Visual Flight Rules (VFR), which suggests a strong reliance on visual cues to maintain an acceptable level of safety. Poor visibility necessitated a set of Instrument Flight Rules (IFR), under which the pilot relied on altitude and navigational information provided by the plane's instrument panel to fly safely.
On a clear day, a pilot in controlled airspace can choose a VFR or IFR flight plan, and the FAA regulations were devised in a way which accommodates both VFR and IFR operations in the same airspace. However, a pilot can only choose to fly IFR if they possess an instrument rating which is above and beyond the basic pilot's license that must also be held. Controlled airspace is divided into several different types, designated by letters of the alphabet. Uncontrolled airspace is designated Class F, while controlled airspace below 5,490m above sea level and not in the vicinity of an airport is Class E. All airspace above 5,490m is designated Class A. The reason for the division of Class E and Class A airspace stems from the type of planes operating in them. Generally, Class E airspace is where one finds general aviation aircraft (few of which can climb above 5,490m anyway), and commercial turboprop aircraft. Above 5,490m is the realm of the heavy jets, since jet engines operate more efficiently at higher altitudes. The difference between Class E and A airspace is that in Class A, all operations are IFR, and pilots must be instrument-rated, that is, skilled and licensed in aircraft instrumentation. This is because ATC control of the entire space is essential. Three other types of airspace, Classes D, C and B, govern the vicinity of airports. These correspond roughly to small municipal, medium-sized metropolitan and major metropolitan airports respectively, and encompass an increasingly rigorous set of regulations. For example, all a VFR pilot has to do to enter Class C airspace is establish two-way radio contact with ATC. No explicit permission from ATC to enter is needed, although the pilot must continue to obey all regulations governing VFR flight. To enter Class B airspace, such as on approach to a major metropolitan airport, an explicit ATC clearance is required. The private pilot who cruises without permission into this airspace risks losing their license.
Air Traffic Control started after the Grand Canyon crash in 1956.
c
id_413
AIR TRAFFIC CONTROL IN THE USA. An accident that occurred in the skies over the Grand Canyon in 1956 resulted in the establishment of the Federal Aviation Administration (FAA) to regulate and oversee the operation of aircraft in the skies over the United States, which were becoming quite congested. The resulting structure of air traffic control has greatly increased the safety of flight in the United States, and similar air traffic control procedures are also in place over much of the rest of the world. Rudimentary air traffic control (ATC) existed well before the Grand Canyon disaster. As early as the 1920s, the earliest air traffic controllers manually guided aircraft in the vicinity of the airports, using lights and flags, while beacons and flashing lights were placed along cross-country routes to establish the earliest airways. However, this purely visual system was useless in bad weather, and, by the 1930s, radio communication was coming into use for ATC. The first region to have something approximating today's ATC was New York City, with other major metropolitan areas following soon after. In the 1940s, ATC centres could and did take advantage of the newly developed radar and improved radio communication brought about by the Second World War, but the system remained rudimentary. It was only after the creation of the FAA that full-scale regulation of America's airspace took place, and this was fortuitous, for the advent of the jet engine suddenly resulted in a large number of very fast planes, reducing pilots' margin of error and practically demanding some set of rules to keep everyone well separated and operating safely in the air. Many people think that ATC consists of a row of controllers sitting in front of their radar screens at the nation's airports, telling arriving and departing traffic what to do. This is a very incomplete part of the picture. The FAA realised that the airspace over the United States would at any time have many different kinds of planes, flying for many different purposes, in a variety of weather conditions, and the same kind of structure was needed to accommodate all of them. To meet this challenge, the following elements were put into effect. First, ATC extends over virtually the entire United States. In general, from 365m above the ground and higher, the entire country is blanketed by controlled airspace. In certain areas, mainly near airports, controlled airspace extends down to 215m above the ground, and, in the immediate vicinity of an airport, all the way down to the surface. Controlled airspace is that airspace in which FAA regulations apply. Elsewhere, in uncontrolled airspace, pilots are bound by fewer regulations. In this way, the recreational pilot who simply wishes to go flying for a while without all the restrictions imposed by the FAA has only to stay in uncontrolled airspace, below 365m, while the pilot who does want the protection afforded by ATC can easily enter the controlled airspace. The FAA then recognised two types of operating environments. In good meteorological conditions, flying would be permitted under Visual Flight Rules (VFR), which suggests a strong reliance on visual cues to maintain an acceptable level of safety. Poor visibility necessitated a set of Instrument Flight Rules (IFR), under which the pilot relied on altitude and navigational information provided by the plane's instrument panel to fly safely.
On a clear day, a pilot in controlled airspace can choose a VFR or IFR flight plan, and the FAA regulations were devised in a way which accommodates both VFR and IFR operations in the same airspace. However, a pilot can only choose to fly IFR if they possess an instrument rating which is above and beyond the basic pilot's license that must also be held. Controlled airspace is divided into several different types, designated by letters of the alphabet. Uncontrolled airspace is designated Class F, while controlled airspace below 5,490m above sea level and not in the vicinity of an airport is Class E. All airspace above 5,490m is designated Class A. The reason for the division of Class E and Class A airspace stems from the type of planes operating in them. Generally, Class E airspace is where one finds general aviation aircraft (few of which can climb above 5,490m anyway), and commercial turboprop aircraft. Above 5,490m is the realm of the heavy jets, since jet engines operate more efficiently at higher altitudes. The difference between Class E and A airspace is that in Class A, all operations are IFR, and pilots must be instrument-rated, that is, skilled and licensed in aircraft instrumentation. This is because ATC control of the entire space is essential. Three other types of airspace, Classes D, C and B, govern the vicinity of airports. These correspond roughly to small municipal, medium-sized metropolitan and major metropolitan airports respectively, and encompass an increasingly rigorous set of regulations. For example, all a VFR pilot has to do to enter Class C airspace is establish two-way radio contact with ATC. No explicit permission from ATC to enter is needed, although the pilot must continue to obey all regulations governing VFR flight. To enter Class B airspace, such as on approach to a major metropolitan airport, an explicit ATC clearance is required. The private pilot who cruises without permission into this airspace risks losing their license.
Beacons and flashing lights are still used by ATC today.
n
id_414
AIR TRAFFIC CONTROL IN THE USA. An accident that occurred in the skies over the Grand Canyon in 1956 resulted in the establishment of the Federal Aviation Administration (FAA) to regulate and oversee the operation of aircraft in the skies over the United States, which were becoming quite congested. The resulting structure of air traffic control has greatly increased the safety of flight in the United States, and similar air traffic control procedures are also in place over much of the rest of the world. Rudimentary air traffic control (ATC) existed well before the Grand Canyon disaster. As early as the 1920s, the earliest air traffic controllers manually guided aircraft in the vicinity of the airports, using lights and flags, while beacons and flashing lights were placed along cross-country routes to establish the earliest airways. However, this purely visual system was useless in bad weather, and, by the 1930s, radio communication was coming into use for ATC. The first region to have something approximating today's ATC was New York City, with other major metropolitan areas following soon after. In the 1940s, ATC centres could and did take advantage of the newly developed radar and improved radio communication brought about by the Second World War, but the system remained rudimentary. It was only after the creation of the FAA that full-scale regulation of America's airspace took place, and this was fortuitous, for the advent of the jet engine suddenly resulted in a large number of very fast planes, reducing pilots' margin of error and practically demanding some set of rules to keep everyone well separated and operating safely in the air. Many people think that ATC consists of a row of controllers sitting in front of their radar screens at the nation's airports, telling arriving and departing traffic what to do. This is a very incomplete part of the picture. The FAA realised that the airspace over the United States would at any time have many different kinds of planes, flying for many different purposes, in a variety of weather conditions, and the same kind of structure was needed to accommodate all of them. To meet this challenge, the following elements were put into effect. First, ATC extends over virtually the entire United States. In general, from 365m above the ground and higher, the entire country is blanketed by controlled airspace. In certain areas, mainly near airports, controlled airspace extends down to 215m above the ground, and, in the immediate vicinity of an airport, all the way down to the surface. Controlled airspace is that airspace in which FAA regulations apply. Elsewhere, in uncontrolled airspace, pilots are bound by fewer regulations. In this way, the recreational pilot who simply wishes to go flying for a while without all the restrictions imposed by the FAA has only to stay in uncontrolled airspace, below 365m, while the pilot who does want the protection afforded by ATC can easily enter the controlled airspace. The FAA then recognised two types of operating environments. In good meteorological conditions, flying would be permitted under Visual Flight Rules (VFR), which suggests a strong reliance on visual cues to maintain an acceptable level of safety. Poor visibility necessitated a set of Instrument Flight Rules (IFR), under which the pilot relied on altitude and navigational information provided by the plane's instrument panel to fly safely.
On a clear day, a pilot in controlled airspace can choose a VFR or IFR flight plan, and the FAA regulations were devised in a way which accommodates both VFR and IFR operations in the same airspace. However, a pilot can only choose to fly IFR if they possess an instrument rating which is above and beyond the basic pilot's license that must also be held. Controlled airspace is divided into several different types, designated by letters of the alphabet. Uncontrolled airspace is designated Class F, while controlled airspace below 5,490m above sea level and not in the vicinity of an airport is Class E. All airspace above 5,490m is designated Class A. The reason for the division of Class E and Class A airspace stems from the type of planes operating in them. Generally, Class E airspace is where one finds general aviation aircraft (few of which can climb above 5,490m anyway), and commercial turboprop aircraft. Above 5,490m is the realm of the heavy jets, since jet engines operate more efficiently at higher altitudes. The difference between Class E and A airspace is that in Class A, all operations are IFR, and pilots must be instrument-rated, that is, skilled and licensed in aircraft instrumentation. This is because ATC control of the entire space is essential. Three other types of airspace, Classes D, C and B, govern the vicinity of airports. These correspond roughly to small municipal, medium-sized metropolitan and major metropolitan airports respectively, and encompass an increasingly rigorous set of regulations. For example, all a VFR pilot has to do to enter Class C airspace is establish two-way radio contact with ATC. No explicit permission from ATC to enter is needed, although the pilot must continue to obey all regulations governing VFR flight. To enter Class B airspace, such as on approach to a major metropolitan airport, an explicit ATC clearance is required. The private pilot who cruises without permission into this airspace risks losing their license.
The FAA was created as a result of the introduction of the jet engine.
c
id_415
ALBERT EINSTEIN. Albert Einstein is perhaps the best-known scientist of the 20th century. He received the Nobel Prize in Physics in 1921 and his theories of special and general relativity are of great importance to many branches of physics and astronomy. He is well known for his theories about light, matter, gravity, space and time. His most famous idea is that energy and mass are different forms of the same thing. Einstein was born in Wurttemberg, Germany, on 14th March 1879. His family was Jewish, but he was not very religious in his youth, although he became very interested in Judaism in later life. It is well documented that Einstein did not begin speaking until after the age of three. In fact, he found speaking so difficult that his family were worried that he would never start to speak. When Einstein was four years old, his father gave him a magnetic compass. It was this compass that inspired him to explore the world of science. He wanted to understand why the needle always pointed north whichever way he turned the compass. It looked as if the needle was moving itself. But the needle was inside a closed case, so no other force (such as the wind) could have been moving it. And this is how Einstein became interested in studying science and mathematics. In fact, he was so clever that at the age of 12 he taught himself Euclidean geometry. At fifteen, he went to school in Munich, which he found very boring. He finished secondary school in Aarau, Switzerland, and entered the Swiss Federal Institute of Technology in Zurich, from which he graduated in 1900. But Einstein did not like the teaching there either. He often missed classes and used the time to study physics on his own or to play the violin instead. However, he was able to pass his examinations by studying the notes of a classmate. His teachers did not have a good opinion of him and refused to recommend him for a university position. So, he got a job in a patent office in Switzerland. While he was working there, he wrote the papers that first made him famous as a great scientist. Einstein had two severely disabled children with his first wife, Mileva. His daughter (whose name we do not know) was born about a year before their marriage in January 1902. She was looked after by her Serbian grandparents until she died at the age of two. It is generally believed that she died from scarlet fever, but there are those who believe that she may have suffered from a disorder known as Down Syndrome. But there is not enough evidence to know for sure. In fact, no one even knew that she had existed until Einstein's granddaughter found 54 love letters that Einstein and Mileva had written to each other between 1897 and 1903. She found these letters inside a shoe box in their attic in California. Einstein and Mileva's son, Eduard, was diagnosed with schizophrenia. He spent decades in hospitals and died in Zurich in 1965. Just before the start of World War I, Einstein moved back to Germany and became director of a school there. But in 1933, following death threats from the Nazis, he moved to the United States, where he died on 18th April 1955.
Einstein had such difficulty with language that those around him thought he would never learn how to speak.
e
id_416
ALBERT EINSTEIN. Albert Einstein is perhaps the best-known scientist of the 20th century. He received the Nobel Prize in Physics in 1921 and his theories of special and general relativity are of great importance to many branches of physics and astronomy. He is well known for his theories about light, matter, gravity, space and time. His most famous idea is that energy and mass are different forms of the same thing. Einstein was born in Wurttemberg, Germany, on 14th March 1879. His family was Jewish, but he was not very religious in his youth, although he became very interested in Judaism in later life. It is well documented that Einstein did not begin speaking until after the age of three. In fact, he found speaking so difficult that his family were worried that he would never start to speak. When Einstein was four years old, his father gave him a magnetic compass. It was this compass that inspired him to explore the world of science. He wanted to understand why the needle always pointed north whichever way he turned the compass. It looked as if the needle was moving itself. But the needle was inside a closed case, so no other force (such as the wind) could have been moving it. And this is how Einstein became interested in studying science and mathematics. In fact, he was so clever that at the age of 12 he taught himself Euclidean geometry. At fifteen, he went to school in Munich, which he found very boring. He finished secondary school in Aarau, Switzerland, and entered the Swiss Federal Institute of Technology in Zurich, from which he graduated in 1900. But Einstein did not like the teaching there either. He often missed classes and used the time to study physics on his own or to play the violin instead. However, he was able to pass his examinations by studying the notes of a classmate. His teachers did not have a good opinion of him and refused to recommend him for a university position. So, he got a job in a patent office in Switzerland. While he was working there, he wrote the papers that first made him famous as a great scientist. Einstein had two severely disabled children with his first wife, Mileva. His daughter (whose name we do not know) was born about a year before their marriage in January 1902. She was looked after by her Serbian grandparents until she died at the age of two. It is generally believed that she died from scarlet fever, but there are those who believe that she may have suffered from a disorder known as Down Syndrome. But there is not enough evidence to know for sure. In fact, no one even knew that she had existed until Einstein's granddaughter found 54 love letters that Einstein and Mileva had written to each other between 1897 and 1903. She found these letters inside a shoe box in their attic in California. Einstein and Mileva's son, Eduard, was diagnosed with schizophrenia. He spent decades in hospitals and died in Zurich in 1965. Just before the start of World War I, Einstein moved back to Germany and became director of a school there. But in 1933, following death threats from the Nazis, he moved to the United States, where he died on 18th April 1955.
Einstein taught himself how to play the violin.
n
id_417
ALBERT EINSTEIN. Albert Einstein is perhaps the best-known scientist of the 20th century. He received the Nobel Prize in Physics in 1921 and his theories of special and general relativity are of great importance to many branches of physics and astronomy. He is well known for his theories about light, matter, gravity, space and time. His most famous idea is that energy and mass are different forms of the same thing. Einstein was born in Wurttemberg, Germany, on 14th March 1879. His family was Jewish, but he was not very religious in his youth, although he became very interested in Judaism in later life. It is well documented that Einstein did not begin speaking until after the age of three. In fact, he found speaking so difficult that his family were worried that he would never start to speak. When Einstein was four years old, his father gave him a magnetic compass. It was this compass that inspired him to explore the world of science. He wanted to understand why the needle always pointed north whichever way he turned the compass. It looked as if the needle was moving itself. But the needle was inside a closed case, so no other force (such as the wind) could have been moving it. And this is how Einstein became interested in studying science and mathematics. In fact, he was so clever that at the age of 12 he taught himself Euclidean geometry. At fifteen, he went to school in Munich, which he found very boring. He finished secondary school in Aarau, Switzerland, and entered the Swiss Federal Institute of Technology in Zurich, from which he graduated in 1900. But Einstein did not like the teaching there either. He often missed classes and used the time to study physics on his own or to play the violin instead. However, he was able to pass his examinations by studying the notes of a classmate. His teachers did not have a good opinion of him and refused to recommend him for a university position. So, he got a job in a patent office in Switzerland. While he was working there, he wrote the papers that first made him famous as a great scientist. Einstein had two severely disabled children with his first wife, Mileva. His daughter (whose name we do not know) was born about a year before their marriage in January 1902. She was looked after by her Serbian grandparents until she died at the age of two. It is generally believed that she died from scarlet fever, but there are those who believe that she may have suffered from a disorder known as Down Syndrome. But there is not enough evidence to know for sure. In fact, no one even knew that she had existed until Einstein's granddaughter found 54 love letters that Einstein and Mileva had written to each other between 1897 and 1903. She found these letters inside a shoe box in their attic in California. Einstein and Mileva's son, Eduard, was diagnosed with schizophrenia. He spent decades in hospitals and died in Zurich in 1965. Just before the start of World War I, Einstein moved back to Germany and became director of a school there. But in 1933, following death threats from the Nazis, he moved to the United States, where he died on 18th April 1955.
In 1933 Einstein moved to the United States where he became an American citizen.
n
id_418
ALBERT EINSTEIN. Albert Einstein is perhaps the best-known scientist of the 20th century. He received the Nobel Prize in Physics in 1921 and his theories of special and general relativity are of great importance to many branches of physics and astronomy. He is well known for his theories about light, matter, gravity, space and time. His most famous idea is that energy and mass are different forms of the same thing. Einstein was born in Wurttemberg, Germany, on 14th March 1879. His family was Jewish, but he was not very religious in his youth, although he became very interested in Judaism in later life. It is well documented that Einstein did not begin speaking until after the age of three. In fact, he found speaking so difficult that his family were worried that he would never start to speak. When Einstein was four years old, his father gave him a magnetic compass. It was this compass that inspired him to explore the world of science. He wanted to understand why the needle always pointed north whichever way he turned the compass. It looked as if the needle was moving itself. But the needle was inside a closed case, so no other force (such as the wind) could have been moving it. And this is how Einstein became interested in studying science and mathematics. In fact, he was so clever that at the age of 12 he taught himself Euclidean geometry. At fifteen, he went to school in Munich, which he found very boring. He finished secondary school in Aarau, Switzerland, and entered the Swiss Federal Institute of Technology in Zurich, from which he graduated in 1900. But Einstein did not like the teaching there either. He often missed classes and used the time to study physics on his own or to play the violin instead. However, he was able to pass his examinations by studying the notes of a classmate. His teachers did not have a good opinion of him and refused to recommend him for a university position. So, he got a job in a patent office in Switzerland. While he was working there, he wrote the papers that first made him famous as a great scientist. Einstein had two severely disabled children with his first wife, Mileva. His daughter (whose name we do not know) was born about a year before their marriage in January 1902. She was looked after by her Serbian grandparents until she died at the age of two. It is generally believed that she died from scarlet fever, but there are those who believe that she may have suffered from a disorder known as Down Syndrome. But there is not enough evidence to know for sure. In fact, no one even knew that she had existed until Einstein's granddaughter found 54 love letters that Einstein and Mileva had written to each other between 1897 and 1903. She found these letters inside a shoe box in their attic in California. Einstein and Mileva's son, Eduard, was diagnosed with schizophrenia. He spent decades in hospitals and died in Zurich in 1965. Just before the start of World War I, Einstein moved back to Germany and became director of a school there. But in 1933, following death threats from the Nazis, he moved to the United States, where he died on 18th April 1955.
The existence of a daughter only became known to the world between 1897 and 1903.
c
id_419
ALBERT EINSTEIN. Albert Einstein is perhaps the best-known scientist of the 20th century. He received the Nobel Prize in Physics in 1921 and his theories of special and general relativity are of great importance to many branches of physics and astronomy. He is well known for his theories about light, matter, gravity, space and time. His most famous idea is that energy and mass are different forms of the same thing. Einstein was born in Wurttemberg, Germany, on 14th March 1879. His family was Jewish, but he was not very religious in his youth, although he became very interested in Judaism in later life. It is well documented that Einstein did not begin speaking until after the age of three. In fact, he found speaking so difficult that his family were worried that he would never start to speak. When Einstein was four years old, his father gave him a magnetic compass. It was this compass that inspired him to explore the world of science. He wanted to understand why the needle always pointed north whichever way he turned the compass. It looked as if the needle was moving itself. But the needle was inside a closed case, so no other force (such as the wind) could have been moving it. And this is how Einstein became interested in studying science and mathematics. In fact, he was so clever that at the age of 12 he taught himself Euclidean geometry. At fifteen, he went to school in Munich, which he found very boring. He finished secondary school in Aarau, Switzerland, and entered the Swiss Federal Institute of Technology in Zurich, from which he graduated in 1900. But Einstein did not like the teaching there either. He often missed classes and used the time to study physics on his own or to play the violin instead. However, he was able to pass his examinations by studying the notes of a classmate. His teachers did not have a good opinion of him and refused to recommend him for a university position. So, he got a job in a patent office in Switzerland. While he was working there, he wrote the papers that first made him famous as a great scientist. Einstein had two severely disabled children with his first wife, Mileva. His daughter (whose name we do not know) was born about a year before their marriage in January 1902. She was looked after by her Serbian grandparents until she died at the age of two. It is generally believed that she died from scarlet fever, but there are those who believe that she may have suffered from a disorder known as Down Syndrome. But there is not enough evidence to know for sure. In fact, no one even knew that she had existed until Einstein's granddaughter found 54 love letters that Einstein and Mileva had written to each other between 1897 and 1903. She found these letters inside a shoe box in their attic in California. Einstein and Mileva's son, Eduard, was diagnosed with schizophrenia. He spent decades in hospitals and died in Zurich in 1965. Just before the start of World War I, Einstein moved back to Germany and became director of a school there. But in 1933, following death threats from the Nazis, he moved to the United States, where he died on 18th April 1955.
His daughter died of schizophrenia when she was two.
n
id_420
ALBERT EINSTEIN. Albert Einstein is perhaps the best-known scientist of the 20th century. He received the Nobel Prize in Physics in 1921 and his theories of special and general relativity are of great importance to many branches of physics and astronomy. He is well known for his theories about light, matter, gravity, space and time. His most famous idea is that energy and mass are different forms of the same thing. Einstein was born in Wurttemberg, Germany, on 14th March 1879. His family was Jewish, but he was not very religious in his youth, although he became very interested in Judaism in later life. It is well documented that Einstein did not begin speaking until after the age of three. In fact, he found speaking so difficult that his family were worried that he would never start to speak. When Einstein was four years old, his father gave him a magnetic compass. It was this compass that inspired him to explore the world of science. He wanted to understand why the needle always pointed north whichever way he turned the compass. It looked as if the needle was moving itself. But the needle was inside a closed case, so no other force (such as the wind) could have been moving it. And this is how Einstein became interested in studying science and mathematics. In fact, he was so clever that at the age of 12 he taught himself Euclidean geometry. At fifteen, he went to school in Munich, which he found very boring. He finished secondary school in Aarau, Switzerland, and entered the Swiss Federal Institute of Technology in Zurich, from which he graduated in 1900. But Einstein did not like the teaching there either. He often missed classes and used the time to study physics on his own or to play the violin instead. However, he was able to pass his examinations by studying the notes of a classmate. His teachers did not have a good opinion of him and refused to recommend him for a university position. So, he got a job in a patent office in Switzerland. While he was working there, he wrote the papers that first made him famous as a great scientist. Einstein had two severely disabled children with his first wife, Mileva. His daughter (whose name we do not know) was born about a year before their marriage in January 1902. She was looked after by her Serbian grandparents until she died at the age of two. It is generally believed that she died from scarlet fever, but there are those who believe that she may have suffered from a disorder known as Down Syndrome. But there is not enough evidence to know for sure. In fact, no one even knew that she had existed until Einstein's granddaughter found 54 love letters that Einstein and Mileva had written to each other between 1897 and 1903. She found these letters inside a shoe box in their attic in California. Einstein and Mileva's son, Eduard, was diagnosed with schizophrenia. He spent decades in hospitals and died in Zurich in 1965. Just before the start of World War I, Einstein moved back to Germany and became director of a school there. But in 1933, following death threats from the Nazis, he moved to the United States, where he died on 18th April 1955.
Einstein enjoyed the teaching methods in Switzerland.
c
id_421
ALBERT EINSTEIN. Albert Einstein is perhaps the best-known scientist of the 20th century. He received the Nobel Prize in Physics in 1921 and his theories of special and general relativity are of great importance to many branches of physics and astronomy. He is well known for his theories about light, matter, gravity, space and time. His most famous idea is that energy and mass are different forms of the same thing. Einstein was born in Wurttemberg, Germany, on 14th March 1879. His family was Jewish, but he was not very religious in his youth, although he became very interested in Judaism in later life. It is well documented that Einstein did not begin speaking until after the age of three. In fact, he found speaking so difficult that his family were worried that he would never start to speak. When Einstein was four years old, his father gave him a magnetic compass. It was this compass that inspired him to explore the world of science. He wanted to understand why the needle always pointed north whichever way he turned the compass. It looked as if the needle was moving itself. But the needle was inside a closed case, so no other force (such as the wind) could have been moving it. And this is how Einstein became interested in studying science and mathematics. In fact, he was so clever that at the age of 12 he taught himself Euclidean geometry. At fifteen, he went to school in Munich, which he found very boring. He finished secondary school in Aarau, Switzerland, and entered the Swiss Federal Institute of Technology in Zurich, from which he graduated in 1900. But Einstein did not like the teaching there either. He often missed classes and used the time to study physics on his own or to play the violin instead. However, he was able to pass his examinations by studying the notes of a classmate. His teachers did not have a good opinion of him and refused to recommend him for a university position. So, he got a job in a patent office in Switzerland. While he was working there, he wrote the papers that first made him famous as a great scientist. Einstein had two severely disabled children with his first wife, Mileva. His daughter (whose name we do not know) was born about a year before their marriage in January 1902. She was looked after by her Serbian grandparents until she died at the age of two. It is generally believed that she died from scarlet fever, but there are those who believe that she may have suffered from a disorder known as Down Syndrome. But there is not enough evidence to know for sure. In fact, no one even knew that she had existed until Einstein's granddaughter found 54 love letters that Einstein and Mileva had written to each other between 1897 and 1903. She found these letters inside a shoe box in their attic in California. Einstein and Mileva's son, Eduard, was diagnosed with schizophrenia. He spent decades in hospitals and died in Zurich in 1965. Just before the start of World War I, Einstein moved back to Germany and became director of a school there. But in 1933, following death threats from the Nazis, he moved to the United States, where he died on 18th April 1955.
It seemed to Einstein that nothing could be pushing the needle of the compass around except the wind.
c
id_422
ALBERT EINSTEIN. Albert Einstein is perhaps the best-known scientist of the 20th century. He received the Nobel Prize in Physics in 1921 and his theories of special and general relativity are of great importance to many branches of physics and astronomy. He is well known for his theories about light, matter, gravity, space and time. His most famous idea is that energy and mass are different forms of the same thing. Einstein was born in Wurttemberg, Germany, on 14th March 1879. His family was Jewish, but he was not very religious in his youth, although he became very interested in Judaism in later life. It is well documented that Einstein did not begin speaking until after the age of three. In fact, he found speaking so difficult that his family were worried that he would never start to speak. When Einstein was four years old, his father gave him a magnetic compass. It was this compass that inspired him to explore the world of science. He wanted to understand why the needle always pointed north whichever way he turned the compass. It looked as if the needle was moving itself. But the needle was inside a closed case, so no other force (such as the wind) could have been moving it. And this is how Einstein became interested in studying science and mathematics. In fact, he was so clever that at the age of 12 he taught himself Euclidean geometry. At fifteen, he went to school in Munich, which he found very boring. He finished secondary school in Aarau, Switzerland, and entered the Swiss Federal Institute of Technology in Zurich, from which he graduated in 1900. But Einstein did not like the teaching there either. He often missed classes and used the time to study physics on his own or to play the violin instead. However, he was able to pass his examinations by studying the notes of a classmate. His teachers did not have a good opinion of him and refused to recommend him for a university position. So, he got a job in a patent office in Switzerland. While he was working there, he wrote the papers that first made him famous as a great scientist. Einstein had two severely disabled children with his first wife, Mileva. His daughter (whose name we do not know) was born about a year before their marriage in January 1902. She was looked after by her Serbian grandparents until she died at the age of two. It is generally believed that she died from scarlet fever, but there are those who believe that she may have suffered from a disorder known as Down Syndrome. But there is not enough evidence to know for sure. In fact, no one even knew that she had existed until Einstein's granddaughter found 54 love letters that Einstein and Mileva had written to each other between 1897 and 1903. She found these letters inside a shoe box in their attic in California. Einstein and Mileva's son, Eduard, was diagnosed with schizophrenia. He spent decades in hospitals and died in Zurich in 1965. Just before the start of World War I, Einstein moved back to Germany and became director of a school there. But in 1933, following death threats from the Nazis, he moved to the United States, where he died on 18th April 1955.
The general theory of relativity is a very important theory in modern physics.
e
id_423
ALTERNATIVE MEDICINE IN AUSTRALIA. The first students to study alternative medicine at university level in Australia began their four-year, full-time course at the University of Technology, Sydney, in early 1994. Their course covered, among other therapies, acupuncture. The theory they learnt is based on the traditional Chinese explanation of this ancient healing art: that it can regulate the flow of 'Qi' or energy through pathways in the body. This course reflects how far some alternative therapies have come in their struggle for acceptance by the medical establishment. Australia has been unusual in the Western world in having a very conservative attitude to natural or alternative therapies, according to Dr Paul Laver, a lecturer in Public Health at the University of Sydney. 'We've had a tradition of doctors being fairly powerful and I guess they are pretty loath to allow any pretenders to their position to come into it. ' In many other industrialised countries, orthodox and alternative medicine have worked 'hand in glove' for years. In Europe, only orthodox doctors can prescribe herbal medicine. In Germany, plant remedies account for 10% of the national turnover of pharmaceuticals. Americans made more visits to alternative therapists than to orthodox doctors in 1990, and each year they spend about $US 12 billion on therapies that have not been scientifically tested. Disenchantment with orthodox medicine has seen the popularity of alternative therapies in Australia climb steadily during the past 20 years. In a 1983 national health survey, 1.9% of people said they had contacted a chiropractor, naturopath, osteopath, acupuncturist or herbalist in the two weeks prior to the survey. By 1990, this figure had risen to 2.6% of the population. The 550,000 consultations with alternative therapists reported in the 1990 survey represented about an eighth of the total number of consultations with medically qualified personnel covered by the survey, according to Dr Laver and colleagues writing in the Australian Journal of Public Health in 1993. 'A better educated and less accepting public has become disillusioned with the experts in general, and increasingly sceptical about science and empirically based knowledge, ' they said. 'The high standing of professionals, including doctors, has been eroded as a consequence. ' Rather than resisting or criticising this trend, increasing numbers of Australian doctors, particularly younger ones, are forming group practices with alternative therapists or taking courses themselves, particularly in acupuncture and herbalism. Part of the incentive was financial, Dr Laver said. 'The bottom line is that most general practitioners are business people. If they see potential clientele going elsewhere, they might want to be able to offer a similar service. ' In 1993, Dr Laver and his colleagues published a survey of 289 Sydney people who attended eight alternative therapists' practices in Sydney. These practices offered a wide range of alternative therapies from 25 therapists. Those surveyed had experienced chronic illnesses, for which orthodox medicine had been able to provide little relief. They commented that they liked the holistic approach of their alternative therapists and the friendly, concerned and detailed attention they had received. The cold, impersonal manner of orthodox doctors featured in the survey. 
An increasing exodus from their clinics, coupled with this and a number of other relevant surveys carried out in Australia, all pointing to orthodox doctors' inadequacies, has led mainstream doctors themselves to begin to admit they could learn from the personal style of alternative therapists. Dr Patrick Store, President of the Royal College of General Practitioners, concurs that orthodox doctors could learn a lot about bedside manner and advising patients on preventative health from alternative therapists. According to the Australian Journal of Public Health, 18% of patients visiting alternative therapists do so because they suffer from musculo-skeletal complaints; 12% suffer from digestive problems, which is only 1% more than those suffering from emotional problems. Those suffering from respiratory complaints represent 7% of their patients, and candida sufferers represent an equal percentage. Headache sufferers and those complaining of general ill health represent 6% and 5% of patients respectively, and a further 4% see therapists for general health maintenance. The survey suggested that complementary medicine is probably a better term than alternative medicine. Alternative medicine appears to be an adjunct, sought in times of disenchantment when conventional medicine seems not to offer the answer.
Australians have been turning to alternative therapies in increasing numbers over the past 20 years.
e
id_424
ALTERNATIVE MEDICINE IN AUSTRALIA. The first students to study alternative medicine at university level in Australia began their four-year, full-time course at the University of Technology, Sydney, in early 1994. Their course covered, among other therapies, acupuncture. The theory they learnt is based on the traditional Chinese explanation of this ancient healing art: that it can regulate the flow of 'Qi' or energy through pathways in the body. This course reflects how far some alternative therapies have come in their struggle for acceptance by the medical establishment. Australia has been unusual in the Western world in having a very conservative attitude to natural or alternative therapies, according to Dr Paul Laver, a lecturer in Public Health at the University of Sydney. 'We've had a tradition of doctors being fairly powerful and I guess they are pretty loath to allow any pretenders to their position to come into it. ' In many other industrialised countries, orthodox and alternative medicine have worked 'hand in glove' for years. In Europe, only orthodox doctors can prescribe herbal medicine. In Germany, plant remedies account for 10% of the national turnover of pharmaceuticals. Americans made more visits to alternative therapists than to orthodox doctors in 1990, and each year they spend about $US 12 billion on therapies that have not been scientifically tested. Disenchantment with orthodox medicine has seen the popularity of alternative therapies in Australia climb steadily during the past 20 years. In a 1983 national health survey, 1.9% of people said they had contacted a chiropractor, naturopath, osteopath, acupuncturist or herbalist in the two weeks prior to the survey. By 1990, this figure had risen to 2.6% of the population. The 550,000 consultations with alternative therapists reported in the 1990 survey represented about an eighth of the total number of consultations with medically qualified personnel covered by the survey, according to Dr Laver and colleagues writing in the Australian Journal of Public Health in 1993. 'A better educated and less accepting public has become disillusioned with the experts in general, and increasingly sceptical about science and empirically based knowledge, ' they said. 'The high standing of professionals, including doctors, has been eroded as a consequence. ' Rather than resisting or criticising this trend, increasing numbers of Australian doctors, particularly younger ones, are forming group practices with alternative therapists or taking courses themselves, particularly in acupuncture and herbalism. Part of the incentive was financial, Dr Laver said. 'The bottom line is that most general practitioners are business people. If they see potential clientele going elsewhere, they might want to be able to offer a similar service. ' In 1993, Dr Laver and his colleagues published a survey of 289 Sydney people who attended eight alternative therapists' practices in Sydney. These practices offered a wide range of alternative therapies from 25 therapists. Those surveyed had experienced chronic illnesses, for which orthodox medicine had been able to provide little relief. They commented that they liked the holistic approach of their alternative therapists and the friendly, concerned and detailed attention they had received. The cold, impersonal manner of orthodox doctors featured in the survey. 
An increasing exodus from their clinics, coupled with this and a number of other relevant surveys carried out in Australia, all pointing to orthodox doctors' inadequacies, has led mainstream doctors themselves to begin to admit they could learn from the personal style of alternative therapists. Dr Patrick Store, President of the Royal College of General Practitioners, concurs that orthodox doctors could learn a lot about bedside manner and advising patients on preventative health from alternative therapists. According to the Australian Journal of Public Health, 18% of patients visiting alternative therapists do so because they suffer from musculo-skeletal complaints; 12% suffer from digestive problems, which is only 1% more than those suffering from emotional problems. Those suffering from respiratory complaints represent 7% of their patients, and candida sufferers represent an equal percentage. Headache sufferers and those complaining of general ill health represent 6% and 5% of patients respectively, and a further 4% see therapists for general health maintenance. The survey suggested that complementary medicine is probably a better term than alternative medicine. Alternative medicine appears to be an adjunct, sought in times of disenchantment when conventional medicine seems not to offer the answer.
Between 1983 and 1990 the number of patients visiting alternative therapists rose to include a further 8% of the population.
c
id_425
ALTERNATIVE MEDICINE IN AUSTRALIA. The first students to study alternative medicine at university level in Australia began their four-year, full-time course at the University of Technology, Sydney, in early 1994. Their course covered, among other therapies, acupuncture. The theory they learnt is based on the traditional Chinese explanation of this ancient healing art: that it can regulate the flow of 'Qi' or energy through pathways in the body. This course reflects how far some alternative therapies have come in their struggle for acceptance by the medical establishment. Australia has been unusual in the Western world in having a very conservative attitude to natural or alternative therapies, according to Dr Paul Laver, a lecturer in Public Health at the University of Sydney. 'We've had a tradition of doctors being fairly powerful and I guess they are pretty loath to allow any pretenders to their position to come into it. ' In many other industrialised countries, orthodox and alternative medicine have worked 'hand in glove' for years. In Europe, only orthodox doctors can prescribe herbal medicine. In Germany, plant remedies account for 10% of the national turnover of pharmaceuticals. Americans made more visits to alternative therapists than to orthodox doctors in 1990, and each year they spend about $US 12 billion on therapies that have not been scientifically tested. Disenchantment with orthodox medicine has seen the popularity of alternative therapies in Australia climb steadily during the past 20 years. In a 1983 national health survey, 1.9% of people said they had contacted a chiropractor, naturopath, osteopath, acupuncturist or herbalist in the two weeks prior to the survey. By 1990, this figure had risen to 2.6% of the population. The 550,000 consultations with alternative therapists reported in the 1990 survey represented about an eighth of the total number of consultations with medically qualified personnel covered by the survey, according to Dr Laver and colleagues writing in the Australian Journal of Public Health in 1993. 'A better educated and less accepting public has become disillusioned with the experts in general, and increasingly sceptical about science and empirically based knowledge, ' they said. 'The high standing of professionals, including doctors, has been eroded as a consequence. ' Rather than resisting or criticising this trend, increasing numbers of Australian doctors, particularly younger ones, are forming group practices with alternative therapists or taking courses themselves, particularly in acupuncture and herbalism. Part of the incentive was financial, Dr Laver said. 'The bottom line is that most general practitioners are business people. If they see potential clientele going elsewhere, they might want to be able to offer a similar service. ' In 1993, Dr Laver and his colleagues published a survey of 289 Sydney people who attended eight alternative therapists' practices in Sydney. These practices offered a wide range of alternative therapies from 25 therapists. Those surveyed had experienced chronic illnesses, for which orthodox medicine had been able to provide little relief. They commented that they liked the holistic approach of their alternative therapists and the friendly, concerned and detailed attention they had received. The cold, impersonal manner of orthodox doctors featured in the survey. 
An increasing exodus from their clinics, coupled with this and a number of other relevant surveys carried out in Australia, all pointing to orthodox doctors' inadequacies, has led mainstream doctors themselves to begin to admit they could learn from the personal style of alternative therapists. Dr Patrick Store, President of the Royal College of General Practitioners, concurs that orthodox doctors could learn a lot about bedside manner and advising patients on preventative health from alternative therapists. According to the Australian Journal of Public Health, 18% of patients visiting alternative therapists do so because they suffer from musculo-skeletal complaints; 12% suffer from digestive problems, which is only 1% more than those suffering from emotional problems. Those suffering from respiratory complaints represent 7% of their patients, and candida sufferers represent an equal percentage. Headache sufferers and those complaining of general ill health represent 6% and 5% of patients respectively, and a further 4% see therapists for general health maintenance. The survey suggested that complementary medicine is probably a better term than alternative medicine. Alternative medicine appears to be an adjunct, sought in times of disenchantment when conventional medicine seems not to offer the answer.
The 1990 survey related to 550,000 consultations with alternative therapists.
e
id_426
ALTERNATIVE MEDICINE IN AUSTRALIA. The first students to study alternative medicine at university level in Australia began their four-year, full-time course at the University of Technology, Sydney, in early 1994. Their course covered, among other therapies, acupuncture. The theory they learnt is based on the traditional Chinese explanation of this ancient healing art: that it can regulate the flow of 'Qi' or energy through pathways in the body. This course reflects how far some alternative therapies have come in their struggle for acceptance by the medical establishment. Australia has been unusual in the Western world in having a very conservative attitude to natural or alternative therapies, according to Dr Paul Laver, a lecturer in Public Health at the University of Sydney. 'We've had a tradition of doctors being fairly powerful and I guess they are pretty loath to allow any pretenders to their position to come into it. ' In many other industrialised countries, orthodox and alternative medicine have worked 'hand in glove' for years. In Europe, only orthodox doctors can prescribe herbal medicine. In Germany, plant remedies account for 10% of the national turnover of pharmaceuticals. Americans made more visits to alternative therapists than to orthodox doctors in 1990, and each year they spend about $US 12 billion on therapies that have not been scientifically tested. Disenchantment with orthodox medicine has seen the popularity of alternative therapies in Australia climb steadily during the past 20 years. In a 1983 national health survey, 1.9% of people said they had contacted a chiropractor, naturopath, osteopath, acupuncturist or herbalist in the two weeks prior to the survey. By 1990, this figure had risen to 2.6% of the population. The 550,000 consultations with alternative therapists reported in the 1990 survey represented about an eighth of the total number of consultations with medically qualified personnel covered by the survey, according to Dr Laver and colleagues writing in the Australian Journal of Public Health in 1993. 'A better educated and less accepting public has become disillusioned with the experts in general, and increasingly sceptical about science and empirically based knowledge, ' they said. 'The high standing of professionals, including doctors, has been eroded as a consequence. ' Rather than resisting or criticising this trend, increasing numbers of Australian doctors, particularly younger ones, are forming group practices with alternative therapists or taking courses themselves, particularly in acupuncture and herbalism. Part of the incentive was financial, Dr Laver said. 'The bottom line is that most general practitioners are business people. If they see potential clientele going elsewhere, they might want to be able to offer a similar service. ' In 1993, Dr Laver and his colleagues published a survey of 289 Sydney people who attended eight alternative therapists' practices in Sydney. These practices offered a wide range of alternative therapies from 25 therapists. Those surveyed had experienced chronic illnesses, for which orthodox medicine had been able to provide little relief. They commented that they liked the holistic approach of their alternative therapists and the friendly, concerned and detailed attention they had received. The cold, impersonal manner of orthodox doctors featured in the survey. 
An increasing exodus from their clinics, coupled with this and a number of other relevant surveys carried out in Australia, all pointing to orthodox doctors' inadequacies, has led mainstream doctors themselves to begin to admit they could learn from the personal style of alternative therapists. Dr Patrick Store, President of the Royal College of General Practitioners, concurs that orthodox doctors could learn a lot about bedside manner and advising patients on preventative health from alternative therapists. According to the Australian Journal of Public Health, 18% of patients visiting alternative therapists do so because they suffer from musculo-skeletal complaints; 12% suffer from digestive problems, which is only 1% more than those suffering from emotional problems. Those suffering from respiratory complaints represent 7% of their patients, and candida sufferers represent an equal percentage. Headache sufferers and those complaining of general ill health represent 6% and 5% of patients respectively, and a further 4% see therapists for general health maintenance. The survey suggested that complementary medicine is probably a better term than alternative medicine. Alternative medicine appears to be an adjunct, sought in times of disenchantment when conventional medicine seems not to offer the answer.
In the past, Australians had a higher opinion of doctors than they do today.
e
id_427
ALTERNATIVE MEDICINE IN AUSTRALIA. The first students to study alternative medicine at university level in Australia began their four-year, full-time course at the University of Technology, Sydney, in early 1994. Their course covered, among other therapies, acupuncture. The theory they learnt is based on the traditional Chinese explanation of this ancient healing art: that it can regulate the flow of 'Qi' or energy through pathways in the body. This course reflects how far some alternative therapies have come in their struggle for acceptance by the medical establishment. Australia has been unusual in the Western world in having a very conservative attitude to natural or alternative therapies, according to Dr Paul Laver, a lecturer in Public Health at the University of Sydney. 'We've had a tradition of doctors being fairly powerful and I guess they are pretty loath to allow any pretenders to their position to come into it. ' In many other industrialised countries, orthodox and alternative medicine have worked 'hand in glove' for years. In Europe, only orthodox doctors can prescribe herbal medicine. In Germany, plant remedies account for 10% of the national turnover of pharmaceuticals. Americans made more visits to alternative therapists than to orthodox doctors in 1990, and each year they spend about $US 12 billion on therapies that have not been scientifically tested. Disenchantment with orthodox medicine has seen the popularity of alternative therapies in Australia climb steadily during the past 20 years. In a 1983 national health survey, 1.9% of people said they had contacted a chiropractor, naturopath, osteopath, acupuncturist or herbalist in the two weeks prior to the survey. By 1990, this figure had risen to 2.6% of the population. The 550,000 consultations with alternative therapists reported in the 1990 survey represented about an eighth of the total number of consultations with medically qualified personnel covered by the survey, according to Dr Laver and colleagues writing in the Australian Journal of Public Health in 1993. 'A better educated and less accepting public has become disillusioned with the experts in general, and increasingly sceptical about science and empirically based knowledge, ' they said. 'The high standing of professionals, including doctors, has been eroded as a consequence. ' Rather than resisting or criticising this trend, increasing numbers of Australian doctors, particularly younger ones, are forming group practices with alternative therapists or taking courses themselves, particularly in acupuncture and herbalism. Part of the incentive was financial, Dr Laver said. 'The bottom line is that most general practitioners are business people. If they see potential clientele going elsewhere, they might want to be able to offer a similar service. ' In 1993, Dr Laver and his colleagues published a survey of 289 Sydney people who attended eight alternative therapists' practices in Sydney. These practices offered a wide range of alternative therapies from 25 therapists. Those surveyed had experienced chronic illnesses, for which orthodox medicine had been able to provide little relief. They commented that they liked the holistic approach of their alternative therapists and the friendly, concerned and detailed attention they had received. The cold, impersonal manner of orthodox doctors featured in the survey. 
An increasing exodus from their clinics, coupled with this and a number of other relevant surveys carried out in Australia, all pointing to orthodox doctors' inadequacies, has led mainstream doctors themselves to begin to admit they could learn from the personal style of alternative therapists. Dr Patrick Store, President of the Royal College of General Practitioners, concurs that orthodox doctors could learn a lot about bedside manner and advising patients on preventative health from alternative therapists. According to the Australian Journal of Public Health, 18% of patients visiting alternative therapists do so because they suffer from musculo-skeletal complaints; 12% suffer from digestive problems, which is only 1% more than those suffering from emotional problems. Those suffering from respiratory complaints represent 7% of their patients, and candida sufferers represent an equal percentage. Headache sufferers and those complaining of general ill health represent 6% and 5% of patients respectively, and a further 4% see therapists for general health maintenance. The survey suggested that complementary medicine is probably a better term than alternative medicine. Alternative medicine appears to be an adjunct, sought in times of disenchantment when conventional medicine seems not to offer the answer.
Some Australian doctors are retraining in alternative therapies.
e
id_428
ALTERNATIVE MEDICINE IN AUSTRALIA. The first students to study alternative medicine at university level in Australia began their four-year, full-time course at the University of Technology, Sydney, in early 1994. Their course covered, among other therapies, acupuncture. The theory they learnt is based on the traditional Chinese explanation of this ancient healing art: that it can regulate the flow of 'Qi' or energy through pathways in the body. This course reflects how far some alternative therapies have come in their struggle for acceptance by the medical establishment. Australia has been unusual in the Western world in having a very conservative attitude to natural or alternative therapies, according to Dr Paul Laver, a lecturer in Public Health at the University of Sydney. 'We've had a tradition of doctors being fairly powerful and I guess they are pretty loath to allow any pretenders to their position to come into it. ' In many other industrialised countries, orthodox and alternative medicine have worked 'hand in glove' for years. In Europe, only orthodox doctors can prescribe herbal medicine. In Germany, plant remedies account for 10% of the national turnover of pharmaceuticals. Americans made more visits to alternative therapists than to orthodox doctors in 1990, and each year they spend about $US 12 billion on therapies that have not been scientifically tested. Disenchantment with orthodox medicine has seen the popularity of alternative therapies in Australia climb steadily during the past 20 years. In a 1983 national health survey, 1.9% of people said they had contacted a chiropractor, naturopath, osteopath, acupuncturist or herbalist in the two weeks prior to the survey. By 1990, this figure had risen to 2.6% of the population. The 550,000 consultations with alternative therapists reported in the 1990 survey represented about an eighth of the total number of consultations with medically qualified personnel covered by the survey, according to Dr Laver and colleagues writing in the Australian Journal of Public Health in 1993. 'A better educated and less accepting public has become disillusioned with the experts in general, and increasingly sceptical about science and empirically based knowledge, ' they said. 'The high standing of professionals, including doctors, has been eroded as a consequence. ' Rather than resisting or criticising this trend, increasing numbers of Australian doctors, particularly younger ones, are forming group practices with alternative therapists or taking courses themselves, particularly in acupuncture and herbalism. Part of the incentive was financial, Dr Laver said. 'The bottom line is that most general practitioners are business people. If they see potential clientele going elsewhere, they might want to be able to offer a similar service. ' In 1993, Dr Laver and his colleagues published a survey of 289 Sydney people who attended eight alternative therapists' practices in Sydney. These practices offered a wide range of alternative therapies from 25 therapists. Those surveyed had experienced chronic illnesses, for which orthodox medicine had been able to provide little relief. They commented that they liked the holistic approach of their alternative therapists and the friendly, concerned and detailed attention they had received. The cold, impersonal manner of orthodox doctors featured in the survey. 
An increasing exodus from their clinics, coupled with this and a number of other relevant surveys carried out in Australia, all pointing to orthodox doctors' inadequacies, has led mainstream doctors themselves to begin to admit they could learn from the personal style of alternative therapists. Dr Patrick Store, President of the Royal College of General Practitioners, concurs that orthodox doctors could learn a lot about bedside manner and advising patients on preventative health from alternative therapists. According to the Australian Journal of Public Health, 18% of patients visiting alternative therapists do so because they suffer from musculo-skeletal complaints; 12% suffer from digestive problems, which is only 1% more than those suffering from emotional problems. Those suffering from respiratory complaints represent 7% of their patients, and candida sufferers represent an equal percentage. Headache sufferers and those complaining of general ill health represent 6% and 5% of patients respectively, and a further 4% see therapists for general health maintenance. The survey suggested that complementary medicine is probably a better term than alternative medicine. Alternative medicine appears to be an adjunct, sought in times of disenchantment when conventional medicine seems not to offer the answer.
Alternative therapists earn higher salaries than doctors.
n
id_429
ALTERNATIVE MEDICINE IN AUSTRALIA. The first students to study alternative medicine at university level in Australia began their four-year, full-time course at the University of Technology, Sydney, in early 1994. Their course covered, among other therapies, acupuncture. The theory they learnt is based on the traditional Chinese explanation of this ancient healing art: that it can regulate the flow of 'Qi' or energy through pathways in the body. This course reflects how far some alternative therapies have come in their struggle for acceptance by the medical establishment. Australia has been unusual in the Western world in having a very conservative attitude to natural or alternative therapies, according to Dr Paul Laver, a lecturer in Public Health at the University of Sydney. 'We've had a tradition of doctors being fairly powerful and I guess they are pretty loath to allow any pretenders to their position to come into it. ' In many other industrialised countries, orthodox and alternative medicine have worked 'hand in glove' for years. In Europe, only orthodox doctors can prescribe herbal medicine. In Germany, plant remedies account for 10% of the national turnover of pharmaceuticals. Americans made more visits to alternative therapists than to orthodox doctors in 1990, and each year they spend about $US 12 billion on therapies that have not been scientifically tested. Disenchantment with orthodox medicine has seen the popularity of alternative therapies in Australia climb steadily during the past 20 years. In a 1983 national health survey, 1.9% of people said they had contacted a chiropractor, naturopath, osteopath, acupuncturist or herbalist in the two weeks prior to the survey. By 1990, this figure had risen to 2.6% of the population. The 550,000 consultations with alternative therapists reported in the 1990 survey represented about an eighth of the total number of consultations with medically qualified personnel covered by the survey, according to Dr Laver and colleagues writing in the Australian Journal of Public Health in 1993. 'A better educated and less accepting public has become disillusioned with the experts in general, and increasingly sceptical about science and empirically based knowledge, ' they said. 'The high standing of professionals, including doctors, has been eroded as a consequence. ' Rather than resisting or criticising this trend, increasing numbers of Australian doctors, particularly younger ones, are forming group practices with alternative therapists or taking courses themselves, particularly in acupuncture and herbalism. Part of the incentive was financial, Dr Laver said. 'The bottom line is that most general practitioners are business people. If they see potential clientele going elsewhere, they might want to be able to offer a similar service. ' In 1993, Dr Laver and his colleagues published a survey of 289 Sydney people who attended eight alternative therapists' practices in Sydney. These practices offered a wide range of alternative therapies from 25 therapists. Those surveyed had experienced chronic illnesses, for which orthodox medicine had been able to provide little relief. They commented that they liked the holistic approach of their alternative therapists and the friendly, concerned and detailed attention they had received. The cold, impersonal manner of orthodox doctors featured in the survey. 
An increasing exodus from their clinics, coupled with this and a number of other relevant surveys carried out in Australia, all pointing to orthodox doctors' inadequacies, has led mainstream doctors themselves to begin to admit they could learn from the personal style of alternative therapists. Dr Patrick Store, President of the Royal College of General Practitioners, concurs that orthodox doctors could learn a lot about bedside manner and advising patients on preventative health from alternative therapists. According to the Australian Journal of Public Health, 18% of patients visiting alternative therapists do so because they suffer from musculo-skeletal complaints; 12% suffer from digestive problems, which is only 1% more than those suffering from emotional problems. Those suffering from respiratory complaints represent 7% of their patients, and candida sufferers represent an equal percentage. Headache sufferers and those complaining of general ill health represent 6% and 5% of patients respectively, and a further 4% see therapists for general health maintenance. The survey suggested that complementary medicine is probably a better term than alternative medicine. Alternative medicine appears to be an adjunct, sought in times of disenchantment when conventional medicine seems not to offer the answer.
The 1993 Sydney survey involved 289 patients who visited alternative therapists for acupuncture treatment.
c
id_430
ALTERNATIVE MEDICINE IN AUSTRALIA. The first students to study alternative medicine at university level in Australia began their four-year, full-time course at the University of Technology, Sydney, in early 1994. Their course covered, among other therapies, acupuncture. The theory they learnt is based on the traditional Chinese explanation of this ancient healing art: that it can regulate the flow of 'Qi' or energy through pathways in the body. This course reflects how far some alternative therapies have come in their struggle for acceptance by the medical establishment. Australia has been unusual in the Western world in having a very conservative attitude to natural or alternative therapies, according to Dr Paul Laver, a lecturer in Public Health at the University of Sydney. 'We've had a tradition of doctors being fairly powerful and I guess they are pretty loath to allow any pretenders to their position to come into it. ' In many other industrialised countries, orthodox and alternative medicine have worked 'hand in glove' for years. In Europe, only orthodox doctors can prescribe herbal medicine. In Germany, plant remedies account for 10% of the national turnover of pharmaceuticals. Americans made more visits to alternative therapists than to orthodox doctors in 1990, and each year they spend about $US 12 billion on therapies that have not been scientifically tested. Disenchantment with orthodox medicine has seen the popularity of alternative therapies in Australia climb steadily during the past 20 years. In a 1983 national health survey, 1.9% of people said they had contacted a chiropractor, naturopath, osteopath, acupuncturist or herbalist in the two weeks prior to the survey. By 1990, this figure had risen to 2.6% of the population. The 550,000 consultations with alternative therapists reported in the 1990 survey represented about an eighth of the total number of consultations with medically qualified personnel covered by the survey, according to Dr Laver and colleagues writing in the Australian Journal of Public Health in 1993. 'A better educated and less accepting public has become disillusioned with the experts in general, and increasingly sceptical about science and empirically based knowledge, ' they said. 'The high standing of professionals, including doctors, has been eroded as a consequence. ' Rather than resisting or criticising this trend, increasing numbers of Australian doctors, particularly younger ones, are forming group practices with alternative therapists or taking courses themselves, particularly in acupuncture and herbalism. Part of the incentive was financial, Dr Laver said. 'The bottom line is that most general practitioners are business people. If they see potential clientele going elsewhere, they might want to be able to offer a similar service. ' In 1993, Dr Laver and his colleagues published a survey of 289 Sydney people who attended eight alternative therapists' practices in Sydney. These practices offered a wide range of alternative therapies from 25 therapists. Those surveyed had experienced chronic illnesses, for which orthodox medicine had been able to provide little relief. They commented that they liked the holistic approach of their alternative therapists and the friendly, concerned and detailed attention they had received. The cold, impersonal manner of orthodox doctors featured in the survey. 
An increasing exodus from their clinics, coupled with this and a number of other relevant surveys carried out in Australia, all pointing to orthodox doctors' inadequacies, has led mainstream doctors themselves to begin to admit they could learn from the personal style of alternative therapists. Dr Patrick Store, President of the Royal College of General Practitioners, concurs that orthodox doctors could learn a lot about bedside manner and advising patients on preventative health from alternative therapists. According to the Australian Journal of Public Health, 18% of patients visiting alternative therapists do so because they suffer from musculo-skeletal complaints; 12% suffer from digestive problems, which is only 1% more than those suffering from emotional problems. Those suffering from respiratory complaints represent 7% of their patients, and candida sufferers represent an equal percentage. Headache sufferers and those complaining of general ill health represent 6% and 5% of patients respectively, and a further 4% see therapists for general health maintenance. The survey suggested that complementary medicine is probably a better term than alternative medicine. Alternative medicine appears to be an adjunct, sought in times of disenchantment when conventional medicine seems not to offer the answer.
All the patients in the 1993 Sydney survey had long-term medical complaints.
e
id_431
ARE WE MANAGING TO DESTROY SCIENCE? The government in the UK was concerned about the efficiency of research institutions and set up a Research Assessment Exercise (RAE) to consider what was being done in each university. The article which follows is a response to the imposition of the RAE. In the year ahead, the UK government is due to carry out the next Research Assessment Exercise (RAE). The goal of this regular five-yearly check-up of the university sector is easy to understand: to increase productivity within public sector research. But striving for such productivity can lead to unfortunate consequences. In the case of the RAE, one risk attached to this is the creation of an overly controlling management culture that threatens the future of imaginative science. Academic institutions are already preparing for the RAE with some anxiety, understandably so, for the financial consequences of failure are severe. Departments with a current rating of four or five (research is rated on a five-point scale, with five the highest) must maintain their score or face a considerable loss of funding. Meanwhile, those with ratings of two or three are fighting for their survival. The pressures are forcing research management onto the defensive. Common strategies for increasing academic output include grading individual researchers every year according to RAE criteria, pressurising them to publish anything regardless of quality, diverting funds from key and expensive laboratory science into areas of study such as management, and even threatening to close departments. Another strategy being readily adopted is to remove scientists who appear to be less active in research and replace them with new, probably younger, staff. Although such measures may deliver results in the RAE, they are putting unsustainable pressure on academic staff. Particularly insidious is the pressure to publish. Put simply, RAE committees in the laboratory sciences must produce four excellent peer-reviewed publications per member of staff to meet the assessment criteria. Hence this is becoming a minimum requirement for existing members of staff, and a benchmark against which to measure new recruits. But prolific publication does not necessarily add up to good science. Indeed, one young researcher was told in an interview for a lectureship that, 'although your publications are excellent, unfortunately, there are not enough of them. You should not worry so much about the quality of your publications.' In a recent letter to Nature, the publication records of ten senior academics in the area of molecular microbiology were analysed. Each of these academics is now in a very senior position in a university or research institute, with careers spanning a total of 262 years. All have achieved considerable status and respect within the UK and worldwide. However, their early publication records would preclude them from academic posts if the present criteria were applied. Although the quality of their work was clearly outstanding (they initiated novel and perhaps risky projects early in their careers, which have since been recognised as research of international importance), they generally produced few papers over the first ten years after completing their PhDs. Indeed, over this period, they have an average gap of 3.8 years without the publication or production of a cited paper. In one case there was a five-year gap.
Although these enquiries were limited to a specific area of research, it seems that this model of career progression is widespread in all of the chemical and biological sciences. It seems that the atmosphere surrounding the RAE may be stifling talented young researchers or driving them out of science altogether. There urgently needs to be a more considered and careful nurturing of our young scientific talent. A new member of academic staff in the chemical or biological laboratory sciences surely needs a commitment to resources over a five- to ten-year period to establish their research. Senior academics managing this situation might be well advised to demand a long-term view from the government. Unfortunately, management seems to be pulling in the opposite direction. Academics have to deal with more students than ever, and the paperwork associated with the assessment of the quality of teaching is increasing. On top of that, the salary for university lecturers starts at only £32,665 (rising to £58,048). Tenure is rare, and most contracts are offered on a temporary basis. With the mean starting salary for new graduates now close to £36,000, it is surprising that anybody still wants a job in academia. It need not be like this. Dealings with the many senior research managers in the chemical and water industries at the QUESTOR Centre (Queen's University Environmental Science and Technology Research Centre) provided some insight. The overall impression is that the private sector has a much more sensible and enlightened long-term view of research priorities. Why can the universities not develop the same attitude? All organisations need managers, yet these managers will make sure they survive even when those they manage are lost. Research management in UK universities is in danger of evolving into such an overly controlled state that it will allow little time for careful thinking and teaching, and will undermine the development of imaginative young scientists.
The private sector has produced more in the way of quality research than universities.
n
id_432
ARE WE MANAGING TO DESTROY SCIENCE? The government in the UK was concerned about the efficiency of research institutions and set up a Research Assessment Exercise (RAE) to consider what was being done in each university. The article which follows is a response to the imposition of the RAE. In the year ahead, the UK government is due to carry out the next Research Assessment Exercise (RAE). The goal of this regular five-yearly check-up of the university sector is easy to understand: to increase productivity within public sector research. But striving for such productivity can lead to unfortunate consequences. In the case of the RAE, one risk attached to this is the creation of an overly controlling management culture that threatens the future of imaginative science. Academic institutions are already preparing for the RAE with some anxiety, understandably so, for the financial consequences of failure are severe. Departments with a current rating of four or five (research is rated on a five-point scale, with five the highest) must maintain their score or face a considerable loss of funding. Meanwhile, those with ratings of two or three are fighting for their survival. The pressures are forcing research management onto the defensive. Common strategies for increasing academic output include grading individual researchers every year according to RAE criteria, pressurising them to publish anything regardless of quality, diverting funds from key and expensive laboratory science into areas of study such as management, and even threatening to close departments. Another strategy being readily adopted is to remove scientists who appear to be less active in research and replace them with new, probably younger, staff. Although such measures may deliver results in the RAE, they are putting unsustainable pressure on academic staff. Particularly insidious is the pressure to publish. Put simply, RAE committees in the laboratory sciences must produce four excellent peer-reviewed publications per member of staff to meet the assessment criteria. Hence this is becoming a minimum requirement for existing members of staff, and a benchmark against which to measure new recruits. But prolific publication does not necessarily add up to good science. Indeed, one young researcher was told in an interview for a lectureship that, 'although your publications are excellent, unfortunately, there are not enough of them. You should not worry so much about the quality of your publications.' In a recent letter to Nature, the publication records of ten senior academics in the area of molecular microbiology were analysed. Each of these academics is now in a very senior position in a university or research institute, with careers spanning a total of 262 years. All have achieved considerable status and respect within the UK and worldwide. However, their early publication records would preclude them from academic posts if the present criteria were applied. Although the quality of their work was clearly outstanding (they initiated novel and perhaps risky projects early in their careers, which have since been recognised as research of international importance), they generally produced few papers over the first ten years after completing their PhDs. Indeed, over this period, they have an average gap of 3.8 years without the publication or production of a cited paper. In one case there was a five-year gap.
Although these enquiries were limited to a specific area of research, it seems that this model of career progression is widespread in all of the chemical and biological sciences. It seems that the atmosphere surrounding the RAE may be stifling talented young researchers or driving them out of science altogether. There urgently needs to be a more considered and careful nurturing of our young scientific talent. A new member of academic staff in the chemical or biological laboratory sciences surely needs a commitment to resources over a five- to ten-year period to establish their research. Senior academics managing this situation might be well advised to demand a long-term view from the government. Unfortunately, management seems to be pulling in the opposite direction. Academics have to deal with more students than ever, and the paperwork associated with the assessment of the quality of teaching is increasing. On top of that, the salary for university lecturers starts at only £32,665 (rising to £58,048). Tenure is rare, and most contracts are offered on a temporary basis. With the mean starting salary for new graduates now close to £36,000, it is surprising that anybody still wants a job in academia. It need not be like this. Dealings with the many senior research managers in the chemical and water industries at the QUESTOR Centre (Queen's University Environmental Science and Technology Research Centre) provided some insight. The overall impression is that the private sector has a much more sensible and enlightened long-term view of research priorities. Why can the universities not develop the same attitude? All organisations need managers, yet these managers will make sure they survive even when those they manage are lost. Research management in UK universities is in danger of evolving into such an overly controlled state that it will allow little time for careful thinking and teaching, and will undermine the development of imaginative young scientists.
People in industry seem to understand the long-term nature of research.
e
id_433
ARE WE MANAGING TO DESTROY SCIENCE? The government in the UK was concerned about the efficiency of research institutions and set up a Research Assessment Exercise (RAE) to consider what was being done in each university. The article which follows is a response to the imposition of the RAE. In the year ahead, the UK government is due to carry out the next Research Assessment Exercise (RAE). The goal of this regular five-yearly check-up of the university sector is easy to understand: to increase productivity within public sector research. But striving for such productivity can lead to unfortunate consequences. In the case of the RAE, one risk attached to this is the creation of an overly controlling management culture that threatens the future of imaginative science. Academic institutions are already preparing for the RAE with some anxiety, understandably so, for the financial consequences of failure are severe. Departments with a current rating of four or five (research is rated on a five-point scale, with five the highest) must maintain their score or face a considerable loss of funding. Meanwhile, those with ratings of two or three are fighting for their survival. The pressures are forcing research management onto the defensive. Common strategies for increasing academic output include grading individual researchers every year according to RAE criteria, pressurising them to publish anything regardless of quality, diverting funds from key and expensive laboratory science into areas of study such as management, and even threatening to close departments. Another strategy being readily adopted is to remove scientists who appear to be less active in research and replace them with new, probably younger, staff. Although such measures may deliver results in the RAE, they are putting unsustainable pressure on academic staff. Particularly insidious is the pressure to publish. Put simply, RAE committees in the laboratory sciences must produce four excellent peer-reviewed publications per member of staff to meet the assessment criteria. Hence this is becoming a minimum requirement for existing members of staff, and a benchmark against which to measure new recruits. But prolific publication does not necessarily add up to good science. Indeed, one young researcher was told in an interview for a lectureship that, 'although your publications are excellent, unfortunately, there are not enough of them. You should not worry so much about the quality of your publications.' In a recent letter to Nature, the publication records of ten senior academics in the area of molecular microbiology were analysed. Each of these academics is now in a very senior position in a university or research institute, with careers spanning a total of 262 years. All have achieved considerable status and respect within the UK and worldwide. However, their early publication records would preclude them from academic posts if the present criteria were applied. Although the quality of their work was clearly outstanding (they initiated novel and perhaps risky projects early in their careers, which have since been recognised as research of international importance), they generally produced few papers over the first ten years after completing their PhDs. Indeed, over this period, they have an average gap of 3.8 years without the publication or production of a cited paper. In one case there was a five-year gap.
Although these enquiries were limited to a specific area of research, it seems that this model of career progression is widespread in all of the chemical and biological sciences. It seems that the atmosphere surrounding the RAE may be stifling talented young researchers or driving them out of science altogether. There urgently needs to be a more considered and careful nurturing of our young scientific talent. A new member of academic staff in the chemical or biological laboratory sciences surely needs a commitment to resources over a five- to ten-year period to establish their research. Senior academics managing this situation might be well advised to demand a long-term view from the government. Unfortunately, management seems to be pulling in the opposite direction. Academics have to deal with more students than ever, and the paperwork associated with the assessment of the quality of teaching is increasing. On top of that, the salary for university lecturers starts at only £32,665 (rising to £58,048). Tenure is rare, and most contracts are offered on a temporary basis. With the mean starting salary for new graduates now close to £36,000, it is surprising that anybody still wants a job in academia. It need not be like this. Dealings with the many senior research managers in the chemical and water industries at the QUESTOR Centre (Queen's University Environmental Science and Technology Research Centre) provided some insight. The overall impression is that the private sector has a much more sensible and enlightened long-term view of research priorities. Why can the universities not develop the same attitude? All organisations need managers, yet these managers will make sure they survive even when those they manage are lost. Research management in UK universities is in danger of evolving into such an overly controlled state that it will allow little time for careful thinking and teaching, and will undermine the development of imaginative young scientists.
Good researchers are usually prolific publishers.
c
id_434
ARE WE MANAGING TO DESTROY SCIENCE? The government in the UK was concerned about the efficiency of research institutions and set up a Research Assessment Exercise (RAE) to consider what was being done in each university. The article which follows is a response to the imposition of the RAE. In the year ahead, the UK government is due to carry out the next Research Assessment Exercise (RAE). The goal of this regular five-yearly check-up of the university sector is easy to understand: to increase productivity within public sector research. But striving for such productivity can lead to unfortunate consequences. In the case of the RAE, one risk attached to this is the creation of an overly controlling management culture that threatens the future of imaginative science. Academic institutions are already preparing for the RAE with some anxiety, understandably so, for the financial consequences of failure are severe. Departments with a current rating of four or five (research is rated on a five-point scale, with five the highest) must maintain their score or face a considerable loss of funding. Meanwhile, those with ratings of two or three are fighting for their survival. The pressures are forcing research management onto the defensive. Common strategies for increasing academic output include grading individual researchers every year according to RAE criteria, pressurising them to publish anything regardless of quality, diverting funds from key and expensive laboratory science into areas of study such as management, and even threatening to close departments. Another strategy being readily adopted is to remove scientists who appear to be less active in research and replace them with new, probably younger, staff. Although such measures may deliver results in the RAE, they are putting unsustainable pressure on academic staff. Particularly insidious is the pressure to publish. Put simply, RAE committees in the laboratory sciences must produce four excellent peer-reviewed publications per member of staff to meet the assessment criteria. Hence this is becoming a minimum requirement for existing members of staff, and a benchmark against which to measure new recruits. But prolific publication does not necessarily add up to good science. Indeed, one young researcher was told in an interview for a lectureship that, 'although your publications are excellent, unfortunately, there are not enough of them. You should not worry so much about the quality of your publications.' In a recent letter to Nature, the publication records of ten senior academics in the area of molecular microbiology were analysed. Each of these academics is now in a very senior position in a university or research institute, with careers spanning a total of 262 years. All have achieved considerable status and respect within the UK and worldwide. However, their early publication records would preclude them from academic posts if the present criteria were applied. Although the quality of their work was clearly outstanding (they initiated novel and perhaps risky projects early in their careers, which have since been recognised as research of international importance), they generally produced few papers over the first ten years after completing their PhDs. Indeed, over this period, they have an average gap of 3.8 years without the publication or production of a cited paper. In one case there was a five-year gap.
Although these enquiries were limited to a specific area of research, it seems that this model of career progression is widespread in all of the chemical and biological sciences. It seems that the atmosphere surrounding the RAE may be stifling talented young researchers or driving them out of science altogether. There urgently needs to be a more considered and careful nurturing of our young scientific talent. A new member of academic staff in the chemical or biological laboratory sciences surely needs a commitment to resources over a five- to ten-year period to establish their research. Senior academics managing this situation might be well advised to demand a long-term view from the government. Unfortunately, management seems to be pulling in the opposite direction. Academics have to deal with more students than ever, and the paperwork associated with the assessment of the quality of teaching is increasing. On top of that, the salary for university lecturers starts at only £32,665 (rising to £58,048). Tenure is rare, and most contracts are offered on a temporary basis. With the mean starting salary for new graduates now close to £36,000, it is surprising that anybody still wants a job in academia. It need not be like this. Dealings with the many senior research managers in the chemical and water industries at the QUESTOR Centre (Queen's University Environmental Science and Technology Research Centre) provided some insight. The overall impression is that the private sector has a much more sensible and enlightened long-term view of research priorities. Why can the universities not develop the same attitude? All organisations need managers, yet these managers will make sure they survive even when those they manage are lost. Research management in UK universities is in danger of evolving into such an overly controlled state that it will allow little time for careful thinking and teaching, and will undermine the development of imaginative young scientists.
Management may be the only winners under the new system.
e
id_435
Abdominal pain in children may be a symptom of emotional disturbance, especially where it appears in conjunction with phobias or sleep disorders such as nightmares or sleep-walking. It may also be linked to eating habits: a study carried out in the USA found that children with pain tended to be more fussy about what and how much they ate, and to have over-anxious parents who spent a considerable time trying to persuade them to eat. Although abdominal pain had previously been linked to excessive milk-drinking, this research found that children with pain drank rather less milk than those in the control group.
Abdominal pain in children may be psychosomatic in nature.
n
id_436
Abdominal pain in children may be a symptom of emotional disturbance, especially where it appears in conjunction with phobias or sleep disorders such as nightmares or sleep-walking. It may also be linked to eating habits: a study carried out in the USA found that children with pain tended to be more fussy about what and how much they ate, and to have over-anxious parents who spent a considerable time trying to persuade them to eat. Although abdominal pain had previously been linked to excessive milk-drinking, this research found that children with pain drank rather less milk than those in the control group.
Drinking milk may help to prevent abdominal pain in children.
n
id_437
Abdominal pain in children may be a symptom of emotional disturbance, especially where it appears in conjunction with phobias or sleep disorders such as nightmares or sleep-walking. It may also be linked to eating habits: a study carried out in the USA found that children with pain tended to be more fussy about what and how much they ate, and to have over-anxious parents who spent a considerable time trying to persuade them to eat. Although abdominal pain had previously been linked to excessive milk-drinking, this research found that children with pain drank rather less milk than those in the control group.
There is no clear cause for abdominal pain in children.
e
id_438
Abdominal pain in children may be a symptom of emotional disturbance, especially where it appears in conjunction with phobias or sleep disorders such as nightmares or sleep-walking. It may also be linked to eating habits: a study carried out in the USA found that children with pain tended to be more fussy about what and how much they ate, and to have over-anxious parents who spent a considerable time trying to persuade them to eat. Although abdominal pain had previously been linked to excessive milk-drinking, this research found that children with pain drank rather less milk than those in the control group.
Children who have problems sleeping are more likely to suffer from abdominal pain.
e
id_439
About 40 percent of urban Indian husbands think it is a good idea for wives with school-age children to work outside the home. Only about ten percent of rural Indian husbands approve of the same. Every second urban Indian wife, and one in four rural Indian wives with school-age children, has a job outside her home.
Rural Indian families have more children than urban Indian families.
n
id_440
About 40 percent of urban Indian husbands think it is a good idea for wives with school-age children to work outside the home. Only about ten percent of rural Indian husbands approve of the same. Every second urban Indian wife, and one in four rural Indian wives with school-age children, has a job outside her home.
Rural Indian husbands would seem to be less satisfied about working wives who have school age children than urban Indian husbands.
e
id_441
About 40 percent of urban Indian husbands think it is a good idea for wives with school-age children to work outside the home. Only about ten percent of rural Indian husbands approve of the same. Every second urban Indian wife, and one in four rural Indian wives with school-age children, has a job outside her home.
Rural Indian husbands would seem to be less satisfied about working wives who have school-age children than urban Indian husbands.
e
id_442
About 40 percent of urban Indian husbands think it is a good idea for wives with school-age children to work outside the home. Only about ten percent of rural Indian husbands approve of the same. Every second urban Indian wife, and one in four rural Indian wives with school-age children, has a job outside her home.
Employment opportunities for urban Indian wives are greater than for rural Indian wives.
c
id_443
About 40 percent of urban Indian husbands think it is a good idea for wives with school-age children to work outside the home. Only about ten percent of rural Indian husbands approve of the same. Every second urban Indian wife, and one in four rural Indian wives with school-age children, has a job outside her home.
Rural Indian families have more children than urban Indian families.
c
id_444
About 40 percent of urban Indian husbands think it is a good idea for wives with school-age children to work outside the home. Only about ten percent of rural Indian husbands approve of the same. Every second urban Indian wife, and one in four rural Indian wives with school-age children, has a job outside her home.
Urban Indian husbands have a less conservative attitude than rural Indian husbands.
c
id_445
According to new legislation concerning the income tax law, taxpayers may be taxed for positive income tax or negative income tax depending on their annual revenues. Contrary to previous arrangements, where income tax was charged only if an individual had reached a certain revenue threshold, the new legislation suggests compensation in the form of a negative income tax for those who fail to reach a minimum annual income. 'Income' for this purpose includes most sources of monetary contributions, such as paid salary, personal property rental earnings, interest, dividend earnings and passive income. The only exemption would be a transfer of funds between first-degree family members, including children younger than 18 years old.
Earning less than the minimum annual income without having other substantial sources of income will probably lead to a negative income tax.
n
id_446
According to new legislation concerning the income tax law, taxpayers may be charged positive income tax or negative income tax depending on their annual revenues. Contrary to previous arrangements, where income tax was charged only if an individual had reached a certain revenue threshold, the new legislation proposes compensation in the form of a negative income tax for those who fail to reach a minimum annual income. 'Income' for this purpose includes most sources of monetary contributions, such as paid salary, personal property rental earnings, interest, dividend earnings and passive income. The only exemption would be a transfer of funds between first-degree family members, including children younger than 18 years old.
The vast majority of taxpayers will continue to be taxed positively.
n
id_447
According to new legislation concerning the income tax law, taxpayers may be charged positive income tax or negative income tax depending on their annual revenues. Contrary to previous arrangements, where income tax was charged only if an individual had reached a certain revenue threshold, the new legislation proposes compensation in the form of a negative income tax for those who fail to reach a minimum annual income. 'Income' for this purpose includes most sources of monetary contributions, such as paid salary, personal property rental earnings, interest, dividend earnings and passive income. The only exemption would be a transfer of funds between first-degree family members, including children younger than 18 years old.
Paid salary, personal property rental earnings, interest, dividend earnings and passive income are the only types of income that can determine a person's eligibility for negative income tax compensation.
n
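The negative income tax rule in the three premises above reduces to a small piece of arithmetic, sketched below to make the positive/negative split concrete. This is a sketch under assumed numbers only: the threshold and both rates are hypothetical, since the passage gives no figures.

    def income_tax(annual_income: float) -> float:
        # Hypothetical parameters; the premise specifies none of these values.
        minimum_annual_income = 15_000.0  # assumed minimum annual income
        tax_rate = 0.20                   # assumed positive income tax rate
        compensation_rate = 0.50          # assumed negative income tax rate
        if annual_income >= minimum_annual_income:
            # Positive income tax, charged on revenue above the threshold.
            return tax_rate * (annual_income - minimum_annual_income)
        # Negative income tax: compensation proportional to the shortfall.
        return -compensation_rate * (minimum_annual_income - annual_income)

    print(income_tax(25_000))  #  2000.0: tax owed
    print(income_tax(10_000))  # -2500.0: compensation received

A negative return value marks compensation due to the taxpayer, matching the passage's description of earners below the minimum being compensated rather than charged.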
id_448
According to recently published figures, internet sales last year comprised nearly five per cent of the UK's retail spending. It was the only retail channel showing growth, with a 20 per cent rise on the previous year's sales. The flourishing of e-commerce can be largely attributed to the increasing popularity of on-line supermarket shopping and shoppers' preference for staying home. The UK leads its European neighbours in internet shopping revenue, in part because of higher credit card usage than countries such as Germany and France. Compared with continental Europe, the UK also has higher levels of computer ownership and wider access to the broadband services that facilitate internet purchases. Business experts forecast a trebling of internet retail sales in the UK and France over the next five years.
Low credit card usage is the main reason that continental European countries lag behind the UK in internet retail.
n
id_449
According to recently published figures, internet sales last year comprised nearly five per cent of the UK's retail spending. It was the only retail channel showing growth, with a 20 per cent rise on the previous year's sales. The flourishing of e-commerce can be largely attributed to the increasing popularity of on-line supermarket shopping and shoppers' preference for staying home. The UK leads its European neighbours in internet shopping revenue, in part because of higher credit card usage than countries such as Germany and France. Compared with continental Europe, the UK also has higher levels of computer ownership and wider access to the broadband services that facilitate internet purchases. Business experts forecast a trebling of internet retail sales in the UK and France over the next five years.
The passage suggests that internet shopping appeals to consumers who do not like going out to shop at shopping centres.
e
id_450
According to recently published figures, internet sales last year comprised nearly five per cent of the UK's retail spending. It was the only retail channel showing growth, with a 20 per cent rise on the previous year's sales. The flourishing of e-commerce can be largely attributed to the increasing popularity of on-line supermarket shopping and shoppers' preference for staying home. The UK leads its European neighbours in internet shopping revenue, in part because of higher credit card usage than countries such as Germany and France. Compared with continental Europe, the UK also has higher levels of computer ownership and wider access to the broadband services that facilitate internet purchases. Business experts forecast a trebling of internet retail sales in the UK and France over the next five years.
The UK is at the forefront of increasing European internet sales.
e
id_451
According to recently published figures, internet sales last year comprised nearly five per cent of the UK's retail spending. It was the only retail channel showing growth, with a 20 per cent rise on the previous year's sales. The flourishing of e-commerce can be largely attributed to the increasing popularity of on-line supermarket shopping and shoppers' preference for staying home. The UK leads its European neighbours in internet shopping revenue, in part because of higher credit card usage than countries such as Germany and France. Compared with continental Europe, the UK also has higher levels of computer ownership and wider access to the broadband services that facilitate internet purchases. Business experts forecast a trebling of internet retail sales in the UK and France over the next five years.
Internet retail sales in the UK and France will be higher next year.
n
id_452
According to recently published figures, internet sales last year comprised nearly five per cent of the UK's retail spending. It was the only retail channel showing growth, with a 20 per cent rise on the previous year's sales. The flourishing of e-commerce can be largely attributed to the increasing popularity of on-line supermarket shopping and shoppers' preference for staying home. The UK leads its European neighbours in internet shopping revenue, in part because of higher credit card usage than countries such as Germany and France. Compared with continental Europe, the UK also has higher levels of computer ownership and wider access to the broadband services that facilitate internet purchases. Business experts forecast a trebling of internet retail sales in the UK and France over the next five years.
Sales trends for internet shopping in Germany have mirrored those in the UK.
c
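The forecast in the premises above, a trebling of internet retail sales in the UK and France over the next five years, implies a steady annual growth rate only under the assumption that growth is spread evenly; the passage makes no such claim. A quick sketch of that implied rate, under that assumption:

    # Implied compound annual growth rate if sales treble evenly over five years.
    # The even spread is an assumption for illustration, not a claim in the passage.
    implied_cagr = 3 ** (1 / 5) - 1
    print(f"{implied_cagr:.1%}")  # 24.6%

Even so, a five-year total says nothing certain about any single year, which is why year-by-year statements cannot be read directly off the forecast.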
id_453
According to the Business School Admission Council, last year applications for full-time MBA programmes declined at 75 per cent of educational institutions offering the degree, with applications down by more than 20 per cent at over half of the schools surveyed. MBAs have traditionally been seen as a fast-track to higher salaries and senior management positions, but the proliferation of MBA programmes has raised questions about the value of the degree. Business schools argue that they offer essential management training, and point to a poll of recent MBA graduates, over 80 per cent of whom rated their programmes as excellent. But critics remain unconvinced that MBAs remain a necessary qualification for a high-power career. Chris Wilson, Senior Partner at Wilson Recruitment & Selection, says: 'An MBA, even from a top-tier school, is no substitute for experience and a successful performance record.' Given the expense of full-time programmes, it is perhaps unsurprising that application figures for part-time executive MBA programmes increased by 50 per cent last year.
Last year, part-time MBA programs received more applications than traditional full-time courses.
n
id_454
According to the Business School Admission Council, last year applications for full-time MBA programmes declined at 75 per cent of educational institutions offering the degree, with applications down by more than 20 per cent at over half of the schools surveyed. MBAs have traditionally been seen as a fast-track to higher salaries and senior management positions, but the proliferation of MBA programmes has raised questions about the value of the degree. Business schools argue that they offer essential management training, and point to a poll of recent MBA graduates, over 80 per cent of whom rated their programmes as excellent. But critics remain unconvinced that MBAs remain a necessary qualification for a high-power career. Chris Wilson, Senior Partner at Wilson Recruitment & Selection, says: 'An MBA, even from a top-tier school, is no substitute for experience and a successful performance record.' Given the expense of full-time programmes, it is perhaps unsurprising that application figures for part-time executive MBA programmes increased by 50 per cent last year.
The quality of MBA courses has declined as the number of programs on offer has increased.
n
id_455
According to the Business School Admission Council, last year applications for full-time MBA programmes declined at 75 per cent of educational institutions offering the degree, with applications down by more than 20 per cent at over half of the schools surveyed. MBAs have traditionally been seen as a fast-track to higher salaries and senior management positions, but the proliferation of MBA programmes has raised questions about the value of the degree. Business schools argue that they offer essential management training, and point to a poll of recent MBA graduates, over 80 per cent of whom rated their programmes as excellent. But critics remain unconvinced that MBAs remain a necessary qualification for a high-power career. Chris Wilson, Senior Partner at Wilson Recruitment & Selection, says: 'An MBA, even from a top-tier school, is no substitute for experience and a successful performance record.' Given the expense of full-time programmes, it is perhaps unsurprising that application figures for part-time executive MBA programmes increased by 50 per cent last year.
The passage argues that there is no longer any value in attaining an MBA.
c
id_456
According to the Business School Admission Council, last year applications for full-time MBA programmes declined at 75 per cent of educational institutions offering the degree, with applications down by more than 20 per cent at over half of the schools surveyed. MBAs have traditionally been seen as a fast-track to higher salaries and senior management positions, but the proliferation of MBA programmes has raised questions about the value of the degree. Business schools argue that they offer essential management training, and point to a poll of recent MBA graduates, over 80 per cent of whom rated their programmes as excellent. But critics remain unconvinced that MBAs remain a necessary qualification for a high-power career. Chris Wilson, Senior Partner at Wilson Recruitment & Selection, says: 'An MBA, even from a top-tier school, is no substitute for experience and a successful performance record.' Given the expense of full-time programmes, it is perhaps unsurprising that application figures for part-time executive MBA programmes increased by 50 per cent last year.
The passage suggests that historically MBAs were used to build a high-powered career.
e
id_457
According to the Business School Admission Council, last year applications for full-time MBA programmes declined at 75 per cent of educational institutions offering the degree, with applications down by more than 20 per cent at over half of the schools surveyed. MBAs have traditionally been seen as a fast-track to higher salaries and senior management positions, but the proliferation of MBA programmes has raised questions about the value of the degree. Business schools argue that they offer essential management training, and point to a poll of recent MBA graduates, over 80 per cent of whom rated their programmes as excellent. But critics remain unconvinced that MBAs remain a necessary qualification for a high-power career. Chris Wilson, Senior Partner at Wilson Recruitment & Selection, says: 'An MBA, even from a top-tier school, is no substitute for experience and a successful performance record.' Given the expense of full-time programmes, it is perhaps unsurprising that application figures for part-time executive MBA programmes increased by 50 per cent last year.
The passage suggests that the MBA degree has been undermined by the plethora of programs on offer.
e
id_458
According to the best MBA annual survey, The Lynx Business School has been the best in the world for the last three years. Four of the best five schools are based in the United States; the fifth, based in Europe, is the Glasgow Business School. Stanford and Harvard were in second and third place respectively. In seventh place, up three from the last survey, came the University of the North West. Currently in nineteenth place is the Bombay School, the highest ranked business school outside of the United States and Europe.
By 'best' the author of the survey most likely means the school voted by a panel of experts to be pre-eminent.
n
id_459
According to the best MBA annual survey, The Lynx Business School has been the best in the world for the last three years. Four of the best five schools are based in the United States; the fifth, based in Europe, is the Glasgow Business School. Stanford and Harvard were in second and third place respectively. In seventh place, up three from the last survey, came the University of the North West. Currently in nineteenth place is the Bombay School, the highest ranked business school outside of the United States and Europe.
The Glasgow Business School came fifth in the survey.
n
id_460
According to the best MBA annual survey, The Lynx Business School has been the best in the world for the last three years. Four of the best five schools are based in the United States; the fifth, based in Europe, is the Glasgow Business School. Stanford and Harvard were in second and third place respectively. In seventh place, up three from the last survey, came the University of the North West. Currently in nineteenth place is the Bombay School, the highest ranked business school outside of the United States and Europe.
The Lynx Business School is based in the United States.
e
id_461
Accountants are pressing the government to allow them a client confidentiality defence if they suspect a client of money laundering or tax avoidance. The government is relying on new legislation concerning the issue of disclosure by lawyers and accountants to make substantial cuts in both money hidden by criminals and the amount of unpaid tax. Consultation has begun about giving accountants the same protection as lawyers on the issue of laundering but the government has rejected claims that lawyers have any more protection than accountants on the issue of disclosing tax avoidance.
There are two issues at stake and it would seem that the current rules are different for lawyers and accountants.
e
id_462
Accountants are pressing the government to allow them a client confidentiality defence if they suspect a client of money laundering or tax avoidance. The government is relying on new legislation concerning the issue of disclosure by lawyers and accountants to make substantial cuts in both money hidden by criminals and the amount of unpaid tax. Consultation has begun about giving accountants the same protection as lawyers on the issue of laundering but the government has rejected claims that lawyers have any more protection than accountants on the issue of disclosing tax avoidance.
The legislation will require lawyers and accountants to provide confidential information to the government about all their clients.
c
id_463
Accountants are pressing the government to allow them a client confidentiality defence if they suspect a client of money laundering or tax avoidance. The government is relying on new legislation concerning the issue of disclosure by lawyers and accountants to make substantial cuts in both money hidden by criminals and the amount of unpaid tax. Consultation has begun about giving accountants the same protection as lawyers on the issue of laundering but the government has rejected claims that lawyers have any more protection than accountants on the issue of disclosing tax avoidance.
The accountants want parity with lawyers.
n
id_464
Activities for Children Twenty-five years ago, children in London walked to school and played in parks and playing fields after school and at the weekend. Today they are usually driven to school by parents anxious about safety and spend hours glued to television screens or computer games. Meanwhile, community playing fields are being sold off to property developers at an alarming rate. 'This change in lifestyle has, sadly, meant greater restrictions on children,' says Neil Armstrong, Professor of Health and Exercise Sciences at the University of Exeter. 'If children continue to be this inactive, they'll be storing up big problems for the future.' In 1985, Professor Armstrong headed a five-year research project into children's fitness. The results, published in 1990, were alarming. The survey, which monitored 700 11-16-year-olds, found that 48 per cent of girls and 41 per cent of boys already exceeded safe cholesterol levels set for children by the American Heart Foundation. Armstrong adds, 'The heart is a muscle and needs exercise, or it loses its strength.' It also found that 13 per cent of boys and 10 per cent of girls were overweight. More disturbingly, the survey found that over a four-day period, half the girls and one-third of the boys did less exercise than the equivalent of a brisk 10-minute walk. High levels of cholesterol, excess body fat and inactivity are believed to increase the risk of coronary heart disease. Physical education is under pressure in the UK: most schools devote little more than 100 minutes a week to it in curriculum time, which is less than in many other European countries. Three European countries are giving children a head start in PE: France, Austria and Switzerland offer at least two hours in primary and secondary schools. These findings, from the European Union of Physical Education Associations, prompted specialists in children's physiology to call on European governments to give youngsters a daily PE programme. The survey shows that the UK ranks 13th out of the 25 countries, with Ireland bottom, averaging under an hour a week for PE. From age six to 18, British children received, on average, 106 minutes of PE a week. Professor Armstrong, who presented the findings at the meeting, noted that since the introduction of the national curriculum there had been a marked fall in the time devoted to PE in UK schools, with only a minority of pupils getting two hours a week. As a former junior football international, Professor Armstrong is a passionate advocate for sport. Although the Government has poured millions into beefing up sport in the community, there is less commitment to it as part of the crammed school curriculum. This means that many children never acquire the necessary skills to thrive in team games. If they are no good at them, they lose interest and establish an inactive pattern of behaviour. When this is coupled with a poor diet, it will lead inevitably to weight gain. Seventy per cent of British children give up all sport when they leave school, compared with only 20 per cent of French teenagers. Professor Armstrong believes that there is far too great an emphasis on team games at school. 'We need to look at the time devoted to PE and balance it between individual and pair activities, such as aerobics and badminton, as well as team sports.' He added that children need to have the opportunity to take part in a wide variety of individual, partner and team sports. 
The good news, however, is that a few small companies and children's activity groups have reacted positively and creatively to the problem. 'Take That,' shouts Gloria Thomas, striking a disco pose astride her mini-spacehopper. 'Take That,' echo a flock of toddlers, adopting outrageous postures astride their space hoppers. 'Michael Jackson,' she shouts, and they all do a spoof fan-crazed shriek. During the wild and chaotic hopper race across the studio floor, commands like this are issued and responded to with untrammelled glee. The sight of 15 bouncing seven-year-olds who seem about to launch into orbit at every bounce brings tears to the eyes. Uncoordinated, loud, excited and emotional, children provide raw comedy. Any cardiovascular exercise is a good option, and it doesn't necessarily have to be high intensity. 'It can be anything that gets your heart rate up, such as walking the dog, swimming, miming, skipping, hiking. Even walking through the grocery store can be exercise,' Samis-Smith said. What they don't know is that they're at a Fit Kids class, and that the fun is a disguise for the serious exercise plan they're covertly being taken through. Fit Kids trains parents to run fitness classes for children. 'Ninety per cent of children don't like team sports,' says company director, Gillian Gale. A Prevention survey found that children whose parents keep in shape are much more likely to have healthy body weights themselves. 'There's nothing worse than telling a child what he needs to do and not doing it yourself,' says Elizabeth Ward, R.D., a Boston nutritional consultant and author of Healthy Foods, Healthy Kids. 'Set a good example and get your nutritional house in order first.' In the 1930s and '40s, kids expended 800 calories a day just walking, carrying water, and doing other chores, notes Fima Lifshitz, M.D., a pediatric endocrinologist in Santa Barbara. Now, kids in obese families are expending only 200 calories a day in physical activity, says Lifshitz. 'Incorporate more movement in your family's life: park farther away from the stores at the mall, take stairs instead of the elevator, and walk to nearby friends' houses instead of driving.'
British children generally do less exercise than children in some other European countries.
e
id_465
Activities for Children Twenty-five years ago, children in London walked to school and played in parks and playing fields after school and at the weekend. Today they are usually driven to school by parents anxious about safety and spend hours glued to television screens or computer games. Meanwhile, community playing fields are being sold off to property developers at an alarming rate. 'This change in lifestyle has, sadly, meant greater restrictions on children,' says Neil Armstrong, Professor of Health and Exercise Sciences at the University of Exeter. 'If children continue to be this inactive, they'll be storing up big problems for the future.' In 1985, Professor Armstrong headed a five-year research project into children's fitness. The results, published in 1990, were alarming. The survey, which monitored 700 11-16-year-olds, found that 48 per cent of girls and 41 per cent of boys already exceeded safe cholesterol levels set for children by the American Heart Foundation. Armstrong adds, 'The heart is a muscle and needs exercise, or it loses its strength.' It also found that 13 per cent of boys and 10 per cent of girls were overweight. More disturbingly, the survey found that over a four-day period, half the girls and one-third of the boys did less exercise than the equivalent of a brisk 10-minute walk. High levels of cholesterol, excess body fat and inactivity are believed to increase the risk of coronary heart disease. Physical education is under pressure in the UK: most schools devote little more than 100 minutes a week to it in curriculum time, which is less than in many other European countries. Three European countries are giving children a head start in PE: France, Austria and Switzerland offer at least two hours in primary and secondary schools. These findings, from the European Union of Physical Education Associations, prompted specialists in children's physiology to call on European governments to give youngsters a daily PE programme. The survey shows that the UK ranks 13th out of the 25 countries, with Ireland bottom, averaging under an hour a week for PE. From age six to 18, British children received, on average, 106 minutes of PE a week. Professor Armstrong, who presented the findings at the meeting, noted that since the introduction of the national curriculum there had been a marked fall in the time devoted to PE in UK schools, with only a minority of pupils getting two hours a week. As a former junior football international, Professor Armstrong is a passionate advocate for sport. Although the Government has poured millions into beefing up sport in the community, there is less commitment to it as part of the crammed school curriculum. This means that many children never acquire the necessary skills to thrive in team games. If they are no good at them, they lose interest and establish an inactive pattern of behaviour. When this is coupled with a poor diet, it will lead inevitably to weight gain. Seventy per cent of British children give up all sport when they leave school, compared with only 20 per cent of French teenagers. Professor Armstrong believes that there is far too great an emphasis on team games at school. 'We need to look at the time devoted to PE and balance it between individual and pair activities, such as aerobics and badminton, as well as team sports.' He added that children need to have the opportunity to take part in a wide variety of individual, partner and team sports. 
The good news, however, is that a few small companies and children's activity groups have reacted positively and creatively to the problem. 'Take That,' shouts Gloria Thomas, striking a disco pose astride her mini-spacehopper. 'Take That,' echo a flock of toddlers, adopting outrageous postures astride their space hoppers. 'Michael Jackson,' she shouts, and they all do a spoof fan-crazed shriek. During the wild and chaotic hopper race across the studio floor, commands like this are issued and responded to with untrammelled glee. The sight of 15 bouncing seven-year-olds who seem about to launch into orbit at every bounce brings tears to the eyes. Uncoordinated, loud, excited and emotional, children provide raw comedy. Any cardiovascular exercise is a good option, and it doesn't necessarily have to be high intensity. 'It can be anything that gets your heart rate up, such as walking the dog, swimming, miming, skipping, hiking. Even walking through the grocery store can be exercise,' Samis-Smith said. What they don't know is that they're at a Fit Kids class, and that the fun is a disguise for the serious exercise plan they're covertly being taken through. Fit Kids trains parents to run fitness classes for children. 'Ninety per cent of children don't like team sports,' says company director, Gillian Gale. A Prevention survey found that children whose parents keep in shape are much more likely to have healthy body weights themselves. 'There's nothing worse than telling a child what he needs to do and not doing it yourself,' says Elizabeth Ward, R.D., a Boston nutritional consultant and author of Healthy Foods, Healthy Kids. 'Set a good example and get your nutritional house in order first.' In the 1930s and '40s, kids expended 800 calories a day just walking, carrying water, and doing other chores, notes Fima Lifshitz, M.D., a pediatric endocrinologist in Santa Barbara. Now, kids in obese families are expending only 200 calories a day in physical activity, says Lifshitz. 'Incorporate more movement in your family's life: park farther away from the stores at the mall, take stairs instead of the elevator, and walk to nearby friends' houses instead of driving.'
According to the American Heart Foundation, boys' cholesterol levels are higher than girls'.
n
id_466
Activities for Children Twenty-five years ago, children in London walked to school and played in parks and playing fields after school and at the weekend. Today they are usually driven to school by parents anxious about safety and spend hours glued to television screens or computer games. Meanwhile, community playing fields are being sold off to property developers at an alarming rate. 'This change in lifestyle has, sadly, meant greater restrictions on children,' says Neil Armstrong, Professor of Health and Exercise Sciences at the University of Exeter. 'If children continue to be this inactive, they'll be storing up big problems for the future.' In 1985, Professor Armstrong headed a five-year research project into children's fitness. The results, published in 1990, were alarming. The survey, which monitored 700 11-16-year-olds, found that 48 per cent of girls and 41 per cent of boys already exceeded safe cholesterol levels set for children by the American Heart Foundation. Armstrong adds, 'The heart is a muscle and needs exercise, or it loses its strength.' It also found that 13 per cent of boys and 10 per cent of girls were overweight. More disturbingly, the survey found that over a four-day period, half the girls and one-third of the boys did less exercise than the equivalent of a brisk 10-minute walk. High levels of cholesterol, excess body fat and inactivity are believed to increase the risk of coronary heart disease. Physical education is under pressure in the UK: most schools devote little more than 100 minutes a week to it in curriculum time, which is less than in many other European countries. Three European countries are giving children a head start in PE: France, Austria and Switzerland offer at least two hours in primary and secondary schools. These findings, from the European Union of Physical Education Associations, prompted specialists in children's physiology to call on European governments to give youngsters a daily PE programme. The survey shows that the UK ranks 13th out of the 25 countries, with Ireland bottom, averaging under an hour a week for PE. From age six to 18, British children received, on average, 106 minutes of PE a week. Professor Armstrong, who presented the findings at the meeting, noted that since the introduction of the national curriculum there had been a marked fall in the time devoted to PE in UK schools, with only a minority of pupils getting two hours a week. As a former junior football international, Professor Armstrong is a passionate advocate for sport. Although the Government has poured millions into beefing up sport in the community, there is less commitment to it as part of the crammed school curriculum. This means that many children never acquire the necessary skills to thrive in team games. If they are no good at them, they lose interest and establish an inactive pattern of behaviour. When this is coupled with a poor diet, it will lead inevitably to weight gain. Seventy per cent of British children give up all sport when they leave school, compared with only 20 per cent of French teenagers. Professor Armstrong believes that there is far too great an emphasis on team games at school. 'We need to look at the time devoted to PE and balance it between individual and pair activities, such as aerobics and badminton, as well as team sports.' He added that children need to have the opportunity to take part in a wide variety of individual, partner and team sports. 
The good news, however, is that a few small companies and children's activity groups have reacted positively and creatively to the problem. 'Take That,' shouts Gloria Thomas, striking a disco pose astride her mini-spacehopper. 'Take That,' echo a flock of toddlers, adopting outrageous postures astride their space hoppers. 'Michael Jackson,' she shouts, and they all do a spoof fan-crazed shriek. During the wild and chaotic hopper race across the studio floor, commands like this are issued and responded to with untrammelled glee. The sight of 15 bouncing seven-year-olds who seem about to launch into orbit at every bounce brings tears to the eyes. Uncoordinated, loud, excited and emotional, children provide raw comedy. Any cardiovascular exercise is a good option, and it doesn't necessarily have to be high intensity. 'It can be anything that gets your heart rate up, such as walking the dog, swimming, miming, skipping, hiking. Even walking through the grocery store can be exercise,' Samis-Smith said. What they don't know is that they're at a Fit Kids class, and that the fun is a disguise for the serious exercise plan they're covertly being taken through. Fit Kids trains parents to run fitness classes for children. 'Ninety per cent of children don't like team sports,' says company director, Gillian Gale. A Prevention survey found that children whose parents keep in shape are much more likely to have healthy body weights themselves. 'There's nothing worse than telling a child what he needs to do and not doing it yourself,' says Elizabeth Ward, R.D., a Boston nutritional consultant and author of Healthy Foods, Healthy Kids. 'Set a good example and get your nutritional house in order first.' In the 1930s and '40s, kids expended 800 calories a day just walking, carrying water, and doing other chores, notes Fima Lifshitz, M.D., a pediatric endocrinologist in Santa Barbara. Now, kids in obese families are expending only 200 calories a day in physical activity, says Lifshitz. 'Incorporate more movement in your family's life: park farther away from the stores at the mall, take stairs instead of the elevator, and walk to nearby friends' houses instead of driving.'
According to Healthy Kids, the first task is for parents to encourage their children to keep the same healthy body weight.
c
id_467
Activities for Children Twenty-five years ago, children in London walked to school and played in parks and playing fields after school and at the weekend. Today they are usually driven to school by parents anxious about safety and spend hours glued to television screens or computer games. Meanwhile, community playing fields are being sold off to property developers at an alarming rate. 'This change in lifestyle has, sadly, meant greater restrictions on children,' says Neil Armstrong, Professor of Health and Exercise Sciences at the University of Exeter. 'If children continue to be this inactive, they'll be storing up big problems for the future.' In 1985, Professor Armstrong headed a five-year research project into children's fitness. The results, published in 1990, were alarming. The survey, which monitored 700 11-16-year-olds, found that 48 per cent of girls and 41 per cent of boys already exceeded safe cholesterol levels set for children by the American Heart Foundation. Armstrong adds, 'The heart is a muscle and needs exercise, or it loses its strength.' It also found that 13 per cent of boys and 10 per cent of girls were overweight. More disturbingly, the survey found that over a four-day period, half the girls and one-third of the boys did less exercise than the equivalent of a brisk 10-minute walk. High levels of cholesterol, excess body fat and inactivity are believed to increase the risk of coronary heart disease. Physical education is under pressure in the UK: most schools devote little more than 100 minutes a week to it in curriculum time, which is less than in many other European countries. Three European countries are giving children a head start in PE: France, Austria and Switzerland offer at least two hours in primary and secondary schools. These findings, from the European Union of Physical Education Associations, prompted specialists in children's physiology to call on European governments to give youngsters a daily PE programme. The survey shows that the UK ranks 13th out of the 25 countries, with Ireland bottom, averaging under an hour a week for PE. From age six to 18, British children received, on average, 106 minutes of PE a week. Professor Armstrong, who presented the findings at the meeting, noted that since the introduction of the national curriculum there had been a marked fall in the time devoted to PE in UK schools, with only a minority of pupils getting two hours a week. As a former junior football international, Professor Armstrong is a passionate advocate for sport. Although the Government has poured millions into beefing up sport in the community, there is less commitment to it as part of the crammed school curriculum. This means that many children never acquire the necessary skills to thrive in team games. If they are no good at them, they lose interest and establish an inactive pattern of behaviour. When this is coupled with a poor diet, it will lead inevitably to weight gain. Seventy per cent of British children give up all sport when they leave school, compared with only 20 per cent of French teenagers. Professor Armstrong believes that there is far too great an emphasis on team games at school. 'We need to look at the time devoted to PE and balance it between individual and pair activities, such as aerobics and badminton, as well as team sports.' He added that children need to have the opportunity to take part in a wide variety of individual, partner and team sports. 
The good news, however, is that a few small companies and children's activity groups have reacted positively and creatively to the problem. 'Take That,' shouts Gloria Thomas, striking a disco pose astride her mini-spacehopper. 'Take That,' echo a flock of toddlers, adopting outrageous postures astride their space hoppers. 'Michael Jackson,' she shouts, and they all do a spoof fan-crazed shriek. During the wild and chaotic hopper race across the studio floor, commands like this are issued and responded to with untrammelled glee. The sight of 15 bouncing seven-year-olds who seem about to launch into orbit at every bounce brings tears to the eyes. Uncoordinated, loud, excited and emotional, children provide raw comedy. Any cardiovascular exercise is a good option, and it doesn't necessarily have to be high intensity. 'It can be anything that gets your heart rate up, such as walking the dog, swimming, miming, skipping, hiking. Even walking through the grocery store can be exercise,' Samis-Smith said. What they don't know is that they're at a Fit Kids class, and that the fun is a disguise for the serious exercise plan they're covertly being taken through. Fit Kids trains parents to run fitness classes for children. 'Ninety per cent of children don't like team sports,' says company director, Gillian Gale. A Prevention survey found that children whose parents keep in shape are much more likely to have healthy body weights themselves. 'There's nothing worse than telling a child what he needs to do and not doing it yourself,' says Elizabeth Ward, R.D., a Boston nutritional consultant and author of Healthy Foods, Healthy Kids. 'Set a good example and get your nutritional house in order first.' In the 1930s and '40s, kids expended 800 calories a day just walking, carrying water, and doing other chores, notes Fima Lifshitz, M.D., a pediatric endocrinologist in Santa Barbara. Now, kids in obese families are expending only 200 calories a day in physical activity, says Lifshitz. 'Incorporate more movement in your family's life: park farther away from the stores at the mall, take stairs instead of the elevator, and walk to nearby friends' houses instead of driving.'
Skipping is becoming more and more popular in UK schools.
n
id_468
Adult Intelligence Over 90 years ago, Binet and Simon delineated two different methods of assessing intelligence. These were the psychological method (which concentrates mostly on intellectual processes, such as memory and abstract reasoning) and the pedagogical method (which concentrates on assessing what an individual knows). The main concern of Binet and Simon was to predict elementary school performance independently of the social and economic background of the individual student. As a result, they settled on the psychological method, and they spawned an intelligence assessment paradigm which has remained substantially unchanged since their original tests. With few exceptions, the development of adult intelligence assessment instruments proceeded along the same lines as the Binet-Simon tests. Nevertheless, the difficulty of items was increased for older examinees. Thus, extant adult intelligence tests were created as little more than upward extensions of the original Binet-Simon scales. The Binet-Simon tests are quite effective in predicting school success in both primary and secondary educational environments. However, they have been found to be much less predictive of success in post-secondary academic and occupational domains. Such a discrepancy provokes fundamental questions about intelligence. One highly debated question asks whether college success is actually dependent on currently used forms of measured intelligence, or if present measures of intelligence are inadequately sampling the wider domain of adult intellect. One possible answer to this question lies in questioning the preference for the psychological method over the pedagogical method for assessing adult intellect. Recent research across the fields of education, cognitive science, and adult development suggests that much of adult intellect is indeed not adequately sampled by extant intelligence measures and might be better assessed through the pedagogical method (Ackerman, 1996; Gregory, 1994). Several lines of research have also converged on a redefinition of adult intellect that places a greater emphasis on content (knowledge) over process. Substantial strides have been made in delineating knowledge aspects of intellectual performance which are divergent from traditional measures of intelligence (e.g., Wagner, 1987) and in demonstrating that adult performance is greatly influenced by prior topic and domain knowledge (e.g., Alexander et al., 1994). Even some older testing literature seems to indicate that the knowledge measured by the Graduate Record Examinations (GRE) is a comparable or better indicator of future graduate school success and post-graduate performance than traditional aptitude measures (Willingham, 1974). Knowledge and Intelligence When an adult is presented with a completely novel problem (e.g., memorizing a random set of numbers or letters), the basic intellectual processes are typically implicated in predicting which individuals will be successful in solving problems. The dilemma for adult intellectual assessment is that the adult is rarely presented with a completely novel problem in the real world of academic or occupational endeavors. Rather, the problems that an adult is asked to solve almost inevitably draw greatly on his/her accumulated knowledge and skills: one does not build a house by only memorizing physics formulae. 
For an adult, intellect is better conceptualized by the tasks that the person can accomplish and the skills that he/she has developed rather than the number of digits that can be stored in working memory or the number of syllogistic reasoning items that can be correctly evaluated. Thus, the content of the intellect is at least as important as the processes of intellect in determining an adult's real-world problem-solving efficacy. In the artificial intelligence field, researchers have discarded the idea of a useful general problem solver in favor of knowledge-based expert systems. This is because no amount of processing power can achieve real-world problem-solving proficiency without an extensive set of domain-relevant knowledge structures. Gregory (1994) describes the difference between such concepts as potential intelligence (knowledge) and kinetic intelligence (process). Similarly, Schank and Birnbaum (1994) say that what makes someone intelligent is what he [/she] knows. One line of relevant educational research comes from the examination of expert-novice differences, which indicates that the typical expert mainly differs from the novice in terms of experience and the knowledge structures developed through that experience, rather than in terms of intellectual processes (e.g., Glaser, 1991). Additional research from developmental and gerontological perspectives has also shown that various aspects of adult intellectual functioning are greatly determined by knowledge structures and less influenced by the kinds of process measures which have been shown to decline with age over adult development (e.g., Schooler, 1987; Willis & Tosti-Vasey, 1990). Shifting Paradigms By bringing together a variety of sources of research evidence, it becomes clear that our current methods of assessing adult intellect are insufficient. When we are confronted with situations in which the intellectual performance of adults must be predicted (e.g., continuing education or adult learning programs), we must begin to take account of what they know in addition to the traditional assessment of intellectual processes. Because adults are quite diverse in their knowledge structures (e.g., a physicist may know very different things from a carpenter), the challenge for educational assessment researchers in the future will be to develop batteries of tests that can be used to assess different sources of intellectual knowledge for different individuals. When adult knowledge structures are broadly examined with tests such as the Advanced Placement [AP] and College Level Exam Program [CLEP], it may be possible to improve such things as the prediction of adult performance in specific educational endeavors, the placement of individuals, and adult educational counseling.
Better methods of measuring adult intelligence need to be developed.
e
id_469
Adult Intelligence Over 90 years ago, Binet and Simon delineated two different methods of assessing intelligence. These were the psychological method (which concentrates mostly on intellectual processes, such as memory and abstract reasoning) and the pedagogical method (which concentrates on assessing what an individual knows). The main concern of Binet and Simon was to predict elementary school performance independently of the social and economic background of the individual student. As a result, they settled on the psychological method, and they spawned an intelligence assessment paradigm which has remained substantially unchanged since their original tests. With few exceptions, the development of adult intelligence assessment instruments proceeded along the same lines as the Binet-Simon tests. Nevertheless, the difficulty of items was increased for older examinees. Thus, extant adult intelligence tests were created as little more than upward extensions of the original Binet-Simon scales. The Binet-Simon tests are quite effective in predicting school success in both primary and secondary educational environments. However, they have been found to be much less predictive of success in post-secondary academic and occupational domains. Such a discrepancy provokes fundamental questions about intelligence. One highly debated question asks whether college success is actually dependent on currently used forms of measured intelligence, or if present measures of intelligence are inadequately sampling the wider domain of adult intellect. One possible answer to this question lies in questioning the preference for the psychological method over the pedagogical method for assessing adult intellect. Recent research across the fields of education, cognitive science, and adult development suggests that much of adult intellect is indeed not adequately sampled by extant intelligence measures and might be better assessed through the pedagogical method (Ackerman, 1996; Gregory, 1994). Several lines of research have also converged on a redefinition of adult intellect that places a greater emphasis on content (knowledge) over process. Substantial strides have been made in delineating knowledge aspects of intellectual performance which are divergent from traditional measures of intelligence (e.g., Wagner, 1987) and in demonstrating that adult performance is greatly influenced by prior topic and domain knowledge (e.g., Alexander et al., 1994). Even some older testing literature seems to indicate that the knowledge measured by the Graduate Record Examinations (GRE) is a comparable or better indicator of future graduate school success and post-graduate performance than traditional aptitude measures (Willingham, 1974). Knowledge and Intelligence When an adult is presented with a completely novel problem (e.g., memorizing a random set of numbers or letters), the basic intellectual processes are typically implicated in predicting which individuals will be successful in solving problems. The dilemma for adult intellectual assessment is that the adult is rarely presented with a completely novel problem in the real world of academic or occupational endeavors. Rather, the problems that an adult is asked to solve almost inevitably draw greatly on his/her accumulated knowledge and skills: one does not build a house by only memorizing physics formulae. 
For an adult, intellect is better conceptualized by the tasks that the person can accomplish and the skills that he/she has developed rather than the number of digits that can be stored in working memory or the number of syllogistic reasoning items that can be correctly evaluated. Thus, the content of the intellect is at least as important as the processes of intellect in determining an adult's real-world problem-solving efficacy. In the artificial intelligence field, researchers have discarded the idea of a useful general problem solver in favor of knowledge-based expert systems. This is because no amount of processing power can achieve real-world problem-solving proficiency without an extensive set of domain-relevant knowledge structures. Gregory (1994) describes the difference between such concepts as potential intelligence (knowledge) and kinetic intelligence (process). Similarly, Schank and Birnbaum (1994) say that what makes someone intelligent is what he [/she] knows. One line of relevant educational research comes from the examination of expert-novice differences, which indicates that the typical expert mainly differs from the novice in terms of experience and the knowledge structures developed through that experience, rather than in terms of intellectual processes (e.g., Glaser, 1991). Additional research from developmental and gerontological perspectives has also shown that various aspects of adult intellectual functioning are greatly determined by knowledge structures and less influenced by the kinds of process measures which have been shown to decline with age over adult development (e.g., Schooler, 1987; Willis & Tosti-Vasey, 1990). Shifting Paradigms By bringing together a variety of sources of research evidence, it becomes clear that our current methods of assessing adult intellect are insufficient. When we are confronted with situations in which the intellectual performance of adults must be predicted (e.g., continuing education or adult learning programs), we must begin to take account of what they know in addition to the traditional assessment of intellectual processes. Because adults are quite diverse in their knowledge structures (e.g., a physicist may know very different things from a carpenter), the challenge for educational assessment researchers in the future will be to develop batteries of tests that can be used to assess different sources of intellectual knowledge for different individuals. When adult knowledge structures are broadly examined with tests such as the Advanced Placement [AP] and College Level Exam Program [CLEP], it may be possible to improve such things as the prediction of adult performance in specific educational endeavors, the placement of individuals, and adult educational counseling.
Knowledge structures in adults decrease with age.
c
id_470
Adult Intelligence Over 90 years ago, Binet and Simon delineated two different methods of assessing intelligence. These were the psychological method (which concentrates mostly on intellectual processes, such as memory and abstract reasoning) and the pedagogical method (which concentrates on assessing what an individual knows). The main concern of Binet and Simon was to predict elementary school performance independently of the social and economic background of the individual student. As a result, they settled on the psychological method, and they spawned an intelligence assessment paradigm which has remained substantially unchanged since their original tests. With few exceptions, the development of adult intelligence assessment instruments proceeded along the same lines as the Binet-Simon tests. Nevertheless, the difficulty of items was increased for older examinees. Thus, extant adult intelligence tests were created as little more than upward extensions of the original Binet-Simon scales. The Binet-Simon tests are quite effective in predicting school success in both primary and secondary educational environments. However, they have been found to be much less predictive of success in post-secondary academic and occupational domains. Such a discrepancy provokes fundamental questions about intelligence. One highly debated question asks whether college success is actually dependent on currently used forms of measured intelligence, or if present measures of intelligence are inadequately sampling the wider domain of adult intellect. One possible answer to this question lies in questioning the preference for the psychological method over the pedagogical method for assessing adult intellect. Recent research across the fields of education, cognitive science, and adult development suggests that much of adult intellect is indeed not adequately sampled by extant intelligence measures and might be better assessed through the pedagogical method (Ackerman, 1996; Gregory, 1994). Several lines of research have also converged on a redefinition of adult intellect that places a greater emphasis on content (knowledge) over process. Substantial strides have been made in delineating knowledge aspects of intellectual performance which are divergent from traditional measures of intelligence (e.g., Wagner, 1987) and in demonstrating that adult performance is greatly influenced by prior topic and domain knowledge (e.g., Alexander et al., 1994). Even some older testing literature seems to indicate that the knowledge measured by the Graduate Record Examinations (GRE) is a comparable or better indicator of future graduate school success and post-graduate performance than traditional aptitude measures (Willingham, 1974). Knowledge and Intelligence When an adult is presented with a completely novel problem (e.g., memorizing a random set of numbers or letters), the basic intellectual processes are typically implicated in predicting which individuals will be successful in solving problems. The dilemma for adult intellectual assessment is that the adult is rarely presented with a completely novel problem in the real world of academic or occupational endeavors. Rather, the problems that an adult is asked to solve almost inevitably draw greatly on his/her accumulated knowledge and skills: one does not build a house by only memorizing physics formulae. 
For an adult, intellect is better conceptualized by the tasks that the person can accomplish and the skills that he/she has developed rather than the number of digits that can be stored in working memory or the number of syllogistic reasoning items that can be correctly evaluated. Thus, the content of the intellect is at least as important as the processes of intellect in determining an adult's real-world problem-solving efficacy. In the artificial intelligence field, researchers have discarded the idea of a useful general problem solver in favor of knowledge-based expert systems. This is because no amount of processing power can achieve real-world problem-solving proficiency without an extensive set of domain-relevant knowledge structures. Gregory (1994) describes the difference between such concepts as potential intelligence (knowledge) and kinetic intelligence (process). Similarly, Schank and Birnbaum (1994) say that what makes someone intelligent is what he [/she] knows. One line of relevant educational research comes from the examination of expert-novice differences, which indicates that the typical expert mainly differs from the novice in terms of experience and the knowledge structures developed through that experience, rather than in terms of intellectual processes (e.g., Glaser, 1991). Additional research from developmental and gerontological perspectives has also shown that various aspects of adult intellectual functioning are greatly determined by knowledge structures and less influenced by the kinds of process measures which have been shown to decline with age over adult development (e.g., Schooler, 1987; Willis & Tosti-Vasey, 1990). Shifting Paradigms By bringing together a variety of sources of research evidence, it becomes clear that our current methods of assessing adult intellect are insufficient. When we are confronted with situations in which the intellectual performance of adults must be predicted (e.g., continuing education or adult learning programs), we must begin to take account of what they know in addition to the traditional assessment of intellectual processes. Because adults are quite diverse in their knowledge structures (e.g., a physicist may know very different things from a carpenter), the challenge for educational assessment researchers in the future will be to develop batteries of tests that can be used to assess different sources of intellectual knowledge for different individuals. When adult knowledge structures are broadly examined with tests such as the Advanced Placement [AP] and College Level Exam Program [CLEP], it may be possible to improve such things as the prediction of adult performance in specific educational endeavors, the placement of individuals, and adult educational counseling.
Research suggests that experts generally have more developed intellectual processes than novices.
c
id_471
Adult Intelligence Over 90 years ago, Binet and Simon delineated two different methods of assessing intelligence. These were the psychological method (which concentrates mostly on intellectual processes, such as memory and abstract reasoning) and the pedagogical method (which concentrates on assessing what an individual knows). The main concern of Binet and Simon was to predict elementary school performance independently from the social and economic background of the individual student. As a result, they settled on the psychological method, and they spawned an intelligence assessment paradigm which has remained substantially unchanged from their original tests. With few exceptions, the development of adult intelligence assessment instruments proceeded along the same lines as the Binet-Simon tests, though the difficulty of items was increased for older examinees. Thus, extant adult intelligence tests were created as little more than upward extensions of the original Binet-Simon scales. The Binet-Simon tests are quite effective in predicting school success in both primary and secondary educational environments. However, they have been found to be much less predictive of success in post-secondary academic and occupational domains. Such a discrepancy provokes fundamental questions about intelligence. One highly debated question asks whether college success is actually dependent on currently used forms of measured intelligence, or if present measures of intelligence are inadequately sampling the wider domain of adult intellect. One possible answer to this question lies in questioning the preference of the psychological method over the pedagogical method for assessing adult intellect. Recent research across the fields of education, cognitive science, and adult development suggests that much of adult intellect is indeed not adequately sampled by extant intelligence measures and might be better assessed through the pedagogical method (Ackerman, 1996; Gregory, 1994). Several lines of research have also converged on a redefinition of adult intellect that places a greater emphasis on content (knowledge) over process. Substantial strides have been made in delineating knowledge aspects of intellectual performance which are divergent from traditional measures of intelligence (e.g., Wagner, 1987) and in demonstrating that adult performance is greatly influenced by prior topic and domain knowledge (e.g., Alexander et al., 1994). Even some older testing literature seems to indicate that the knowledge measured by the Graduate Record Examination (GRE) is a comparable or better indicator of future graduate school success and post-graduate performance than traditional aptitude measures (Willingham, 1974). Knowledge and Intelligence When an adult is presented with a completely novel problem (e.g., memorizing a random set of numbers or letters), the basic intellectual processes are typically implicated in predicting which individuals will be successful in solving problems. The dilemma for adult intellectual assessment is that the adult is rarely presented with a completely novel problem in the real world of academic or occupational endeavors. Rather, the problems that an adult is asked to solve almost inevitably draw greatly on his/her accumulated knowledge and skills: one does not build a house by only memorizing physics formulae. For an adult, intellect is better conceptualized by the tasks that the person can accomplish and the skills that he/she has developed rather than by the number of digits that can be stored in working memory or the number of syllogistic reasoning items that can be correctly evaluated. Thus, the content of the intellect is at least as important as the processes of intellect in determining an adult's real-world problem-solving efficacy. In the artificial intelligence field, researchers have discarded the idea of a useful general problem solver in favor of knowledge-based expert systems, because no amount of processing power can achieve real-world problem-solving proficiency without an extensive set of domain-relevant knowledge structures. Gregory (1994) describes the difference between such concepts as potential intelligence (knowledge) and kinetic intelligence (process). Similarly, Schank and Birnbaum (1994) say that 'what makes someone intelligent is what he [/she] knows'. One line of relevant educational research is the examination of expert-novice differences, which indicates that the typical expert differs from the novice mainly in terms of experience and the knowledge structures developed through that experience rather than in terms of intellectual processes (e.g., Glaser, 1991). Additional research from developmental and gerontological perspectives has also shown that various aspects of adult intellectual functioning are greatly determined by knowledge structures and less influenced by the kinds of process measures which have been shown to decline with age over adult development (e.g., Schooler, 1987; Willis & Tosti-Vasey, 1990). Shifting Paradigms When a variety of sources of research evidence is brought together, it is clear that our current methods of assessing adult intellect are insufficient. When we are confronted with situations in which the intellectual performance of adults must be predicted (e.g., continuing education or adult learning programs), we must begin to take account of what they know in addition to the traditional assessment of intellectual processes. Because adults are quite diverse in their knowledge structures (e.g., a physicist may know many different things than a carpenter does), the challenge for educational assessment researchers in the future will be to develop batteries of tests that can be used to assess different sources of intellectual knowledge for different individuals. When adult knowledge structures are broadly examined with tests such as the Advanced Placement (AP) and College Level Exam Program (CLEP), it may be possible to improve such things as the prediction of adult performance in specific educational endeavors, the placement of individuals, and adult educational counseling.
Success in elementary school is a predictor of success in college.
c
id_472
Adult Intelligence Over 90 years ago, Binet and Simon delineated two different methods of assessing intelligence. These were the psychological method (which concentrates mostly on intellectual processes, such as memory and abstract reasoning) and the pedagogical method (which concentrates on assessing what an individual knows). The main concern of Binet and Simon was to predict elementary school performance independently from the social and economic background of the individual student. As a result, they settled on the psychological method, and they spawned an intelligence assessment paradigm which has remained substantially unchanged from their original tests. With few exceptions, the development of adult intelligence assessment instruments proceeded along the same lines as the Binet-Simon tests, though the difficulty of items was increased for older examinees. Thus, extant adult intelligence tests were created as little more than upward extensions of the original Binet-Simon scales. The Binet-Simon tests are quite effective in predicting school success in both primary and secondary educational environments. However, they have been found to be much less predictive of success in post-secondary academic and occupational domains. Such a discrepancy provokes fundamental questions about intelligence. One highly debated question asks whether college success is actually dependent on currently used forms of measured intelligence, or if present measures of intelligence are inadequately sampling the wider domain of adult intellect. One possible answer to this question lies in questioning the preference of the psychological method over the pedagogical method for assessing adult intellect. Recent research across the fields of education, cognitive science, and adult development suggests that much of adult intellect is indeed not adequately sampled by extant intelligence measures and might be better assessed through the pedagogical method (Ackerman, 1996; Gregory, 1994). Several lines of research have also converged on a redefinition of adult intellect that places a greater emphasis on content (knowledge) over process. Substantial strides have been made in delineating knowledge aspects of intellectual performance which are divergent from traditional measures of intelligence (e.g., Wagner, 1987) and in demonstrating that adult performance is greatly influenced by prior topic and domain knowledge (e.g., Alexander et al., 1994). Even some older testing literature seems to indicate that the knowledge measured by the Graduate Record Examination (GRE) is a comparable or better indicator of future graduate school success and post-graduate performance than traditional aptitude measures (Willingham, 1974). Knowledge and Intelligence When an adult is presented with a completely novel problem (e.g., memorizing a random set of numbers or letters), the basic intellectual processes are typically implicated in predicting which individuals will be successful in solving problems. The dilemma for adult intellectual assessment is that the adult is rarely presented with a completely novel problem in the real world of academic or occupational endeavors. Rather, the problems that an adult is asked to solve almost inevitably draw greatly on his/her accumulated knowledge and skills: one does not build a house by only memorizing physics formulae. For an adult, intellect is better conceptualized by the tasks that the person can accomplish and the skills that he/she has developed rather than by the number of digits that can be stored in working memory or the number of syllogistic reasoning items that can be correctly evaluated. Thus, the content of the intellect is at least as important as the processes of intellect in determining an adult's real-world problem-solving efficacy. In the artificial intelligence field, researchers have discarded the idea of a useful general problem solver in favor of knowledge-based expert systems, because no amount of processing power can achieve real-world problem-solving proficiency without an extensive set of domain-relevant knowledge structures. Gregory (1994) describes the difference between such concepts as potential intelligence (knowledge) and kinetic intelligence (process). Similarly, Schank and Birnbaum (1994) say that 'what makes someone intelligent is what he [/she] knows'. One line of relevant educational research is the examination of expert-novice differences, which indicates that the typical expert differs from the novice mainly in terms of experience and the knowledge structures developed through that experience rather than in terms of intellectual processes (e.g., Glaser, 1991). Additional research from developmental and gerontological perspectives has also shown that various aspects of adult intellectual functioning are greatly determined by knowledge structures and less influenced by the kinds of process measures which have been shown to decline with age over adult development (e.g., Schooler, 1987; Willis & Tosti-Vasey, 1990). Shifting Paradigms When a variety of sources of research evidence is brought together, it is clear that our current methods of assessing adult intellect are insufficient. When we are confronted with situations in which the intellectual performance of adults must be predicted (e.g., continuing education or adult learning programs), we must begin to take account of what they know in addition to the traditional assessment of intellectual processes. Because adults are quite diverse in their knowledge structures (e.g., a physicist may know many different things than a carpenter does), the challenge for educational assessment researchers in the future will be to develop batteries of tests that can be used to assess different sources of intellectual knowledge for different individuals. When adult knowledge structures are broadly examined with tests such as the Advanced Placement (AP) and College Level Exam Program (CLEP), it may be possible to improve such things as the prediction of adult performance in specific educational endeavors, the placement of individuals, and adult educational counseling.
The Binet-Simon tests have not changed significantly over the years.
e
id_473
Adults and children are frequently confronted with statements about the alarming rate of loss of tropical rainforests. For example, one graphic illustration to which children might readily relate is the estimate that rainforests are being destroyed at a rate equivalent to one thousand football fields every forty minutes - about the duration of a normal classroom period. In the face of the frequent and often vivid media coverage, it is likely that children will have formed ideas about rainforests - what and where they are, why they are important, what endangers them - independent of any formal tuition. It is also possible that some of these ideas will be mistaken. Many studies have shown that children harbour misconceptions about pure, curriculum science. These misconceptions do not remain isolated but become incorporated into a multifaceted, but organised, conceptual framework, making it and the component ideas, some of which are erroneous, more robust but also accessible to modification. These ideas may be developed by children absorbing information through the popular media, and sometimes this information may be erroneous. It seems schools may not be providing an opportunity for children to re-express their ideas and so have them tested and refined by teachers and their peers. Despite the extensive coverage in the popular media of the destruction of rainforests, little formal information is available about children's ideas in this area. The aim of the present study is to start to provide such information, to help teachers design their educational strategies to build upon correct ideas and to displace misconceptions, and to plan programmes in environmental studies in their schools. The study surveys children's scientific knowledge of and attitudes to rainforests. Secondary school children were asked to complete a questionnaire containing five open-form questions. The most frequent responses to the first question were descriptions which are self-evident from the term 'rainforest'. Some children described them as damp, wet or hot. The second question concerned the geographical location of rainforests. The commonest responses were continents or countries: Africa (given by 43% of children), South America (30%), Brazil (25%). Some children also gave more general locations, such as being near the Equator. Responses to question three concerned the importance of rainforests. The dominant idea, raised by 64% of the pupils, was that rainforests provide animals with habitats. Fewer students responded that rainforests provide plant habitats, and even fewer mentioned the indigenous populations of rainforests. More girls (70%) than boys (60%) raised the idea of rainforests as animal habitats. Similarly, but at a lower level, more girls (13%) than boys (5%) said that rainforests provided human habitats. These observations are generally consistent with our previous studies of pupils' views about the use and conservation of rainforests, in which girls were shown to be more sympathetic to animals and expressed views which seem to place an intrinsic value on non-human animal life. The fourth question concerned the causes of the destruction of rainforests. Perhaps encouragingly, more than half of the pupils (59%) identified that it is human activities which are destroying rainforests, some personalising the responsibility by the use of terms such as 'we are'. About 18% of the pupils referred specifically to logging activity. One misconception, expressed by some 10% of the pupils, was that acid rain is responsible for rainforest destruction; a similar proportion said that pollution is destroying rainforests. Here, children are confusing rainforest destruction with damage to the forests of Western Europe by these factors. While two-fifths of the students provided the information that the rainforests provide oxygen, in some cases this response also embraced the misconception that rainforest destruction would reduce atmospheric oxygen, making the atmosphere incompatible with human life on Earth. In answer to the final question about the importance of rainforest conservation, the majority of children simply said that 'we need rainforests to survive'. Only a few of the pupils (6%) mentioned that rainforest destruction may contribute to global warming. This is surprising considering the high level of media coverage on this issue. Some children expressed the idea that the conservation of rainforests is not important. The results of this study suggest that certain ideas predominate in the thinking of children about rainforests. Pupils' responses indicate some misconceptions in basic scientific knowledge of rainforest ecosystems, such as their ideas about rainforests as habitats for animals, plants and humans and the relationship between climatic change and the destruction of rainforests. Pupils did not volunteer ideas that suggested that they appreciated the complexity of the causes of rainforest destruction. In other words, they gave no indication of an appreciation of either the range of ways in which rainforests are important or the complex social, economic and political factors which drive the activities that are destroying the rainforests. One encouragement is that the results of similar studies about other environmental issues suggest that older children seem to acquire the ability to appreciate, value and evaluate conflicting views. Environmental education offers an arena in which these skills can be developed, which is essential for these children as future decision-makers.
The plight of the rainforests has largely been ignored by the media.
c
id_474
Adults and children are frequently confronted with statements about the alarming rate of loss of tropical rainforests. For example, one graphic illustration to which children might readily relate is the estimate that rainforests are being destroyed at a rate equivalent to one thousand football fields every forty minutes - about the duration of a normal classroom period. In the face of the frequent and often vivid media coverage, it is likely that children will have formed ideas about rainforests - what and where they are, why they are important, what endangers them - independent of any formal tuition. It is also possible that some of these ideas will be mistaken. Many studies have shown that children harbour misconceptions about pure, curriculum science. These misconceptions do not remain isolated but become incorporated into a multifaceted, but organised, conceptual framework, making it and the component ideas, some of which are erroneous, more robust but also accessible to modification. These ideas may be developed by children absorbing information through the popular media, and sometimes this information may be erroneous. It seems schools may not be providing an opportunity for children to re-express their ideas and so have them tested and refined by teachers and their peers. Despite the extensive coverage in the popular media of the destruction of rainforests, little formal information is available about children's ideas in this area. The aim of the present study is to start to provide such information, to help teachers design their educational strategies to build upon correct ideas and to displace misconceptions, and to plan programmes in environmental studies in their schools. The study surveys children's scientific knowledge of and attitudes to rainforests. Secondary school children were asked to complete a questionnaire containing five open-form questions. The most frequent responses to the first question were descriptions which are self-evident from the term 'rainforest'. Some children described them as damp, wet or hot. The second question concerned the geographical location of rainforests. The commonest responses were continents or countries: Africa (given by 43% of children), South America (30%), Brazil (25%). Some children also gave more general locations, such as being near the Equator. Responses to question three concerned the importance of rainforests. The dominant idea, raised by 64% of the pupils, was that rainforests provide animals with habitats. Fewer students responded that rainforests provide plant habitats, and even fewer mentioned the indigenous populations of rainforests. More girls (70%) than boys (60%) raised the idea of rainforests as animal habitats. Similarly, but at a lower level, more girls (13%) than boys (5%) said that rainforests provided human habitats. These observations are generally consistent with our previous studies of pupils' views about the use and conservation of rainforests, in which girls were shown to be more sympathetic to animals and expressed views which seem to place an intrinsic value on non-human animal life. The fourth question concerned the causes of the destruction of rainforests. Perhaps encouragingly, more than half of the pupils (59%) identified that it is human activities which are destroying rainforests, some personalising the responsibility by the use of terms such as 'we are'. About 18% of the pupils referred specifically to logging activity. One misconception, expressed by some 10% of the pupils, was that acid rain is responsible for rainforest destruction; a similar proportion said that pollution is destroying rainforests. Here, children are confusing rainforest destruction with damage to the forests of Western Europe by these factors. While two-fifths of the students provided the information that the rainforests provide oxygen, in some cases this response also embraced the misconception that rainforest destruction would reduce atmospheric oxygen, making the atmosphere incompatible with human life on Earth. In answer to the final question about the importance of rainforest conservation, the majority of children simply said that 'we need rainforests to survive'. Only a few of the pupils (6%) mentioned that rainforest destruction may contribute to global warming. This is surprising considering the high level of media coverage on this issue. Some children expressed the idea that the conservation of rainforests is not important. The results of this study suggest that certain ideas predominate in the thinking of children about rainforests. Pupils' responses indicate some misconceptions in basic scientific knowledge of rainforest ecosystems, such as their ideas about rainforests as habitats for animals, plants and humans and the relationship between climatic change and the destruction of rainforests. Pupils did not volunteer ideas that suggested that they appreciated the complexity of the causes of rainforest destruction. In other words, they gave no indication of an appreciation of either the range of ways in which rainforests are important or the complex social, economic and political factors which drive the activities that are destroying the rainforests. One encouragement is that the results of similar studies about other environmental issues suggest that older children seem to acquire the ability to appreciate, value and evaluate conflicting views. Environmental education offers an arena in which these skills can be developed, which is essential for these children as future decision-makers.
The fact that children's ideas about science form part of a larger framework of ideas means that it is easier to change them.
e
id_475
Adults and children are frequently confronted with statements about the alarming rate of loss of tropical rainforests. For example, one graphic illustration to which children might readily relate is the estimate that rainforests are being destroyed at a rate equivalent to one thousand football fields every forty minutes - about the duration of a normal classroom period. In the face of the frequent and often vivid media coverage, it is likely that children will have formed ideas about rainforests - what and where they are, why they are important, what endangers them - independent of any formal tuition. It is also possible that some of these ideas will be mistaken. Many studies have shown that children harbour misconceptions about pure, curriculum science. These misconceptions do not remain isolated but become incorporated into a multifaceted, but organised, conceptual framework, making it and the component ideas, some of which are erroneous, more robust but also accessible to modification. These ideas may be developed by children absorbing information through the popular media, and sometimes this information may be erroneous. It seems schools may not be providing an opportunity for children to re-express their ideas and so have them tested and refined by teachers and their peers. Despite the extensive coverage in the popular media of the destruction of rainforests, little formal information is available about children's ideas in this area. The aim of the present study is to start to provide such information, to help teachers design their educational strategies to build upon correct ideas and to displace misconceptions, and to plan programmes in environmental studies in their schools. The study surveys children's scientific knowledge of and attitudes to rainforests. Secondary school children were asked to complete a questionnaire containing five open-form questions. The most frequent responses to the first question were descriptions which are self-evident from the term 'rainforest'. Some children described them as damp, wet or hot. The second question concerned the geographical location of rainforests. The commonest responses were continents or countries: Africa (given by 43% of children), South America (30%), Brazil (25%). Some children also gave more general locations, such as being near the Equator. Responses to question three concerned the importance of rainforests. The dominant idea, raised by 64% of the pupils, was that rainforests provide animals with habitats. Fewer students responded that rainforests provide plant habitats, and even fewer mentioned the indigenous populations of rainforests. More girls (70%) than boys (60%) raised the idea of rainforests as animal habitats. Similarly, but at a lower level, more girls (13%) than boys (5%) said that rainforests provided human habitats. These observations are generally consistent with our previous studies of pupils' views about the use and conservation of rainforests, in which girls were shown to be more sympathetic to animals and expressed views which seem to place an intrinsic value on non-human animal life. The fourth question concerned the causes of the destruction of rainforests. Perhaps encouragingly, more than half of the pupils (59%) identified that it is human activities which are destroying rainforests, some personalising the responsibility by the use of terms such as 'we are'. About 18% of the pupils referred specifically to logging activity. One misconception, expressed by some 10% of the pupils, was that acid rain is responsible for rainforest destruction; a similar proportion said that pollution is destroying rainforests. Here, children are confusing rainforest destruction with damage to the forests of Western Europe by these factors. While two-fifths of the students provided the information that the rainforests provide oxygen, in some cases this response also embraced the misconception that rainforest destruction would reduce atmospheric oxygen, making the atmosphere incompatible with human life on Earth. In answer to the final question about the importance of rainforest conservation, the majority of children simply said that 'we need rainforests to survive'. Only a few of the pupils (6%) mentioned that rainforest destruction may contribute to global warming. This is surprising considering the high level of media coverage on this issue. Some children expressed the idea that the conservation of rainforests is not important. The results of this study suggest that certain ideas predominate in the thinking of children about rainforests. Pupils' responses indicate some misconceptions in basic scientific knowledge of rainforest ecosystems, such as their ideas about rainforests as habitats for animals, plants and humans and the relationship between climatic change and the destruction of rainforests. Pupils did not volunteer ideas that suggested that they appreciated the complexity of the causes of rainforest destruction. In other words, they gave no indication of an appreciation of either the range of ways in which rainforests are important or the complex social, economic and political factors which drive the activities that are destroying the rainforests. One encouragement is that the results of similar studies about other environmental issues suggest that older children seem to acquire the ability to appreciate, value and evaluate conflicting views. Environmental education offers an arena in which these skills can be developed, which is essential for these children as future decision-makers.
It has been suggested that children hold mistaken views about the pure science that they study at school.
e
id_476
Adults and children are frequently confronted with statements about the alarming rate of loss of tropical rainforests. For example, one graphic illustration to which children might readily relate is the estimate that rainforests are being destroyed at a rate equivalent to one thousand football fields every forty minutes - about the duration of a normal classroom period. In the face of the frequent and often vivid media coverage, it is likely that children will have formed ideas about rainforests - what and where they are, why they are important, what endangers them - independent of any formal tuition. It is also possible that some of these ideas will be mistaken. Many studies have shown that children harbour misconceptions about pure, curriculum science. These misconceptions do not remain isolated but become incorporated into a multifaceted, but organised, conceptual framework, making it and the component ideas, some of which are erroneous, more robust but also accessible to modification. These ideas may be developed by children absorbing information through the popular media, and sometimes this information may be erroneous. It seems schools may not be providing an opportunity for children to re-express their ideas and so have them tested and refined by teachers and their peers. Despite the extensive coverage in the popular media of the destruction of rainforests, little formal information is available about children's ideas in this area. The aim of the present study is to start to provide such information, to help teachers design their educational strategies to build upon correct ideas and to displace misconceptions, and to plan programmes in environmental studies in their schools. The study surveys children's scientific knowledge of and attitudes to rainforests. Secondary school children were asked to complete a questionnaire containing five open-form questions. The most frequent responses to the first question were descriptions which are self-evident from the term 'rainforest'. Some children described them as damp, wet or hot. The second question concerned the geographical location of rainforests. The commonest responses were continents or countries: Africa (given by 43% of children), South America (30%), Brazil (25%). Some children also gave more general locations, such as being near the Equator. Responses to question three concerned the importance of rainforests. The dominant idea, raised by 64% of the pupils, was that rainforests provide animals with habitats. Fewer students responded that rainforests provide plant habitats, and even fewer mentioned the indigenous populations of rainforests. More girls (70%) than boys (60%) raised the idea of rainforests as animal habitats. Similarly, but at a lower level, more girls (13%) than boys (5%) said that rainforests provided human habitats. These observations are generally consistent with our previous studies of pupils' views about the use and conservation of rainforests, in which girls were shown to be more sympathetic to animals and expressed views which seem to place an intrinsic value on non-human animal life. The fourth question concerned the causes of the destruction of rainforests. Perhaps encouragingly, more than half of the pupils (59%) identified that it is human activities which are destroying rainforests, some personalising the responsibility by the use of terms such as 'we are'. About 18% of the pupils referred specifically to logging activity. One misconception, expressed by some 10% of the pupils, was that acid rain is responsible for rainforest destruction; a similar proportion said that pollution is destroying rainforests. Here, children are confusing rainforest destruction with damage to the forests of Western Europe by these factors. While two-fifths of the students provided the information that the rainforests provide oxygen, in some cases this response also embraced the misconception that rainforest destruction would reduce atmospheric oxygen, making the atmosphere incompatible with human life on Earth. In answer to the final question about the importance of rainforest conservation, the majority of children simply said that 'we need rainforests to survive'. Only a few of the pupils (6%) mentioned that rainforest destruction may contribute to global warming. This is surprising considering the high level of media coverage on this issue. Some children expressed the idea that the conservation of rainforests is not important. The results of this study suggest that certain ideas predominate in the thinking of children about rainforests. Pupils' responses indicate some misconceptions in basic scientific knowledge of rainforest ecosystems, such as their ideas about rainforests as habitats for animals, plants and humans and the relationship between climatic change and the destruction of rainforests. Pupils did not volunteer ideas that suggested that they appreciated the complexity of the causes of rainforest destruction. In other words, they gave no indication of an appreciation of either the range of ways in which rainforests are important or the complex social, economic and political factors which drive the activities that are destroying the rainforests. One encouragement is that the results of similar studies about other environmental issues suggest that older children seem to acquire the ability to appreciate, value and evaluate conflicting views. Environmental education offers an arena in which these skills can be developed, which is essential for these children as future decision-makers.
Children only accept opinions on rainforests that they encounter in their classrooms.
c
id_477
Adults and children are frequently confronted with statements about the alarming rate of loss of tropical rainforests. For example, one graphic illustration to which children might readily relate is the estimate that rainforests are being destroyed at a rate equivalent to one thousand football fields every forty minutes - about the duration of a normal classroom period. In the face of the frequent and often vivid media coverage, it is likely that children will have formed ideas about rainforests - what and where they are, why they are important, what endangers them - independent of any formal tuition. It is also possible that some of these ideas will be mistaken. Many studies have shown that children harbour misconceptions about pure, curriculum science. These misconceptions do not remain isolated but become incorporated into a multifaceted, but organised, conceptual framework, making it and the component ideas, some of which are erroneous, more robust but also accessible to modification. These ideas may be developed by children absorbing information through the popular media, and sometimes this information may be erroneous. It seems schools may not be providing an opportunity for children to re-express their ideas and so have them tested and refined by teachers and their peers. Despite the extensive coverage in the popular media of the destruction of rainforests, little formal information is available about children's ideas in this area. The aim of the present study is to start to provide such information, to help teachers design their educational strategies to build upon correct ideas and to displace misconceptions, and to plan programmes in environmental studies in their schools. The study surveys children's scientific knowledge of and attitudes to rainforests. Secondary school children were asked to complete a questionnaire containing five open-form questions. The most frequent responses to the first question were descriptions which are self-evident from the term 'rainforest'. Some children described them as damp, wet or hot. The second question concerned the geographical location of rainforests. The commonest responses were continents or countries: Africa (given by 43% of children), South America (30%), Brazil (25%). Some children also gave more general locations, such as being near the Equator. Responses to question three concerned the importance of rainforests. The dominant idea, raised by 64% of the pupils, was that rainforests provide animals with habitats. Fewer students responded that rainforests provide plant habitats, and even fewer mentioned the indigenous populations of rainforests. More girls (70%) than boys (60%) raised the idea of rainforests as animal habitats. Similarly, but at a lower level, more girls (13%) than boys (5%) said that rainforests provided human habitats. These observations are generally consistent with our previous studies of pupils' views about the use and conservation of rainforests, in which girls were shown to be more sympathetic to animals and expressed views which seem to place an intrinsic value on non-human animal life. The fourth question concerned the causes of the destruction of rainforests. Perhaps encouragingly, more than half of the pupils (59%) identified that it is human activities which are destroying rainforests, some personalising the responsibility by the use of terms such as 'we are'. About 18% of the pupils referred specifically to logging activity. One misconception, expressed by some 10% of the pupils, was that acid rain is responsible for rainforest destruction; a similar proportion said that pollution is destroying rainforests. Here, children are confusing rainforest destruction with damage to the forests of Western Europe by these factors. While two-fifths of the students provided the information that the rainforests provide oxygen, in some cases this response also embraced the misconception that rainforest destruction would reduce atmospheric oxygen, making the atmosphere incompatible with human life on Earth. In answer to the final question about the importance of rainforest conservation, the majority of children simply said that 'we need rainforests to survive'. Only a few of the pupils (6%) mentioned that rainforest destruction may contribute to global warming. This is surprising considering the high level of media coverage on this issue. Some children expressed the idea that the conservation of rainforests is not important. The results of this study suggest that certain ideas predominate in the thinking of children about rainforests. Pupils' responses indicate some misconceptions in basic scientific knowledge of rainforest ecosystems, such as their ideas about rainforests as habitats for animals, plants and humans and the relationship between climatic change and the destruction of rainforests. Pupils did not volunteer ideas that suggested that they appreciated the complexity of the causes of rainforest destruction. In other words, they gave no indication of an appreciation of either the range of ways in which rainforests are important or the complex social, economic and political factors which drive the activities that are destroying the rainforests. One encouragement is that the results of similar studies about other environmental issues suggest that older children seem to acquire the ability to appreciate, value and evaluate conflicting views. Environmental education offers an arena in which these skills can be developed, which is essential for these children as future decision-makers.
Girls are more likely than boys to hold mistaken views about the rainforests' destruction.
n
id_478
Adults and children are frequently confronted with statements about the alarming rate of loss of tropical rainforests. For example, one graphic illustration to which children might readily relate is the estimate that rainforests are being destroyed at a rate equivalent to one thousand football fields every forty minutes - about the duration of a normal classroom period. In the face of the frequent and often vivid media coverage, it is likely that children will have formed ideas about rainforests - what and where they are, why they are important, what endangers them - independent of any formal tuition. It is also possible that some of these ideas will be mistaken. Many studies have shown that children harbour misconceptions about pure, curriculum science. These misconceptions do not remain isolated but become incorporated into a multifaceted, but organised, conceptual framework, making it and the component ideas, some of which are erroneous, more robust but also accessible to modification. These ideas may be developed by children absorbing information through the popular media, and sometimes this information may be erroneous. It seems schools may not be providing an opportunity for children to re-express their ideas and so have them tested and refined by teachers and their peers. Despite the extensive coverage in the popular media of the destruction of rainforests, little formal information is available about children's ideas in this area. The aim of the present study is to start to provide such information, to help teachers design their educational strategies to build upon correct ideas and to displace misconceptions, and to plan programmes in environmental studies in their schools. The study surveys children's scientific knowledge of and attitudes to rainforests. Secondary school children were asked to complete a questionnaire containing five open-form questions. The most frequent responses to the first question were descriptions which are self-evident from the term 'rainforest'. Some children described them as damp, wet or hot. The second question concerned the geographical location of rainforests. The commonest responses were continents or countries: Africa (given by 43% of children), South America (30%), Brazil (25%). Some children also gave more general locations, such as being near the Equator. Responses to question three concerned the importance of rainforests. The dominant idea, raised by 64% of the pupils, was that rainforests provide animals with habitats. Fewer students responded that rainforests provide plant habitats, and even fewer mentioned the indigenous populations of rainforests. More girls (70%) than boys (60%) raised the idea of rainforests as animal habitats. Similarly, but at a lower level, more girls (13%) than boys (5%) said that rainforests provided human habitats. These observations are generally consistent with our previous studies of pupils' views about the use and conservation of rainforests, in which girls were shown to be more sympathetic to animals and expressed views which seem to place an intrinsic value on non-human animal life. The fourth question concerned the causes of the destruction of rainforests. Perhaps encouragingly, more than half of the pupils (59%) identified that it is human activities which are destroying rainforests, some personalising the responsibility by the use of terms such as 'we are'. About 18% of the pupils referred specifically to logging activity. One misconception, expressed by some 10% of the pupils, was that acid rain is responsible for rainforest destruction; a similar proportion said that pollution is destroying rainforests. Here, children are confusing rainforest destruction with damage to the forests of Western Europe by these factors. While two-fifths of the students provided the information that the rainforests provide oxygen, in some cases this response also embraced the misconception that rainforest destruction would reduce atmospheric oxygen, making the atmosphere incompatible with human life on Earth. In answer to the final question about the importance of rainforest conservation, the majority of children simply said that 'we need rainforests to survive'. Only a few of the pupils (6%) mentioned that rainforest destruction may contribute to global warming. This is surprising considering the high level of media coverage on this issue. Some children expressed the idea that the conservation of rainforests is not important. The results of this study suggest that certain ideas predominate in the thinking of children about rainforests. Pupils' responses indicate some misconceptions in basic scientific knowledge of rainforest ecosystems, such as their ideas about rainforests as habitats for animals, plants and humans and the relationship between climatic change and the destruction of rainforests. Pupils did not volunteer ideas that suggested that they appreciated the complexity of the causes of rainforest destruction. In other words, they gave no indication of an appreciation of either the range of ways in which rainforests are important or the complex social, economic and political factors which drive the activities that are destroying the rainforests. One encouragement is that the results of similar studies about other environmental issues suggest that older children seem to acquire the ability to appreciate, value and evaluate conflicting views. Environmental education offers an arena in which these skills can be developed, which is essential for these children as future decision-makers.
The study reported here follows on from a series of studies that have looked at children's understanding of rainforests.
e
id_479
Adults and children are frequently confronted with statements about the alarming rate of loss of tropical rainforests. For example, one graphic illustration to which children might readily relate is the estimate that rainforests are being destroyed at a rate equivalent to one thousand football fields every forty minutes - about the duration of a normal classroom period. In the face of the frequent and often vivid media coverage, it is likely that children will have formed ideas about rainforests - what and where they are, why they are important, what endangers them - independent of any formal tuition. It is also possible that some of these ideas will be mistaken. Many studies have shown that children harbour misconceptions about pure, curriculum science. These misconceptions do not remain isolated but become incorporated into a multifaceted, but organised, conceptual framework, making it and the component ideas, some of which are erroneous, more robust but also accessible to modification. These ideas may be developed by children absorbing information through the popular media, and sometimes this information may be erroneous. It seems schools may not be providing an opportunity for children to re-express their ideas and so have them tested and refined by teachers and their peers. Despite the extensive coverage in the popular media of the destruction of rainforests, little formal information is available about children's ideas in this area. The aim of the present study is to start to provide such information, to help teachers design their educational strategies to build upon correct ideas and to displace misconceptions, and to plan programmes in environmental studies in their schools. The study surveys children's scientific knowledge of and attitudes to rainforests. Secondary school children were asked to complete a questionnaire containing five open-form questions. The most frequent responses to the first question were descriptions which are self-evident from the term 'rainforest'. Some children described them as damp, wet or hot. The second question concerned the geographical location of rainforests. The commonest responses were continents or countries: Africa (given by 43% of children), South America (30%), Brazil (25%). Some children also gave more general locations, such as being near the Equator. Responses to question three concerned the importance of rainforests. The dominant idea, raised by 64% of the pupils, was that rainforests provide animals with habitats. Fewer students responded that rainforests provide plant habitats, and even fewer mentioned the indigenous populations of rainforests. More girls (70%) than boys (60%) raised the idea of rainforests as animal habitats. Similarly, but at a lower level, more girls (13%) than boys (5%) said that rainforests provided human habitats. These observations are generally consistent with our previous studies of pupils' views about the use and conservation of rainforests, in which girls were shown to be more sympathetic to animals and expressed views which seem to place an intrinsic value on non-human animal life. The fourth question concerned the causes of the destruction of rainforests. Perhaps encouragingly, more than half of the pupils (59%) identified that it is human activities which are destroying rainforests, some personalising the responsibility by the use of terms such as 'we are'. About 18% of the pupils referred specifically to logging activity. One misconception, expressed by some 10% of the pupils, was that acid rain is responsible for rainforest destruction; a similar proportion said that pollution is destroying rainforests. Here, children are confusing rainforest destruction with damage to the forests of Western Europe by these factors. While two-fifths of the students provided the information that the rainforests provide oxygen, in some cases this response also embraced the misconception that rainforest destruction would reduce atmospheric oxygen, making the atmosphere incompatible with human life on Earth. In answer to the final question about the importance of rainforest conservation, the majority of children simply said that 'we need rainforests to survive'. Only a few of the pupils (6%) mentioned that rainforest destruction may contribute to global warming. This is surprising considering the high level of media coverage on this issue. Some children expressed the idea that the conservation of rainforests is not important. The results of this study suggest that certain ideas predominate in the thinking of children about rainforests. Pupils' responses indicate some misconceptions in basic scientific knowledge of rainforest ecosystems, such as their ideas about rainforests as habitats for animals, plants and humans and the relationship between climatic change and the destruction of rainforests. Pupils did not volunteer ideas that suggested that they appreciated the complexity of the causes of rainforest destruction. In other words, they gave no indication of an appreciation of either the range of ways in which rainforests are important or the complex social, economic and political factors which drive the activities that are destroying the rainforests. One encouragement is that the results of similar studies about other environmental issues suggest that older children seem to acquire the ability to appreciate, value and evaluate conflicting views. Environmental education offers an arena in which these skills can be developed, which is essential for these children as future decision-makers.
The study involved asking children a number of yes/no questions, such as 'Are there any rainforests in Africa?'
c
id_480
Adults and children are frequently confronted with statements about the alarming rate of loss of tropical rainforests. For example, one graphic illustration to which children might readily relate is the estimate that rainforests are being destroyed at a rate equivalent to one thousand football fields every forty minutes - about the duration of a normal classroom period. In the face of the frequent and often vivid media coverage, it is likely that children will have formed ideas about rainforests - what and where they are, why they are important, what endangers them - independent of any formal tuition. It is also possible that some of these ideas will be mistaken. Many studies have shown that children harbour misconceptions about pure, curriculum science. These misconceptions do not remain isolated but become incorporated into a multifaceted, but organised, conceptual framework, making it and the component ideas, some of which are erroneous, more robust but also accessible to modification. These ideas may be developed by children absorbing information through the popular media, and sometimes this information may be erroneous. It seems schools may not be providing an opportunity for children to re-express their ideas and so have them tested and refined by teachers and their peers. Despite the extensive coverage in the popular media of the destruction of rainforests, little formal information is available about children's ideas in this area. The aim of the present study is to start to provide such information, to help teachers design their educational strategies to build upon correct ideas and to displace misconceptions, and to plan programmes in environmental studies in their schools. The study surveys children's scientific knowledge of and attitudes to rainforests. Secondary school children were asked to complete a questionnaire containing five open-form questions. The most frequent responses to the first question were descriptions which are self-evident from the term 'rainforest'. Some children described them as damp, wet or hot. The second question concerned the geographical location of rainforests. The commonest responses were continents or countries: Africa (given by 43% of children), South America (30%), Brazil (25%). Some children also gave more general locations, such as being near the Equator. Responses to question three concerned the importance of rainforests. The dominant idea, raised by 64% of the pupils, was that rainforests provide animals with habitats. Fewer students responded that rainforests provide plant habitats, and even fewer mentioned the indigenous populations of rainforests. More girls (70%) than boys (60%) raised the idea of rainforests as animal habitats. Similarly, but at a lower level, more girls (13%) than boys (5%) said that rainforests provided human habitats. These observations are generally consistent with our previous studies of pupils' views about the use and conservation of rainforests, in which girls were shown to be more sympathetic to animals and expressed views which seem to place an intrinsic value on non-human animal life. The fourth question concerned the causes of the destruction of rainforests. Perhaps encouragingly, more than half of the pupils (59%) identified that it is human activities which are destroying rainforests, some personalising the responsibility by the use of terms such as 'we are'. About 18% of the pupils referred specifically to logging activity. One misconception, expressed by some 10% of the pupils, was that acid rain is responsible for rainforest destruction; a similar proportion said that pollution is destroying rainforests. Here, children are confusing rainforest destruction with damage to the forests of Western Europe by these factors. While two-fifths of the students provided the information that the rainforests provide oxygen, in some cases this response also embraced the misconception that rainforest destruction would reduce atmospheric oxygen, making the atmosphere incompatible with human life on Earth. In answer to the final question about the importance of rainforest conservation, the majority of children simply said that 'we need rainforests to survive'. Only a few of the pupils (6%) mentioned that rainforest destruction may contribute to global warming. This is surprising considering the high level of media coverage on this issue. Some children expressed the idea that the conservation of rainforests is not important. The results of this study suggest that certain ideas predominate in the thinking of children about rainforests. Pupils' responses indicate some misconceptions in basic scientific knowledge of rainforest ecosystems, such as their ideas about rainforests as habitats for animals, plants and humans and the relationship between climatic change and the destruction of rainforests. Pupils did not volunteer ideas that suggested that they appreciated the complexity of the causes of rainforest destruction. In other words, they gave no indication of an appreciation of either the range of ways in which rainforests are important or the complex social, economic and political factors which drive the activities that are destroying the rainforests. One encouragement is that the results of similar studies about other environmental issues suggest that older children seem to acquire the ability to appreciate, value and evaluate conflicting views. Environmental education offers an arena in which these skills can be developed, which is essential for these children as future decision-makers.
A second study has been planned to investigate primary school children's ideas about rainforests.
n
id_481
Advance notice of upcoming road works Sections of the East High Street will be closed from Monday, April 6, for up to 12 weeks to allow Campion Gas to replace and reinforce gas networks. Your local council is taking the closure as an opportunity to carry out street lighting improvements at the same time so as to minimise possible future disruption. Campion Gas will carry out the work in four phases: Phase 1: Broad Avenue, from the roundabout to the junction with Winton Street, will be closed, as will one lane on the Market Square. An alternative route will be available via East High Street, Winton Street and Castle Road. Phases 2 & 3: East High Street, from the crossroads at Broad Avenue to the junction with Winton Street, will be closed, as will the junction at Winton Street. An alternative route will be available via Winton Street, Castle Road and Broad Avenue. Phase 4: Eastern Road will be closed from the junction with South Street to No. 15 on that road. One lane only on South Street will also be closed. Exact details of the closures, including dates for each phase, are still to be finalised and will be released in due course.
Two projects will be carried out at the same time during the upcoming road works.
e
id_482
Advance notice of upcoming road works Sections of the East High Street will be closed from Monday, April 6, for up to 12 weeks to allow Campion Gas to replace and reinforce gas networks. Your local council is taking the closure as an opportunity to carry out street lighting improvements at the same time so as to minimise possible future disruption. Campion Gas will carry out the work in four phases: Phase 1: Broad Avenue, from the roundabout to the junction with Winton Street, will be closed, as will one lane on the Market Square. An alternative route will be available via East High Street, Winton Street and Castle Road. Phases 2 & 3: East High Street, from the crossroads at Broad Avenue to the junction with Winton Street, will be closed, as will the junction at Winton Street. An alternative route will be available via Winton Street, Castle Road and Broad Avenue. Phase 4: Eastern Road will be closed from the junction with South Street to No. 15 on that road. One lane only on South Street will also be closed. Exact details of the closures, including dates for each phase, are still to be finalised and will be released in due course.
Alternative bus services will be free of charge during the disturbances.
n
id_483
Advance notice of upcoming road works Sections of the East High Street will be closed from Monday, April 6, for up to 12 weeks to allow Campion Gas to replace and reinforce gas networks. Your local council is taking the closure as an opportunity to carry out street lighting improvements at the same time so as to minimise possible future disruption. Campion Gas will carry out the work in four phases: Phase 1: Broad Avenue, from the roundabout to the junction with Winton Street, will be closed, as will one lane on the Market Square. An alternative route will be available via East High Street, Winton Street and Castle Road. Phases 2 & 3: East High Street, from the crossroads at Broad Avenue to the junction with Winton Street, will be closed, as will the junction at Winton Street. An alternative route will be available via Winton Street, Castle Road and Broad Avenue. Phase 4: Eastern Road will be closed from the junction with South Street to No. 15 on that road. One lane only on South Street will also be closed. Exact details of the closures, including dates for each phase, are still to be finalised and will be released in due course.
People will not be able to use South Street in phase 4.
c
id_484
Advantages of public transport. A new study conducted for the World Bank by Murdoch University's Institute for Science and Technology Policy (ISTP) has demonstrated that public transport is more efficient than cars. The study compared the proportion of wealth poured into transport by thirty-seven cities around the world. This included both the public and private costs of building, maintaining and using a transport system. The study found that the Western Australian city of Perth is a good example of a city with minimal public transport. As a result, 17% of its wealth went into transport costs. Some European and Asian cities, on the other hand, spent as little as 5%. Professor Peter Newman, ISTP Director, pointed out that these more efficient cities were able to put the difference into attracting industry and jobs or creating a better place to live. According to Professor Newman, the larger Australian city of Melbourne is a rather unusual city in this sort of comparison. He describes it as two cities: 'A European city surrounded by a car-dependent one.' Melbourne's large tram network has made car use in the inner city much lower, but the outer suburbs have the same car-based structure as most other Australian cities. The explosion in demand for accommodation in the inner suburbs of Melbourne suggests a recent change in many people's preferences as to where they live. Newman says this is a new, broader way of considering public transport issues. In the past, the case for public transport has been made on the basis of environmental and social justice considerations rather than economics. Newman, however, believes the study demonstrates that the auto-dependent city model is inefficient and grossly inadequate in economic as well as environmental terms. Bicycle use was not included in the study, but Newman noted that the two most bicycle-friendly cities considered - Amsterdam and Copenhagen - were very efficient, even though their public transport systems were reasonable but not special. It is common for supporters of road networks to reject the models of cities with good public transport by arguing that such systems would not work in their particular city. One objection is climate. Some people say their city could not make more use of public transport because it is either too hot or too cold. Newman rejects this, pointing out that public transport has been successful in both Toronto and Singapore and, in fact, he has checked the use of cars against climate and found zero correlation. When it comes to other physical features, road lobbies are on stronger ground. For example, Newman accepts it would be hard for a city as hilly as Auckland to develop a really good rail network. However, he points out that both Hong Kong and Zurich have managed to make a success of their rail systems, heavy and light respectively, though there are few cities in the world as hilly. In fact, Newman believes the main reason for adopting one sort of transport over another is politics: 'The more democratic the process, the more public transport is favoured.' He considers Portland, Oregon, a perfect example of this. Some years ago, federal money was granted to build a new road. However, local pressure groups forced a referendum over whether to spend the money on light rail instead. The rail proposal won and the railway worked spectacularly well. In the years that have followed, more and more rail systems have been put in, dramatically changing the nature of the city. 
Newman notes that Portland has about the same population as Perth and had a similar population density at the time. In the UK, travel times to work had been stable for at least six centuries, with people avoiding situations that required them to spend more than half an hour travelling to work. Trains and cars initially allowed people to live at greater distances without taking longer to reach their destination. However, public infrastructure did not keep pace with urban sprawl, causing massive congestion problems which now make commuting times far higher. There is a widespread belief that increasing wealth encourages people to live farther out, where cars are the only viable transport. The example of European cities refutes that. They are often wealthier than their American counterparts but have not generated the same level of car use. In Stockholm, car use has actually fallen in recent years as the city has become larger and wealthier. A new study makes this point even more starkly. Developing cities in Asia, such as Jakarta and Bangkok, make more use of the car than wealthy Asian cities such as Tokyo and Singapore. In cities that developed later, the World Bank and Asian Development Bank discouraged the building of public transport, and people have been forced to rely on cars, creating the massive traffic jams that characterize those cities. Newman believes one of the best studies on how cities built for cars might be converted to rail use is The Urban Village report, which used Melbourne as an example. It found that pushing everyone into the city centre was not the best approach. Instead, the proposal advocated the creation of urban villages at hundreds of sites, mostly around railway stations. It was once assumed that improvements in telecommunications would lead to more dispersal in the population as people were no longer forced into cities. However, the ISTP team's research demonstrates that the population and job density of cities rose or remained constant in the 1980s after decades of decline. The explanation for this seems to be that it is valuable to place people working in related fields together. The new world will largely depend on human creativity, and creativity flourishes where people come together face-to-face.
In Melbourne, people prefer to live in the outer suburbs.
c
id_485
Advantages of public transport. A new study conducted for the World Bank by Murdoch University's Institute for Science and Technology Policy (ISTP) has demonstrated that public transport is more efficient than cars. The study compared the proportion of wealth poured into transport by thirty-seven cities around the world. This included both the public and private costs of building, maintaining and using a transport system. The study found that the Western Australian city of Perth is a good example of a city with minimal public transport. As a result, 17% of its wealth went into transport costs. Some European and Asian cities, on the other hand, spent as little as 5%. Professor Peter Newman, ISTP Director, pointed out that these more efficient cities were able to put the difference into attracting industry and jobs or creating a better place to live. According to Professor Newman, the larger Australian city of Melbourne is a rather unusual city in this sort of comparison. He describes it as two cities: 'A European city surrounded by a car-dependent one.' Melbourne's large tram network has made car use in the inner city much lower, but the outer suburbs have the same car-based structure as most other Australian cities. The explosion in demand for accommodation in the inner suburbs of Melbourne suggests a recent change in many people's preferences as to where they live. Newman says this is a new, broader way of considering public transport issues. In the past, the case for public transport has been made on the basis of environmental and social justice considerations rather than economics. Newman, however, believes the study demonstrates that the auto-dependent city model is inefficient and grossly inadequate in economic as well as environmental terms. Bicycle use was not included in the study, but Newman noted that the two most bicycle-friendly cities considered - Amsterdam and Copenhagen - were very efficient, even though their public transport systems were reasonable but not special. It is common for supporters of road networks to reject the models of cities with good public transport by arguing that such systems would not work in their particular city. One objection is climate. Some people say their city could not make more use of public transport because it is either too hot or too cold. Newman rejects this, pointing out that public transport has been successful in both Toronto and Singapore and, in fact, he has checked the use of cars against climate and found zero correlation. When it comes to other physical features, road lobbies are on stronger ground. For example, Newman accepts it would be hard for a city as hilly as Auckland to develop a really good rail network. However, he points out that both Hong Kong and Zurich have managed to make a success of their rail systems, heavy and light respectively, though there are few cities in the world as hilly. In fact, Newman believes the main reason for adopting one sort of transport over another is politics: 'The more democratic the process, the more public transport is favoured.' He considers Portland, Oregon, a perfect example of this. Some years ago, federal money was granted to build a new road. However, local pressure groups forced a referendum over whether to spend the money on light rail instead. The rail proposal won and the railway worked spectacularly well. In the years that have followed, more and more rail systems have been put in, dramatically changing the nature of the city. 
Newman notes that Portland has about the same population as Perth and had a similar population density at the time. In the UK, travel times to work had been stable for at least six centuries, with people avoiding situations that required them to spend more than half an hour travelling to work. Trains and cars initially allowed people to live at greater distances without taking longer to reach their destination. However, public infrastructure did not keep pace with urban sprawl, causing massive congestion problems which now make commuting times far higher. There is a widespread belief that increasing wealth encourages people to live farther out, where cars are the only viable transport. The example of European cities refutes that. They are often wealthier than their American counterparts but have not generated the same level of car use. In Stockholm, car use has actually fallen in recent years as the city has become larger and wealthier. A new study makes this point even more starkly. Developing cities in Asia, such as Jakarta and Bangkok, make more use of the car than wealthy Asian cities such as Tokyo and Singapore. In cities that developed later, the World Bank and Asian Development Bank discouraged the building of public transport, and people have been forced to rely on cars, creating the massive traffic jams that characterize those cities. Newman believes one of the best studies on how cities built for cars might be converted to rail use is The Urban Village report, which used Melbourne as an example. It found that pushing everyone into the city centre was not the best approach. Instead, the proposal advocated the creation of urban villages at hundreds of sites, mostly around railway stations. It was once assumed that improvements in telecommunications would lead to more dispersal in the population as people were no longer forced into cities. However, the ISTP team's research demonstrates that the population and job density of cities rose or remained constant in the 1980s after decades of decline. The explanation for this seems to be that it is valuable to place people working in related fields together. The new world will largely depend on human creativity, and creativity flourishes where people come together face-to-face.
The ISTP study examined public and private systems in every city of the world.
c
id_486
Advantages of public transport. A new study conducted for the World Bank by Murdoch University's Institute for Science and Technology Policy (ISTP) has demonstrated that public transport is more efficient than cars. The study compared the proportion of wealth poured into transport by thirty-seven cities around the world. This included both the public and private costs of building, maintaining and using a transport system. The study found that the Western Australian city of Perth is a good example of a city with minimal public transport. As a result, 17% of its wealth went into transport costs. Some European and Asian cities, on the other hand, spent as little as 5%. Professor Peter Newman, ISTP Director, pointed out that these more efficient cities were able to put the difference into attracting industry and jobs or creating a better place to live. According to Professor Newman, the larger Australian city of Melbourne is a rather unusual city in this sort of comparison. He describes it as two cities: 'A European city surrounded by a car-dependent one.' Melbourne's large tram network has made car use in the inner city much lower, but the outer suburbs have the same car-based structure as most other Australian cities. The explosion in demand for accommodation in the inner suburbs of Melbourne suggests a recent change in many people's preferences as to where they live. Newman says this is a new, broader way of considering public transport issues. In the past, the case for public transport has been made on the basis of environmental and social justice considerations rather than economics. Newman, however, believes the study demonstrates that the auto-dependent city model is inefficient and grossly inadequate in economic as well as environmental terms. Bicycle use was not included in the study, but Newman noted that the two most bicycle-friendly cities considered - Amsterdam and Copenhagen - were very efficient, even though their public transport systems were reasonable but not special. It is common for supporters of road networks to reject the models of cities with good public transport by arguing that such systems would not work in their particular city. One objection is climate. Some people say their city could not make more use of public transport because it is either too hot or too cold. Newman rejects this, pointing out that public transport has been successful in both Toronto and Singapore and, in fact, he has checked the use of cars against climate and found zero correlation. When it comes to other physical features, road lobbies are on stronger ground. For example, Newman accepts it would be hard for a city as hilly as Auckland to develop a really good rail network. However, he points out that both Hong Kong and Zurich have managed to make a success of their rail systems, heavy and light respectively, though there are few cities in the world as hilly. In fact, Newman believes the main reason for adopting one sort of transport over another is politics: 'The more democratic the process, the more public transport is favoured.' He considers Portland, Oregon, a perfect example of this. Some years ago, federal money was granted to build a new road. However, local pressure groups forced a referendum over whether to spend the money on light rail instead. The rail proposal won and the railway worked spectacularly well. In the years that have followed, more and more rail systems have been put in, dramatically changing the nature of the city. 
Newman notes that Portland has about the same population as Perth and had a similar population density at the time. In the UK, travel times to work had been stable for at least six centuries, with people avoiding situations that required them to spend more than half an hour travelling to work. Trains and cars initially allowed people to live at greater distances without taking longer to reach their destination. However, public infrastructure did not keep pace with urban sprawl, causing massive congestion problems which now make commuting times far higher. There is a widespread belief that increasing wealth encourages people to live farther out, where cars are the only viable transport. The example of European cities refutes that. They are often wealthier than their American counterparts but have not generated the same level of car use. In Stockholm, car use has actually fallen in recent years as the city has become larger and wealthier. A new study makes this point even more starkly. Developing cities in Asia, such as Jakarta and Bangkok, make more use of the car than wealthy Asian cities such as Tokyo and Singapore. In cities that developed later, the World Bank and Asian Development Bank discouraged the building of public transport, and people have been forced to rely on cars, creating the massive traffic jams that characterize those cities. Newman believes one of the best studies on how cities built for cars might be converted to rail use is The Urban Village report, which used Melbourne as an example. It found that pushing everyone into the city centre was not the best approach. Instead, the proposal advocated the creation of urban villages at hundreds of sites, mostly around railway stations. It was once assumed that improvements in telecommunications would lead to more dispersal in the population as people were no longer forced into cities. However, the ISTP team's research demonstrates that the population and job density of cities rose or remained constant in the 1980s after decades of decline. The explanation for this seems to be that it is valuable to place people working in related fields together. The new world will largely depend on human creativity, and creativity flourishes where people come together face-to-face.
Efficient cities can improve the quality of life for their inhabitants.
e
id_487
Advantages of public transport. A new study conducted for the World Bank by Murdoch University's Institute for Science and Technology Policy (ISTP) has demonstrated that public transport is more efficient than cars. The study compared the proportion of wealth poured into transport by thirty-seven cities around the world. This included both the public and private costs of building, maintaining and using a transport system. The study found that the Western Australian city of Perth is a good example of a city with minimal public transport. As a result, 17% of its wealth went into transport costs. Some European and Asian cities, on the other hand, spent as little as 5%. Professor Peter Newman, ISTP Director, pointed out that these more efficient cities were able to put the difference into attracting industry and jobs or creating a better place to live. According to Professor Newman, the larger Australian city of Melbourne is a rather unusual city in this sort of comparison. He describes it as two cities: 'A European city surrounded by a car-dependent one.' Melbourne's large tram network has made car use in the inner city much lower, but the outer suburbs have the same car-based structure as most other Australian cities. The explosion in demand for accommodation in the inner suburbs of Melbourne suggests a recent change in many people's preferences as to where they live. Newman says this is a new, broader way of considering public transport issues. In the past, the case for public transport has been made on the basis of environmental and social justice considerations rather than economics. Newman, however, believes the study demonstrates that the auto-dependent city model is inefficient and grossly inadequate in economic as well as environmental terms. Bicycle use was not included in the study, but Newman noted that the two most bicycle-friendly cities considered - Amsterdam and Copenhagen - were very efficient, even though their public transport systems were reasonable but not special. It is common for supporters of road networks to reject the models of cities with good public transport by arguing that such systems would not work in their particular city. One objection is climate. Some people say their city could not make more use of public transport because it is either too hot or too cold. Newman rejects this, pointing out that public transport has been successful in both Toronto and Singapore and, in fact, he has checked the use of cars against climate and found zero correlation. When it comes to other physical features, road lobbies are on stronger ground. For example, Newman accepts it would be hard for a city as hilly as Auckland to develop a really good rail network. However, he points out that both Hong Kong and Zurich have managed to make a success of their rail systems, heavy and light respectively, though there are few cities in the world as hilly. In fact, Newman believes the main reason for adopting one sort of transport over another is politics: 'The more democratic the process, the more public transport is favoured.' He considers Portland, Oregon, a perfect example of this. Some years ago, federal money was granted to build a new road. However, local pressure groups forced a referendum over whether to spend the money on light rail instead. The rail proposal won and the railway worked spectacularly well. In the years that have followed, more and more rail systems have been put in, dramatically changing the nature of the city. 
Newman notes that Portland has about the same population as Perth and had a similar population density at the time. In the UK, travel times to work had been stable for at least six centuries, with people avoiding situations that required them to spend more than half an hour travelling to work. Trains and cars initially allowed people to live at greater distances without taking longer to reach their destination. However, public infrastructure did not keep pace with urban sprawl, causing massive congestion problems which now make commuting times far higher. There is a widespread belief that increasing wealth encourages people to live farther out, where cars are the only viable transport. The example of European cities refutes that. They are often wealthier than their American counterparts but have not generated the same level of car use. In Stockholm, car use has actually fallen in recent years as the city has become larger and wealthier. A new study makes this point even more starkly. Developing cities in Asia, such as Jakarta and Bangkok, make more use of the car than wealthy Asian cities such as Tokyo and Singapore. In cities that developed later, the World Bank and Asian Development Bank discouraged the building of public transport, and people have been forced to rely on cars, creating the massive traffic jams that characterize those cities. Newman believes one of the best studies on how cities built for cars might be converted to rail use is The Urban Village report, which used Melbourne as an example. It found that pushing everyone into the city centre was not the best approach. Instead, the proposal advocated the creation of urban villages at hundreds of sites, mostly around railway stations. It was once assumed that improvements in telecommunications would lead to more dispersal in the population as people were no longer forced into cities. However, the ISTP team's research demonstrates that the population and job density of cities rose or remained constant in the 1980s after decades of decline. The explanation for this seems to be that it is valuable to place people working in related fields together. The new world will largely depend on human creativity, and creativity flourishes where people come together face-to-face.
An inner-city tram network is dangerous for car drivers.
n
id_488
Advantages of public transport. A new study conducted for the World Bank by Murdoch University's Institute for Science and Technology Policy (ISTP) has demonstrated that public transport is more efficient than cars. The study compared the proportion of wealth poured into transport by thirty-seven cities around the world. This included both the public and private costs of building, maintaining and using a transport system. The study found that the Western Australian city of Perth is a good example of a city with minimal public transport. As a result, 17% of its wealth went into transport costs. Some European and Asian cities, on the other hand, spent as little as 5%. Professor Peter Newman, ISTP Director, pointed out that these more efficient cities were able to put the difference into attracting industry and jobs or creating a better place to live. According to Professor Newman, the larger Australian city of Melbourne is a rather unusual city in this sort of comparison. He describes it as two cities: 'A European city surrounded by a car-dependent one.' Melbourne's large tram network has made car use in the inner city much lower, but the outer suburbs have the same car-based structure as most other Australian cities. The explosion in demand for accommodation in the inner suburbs of Melbourne suggests a recent change in many people's preferences as to where they live. Newman says this is a new, broader way of considering public transport issues. In the past, the case for public transport has been made on the basis of environmental and social justice considerations rather than economics. Newman, however, believes the study demonstrates that the auto-dependent city model is inefficient and grossly inadequate in economic as well as environmental terms. Bicycle use was not included in the study, but Newman noted that the two most bicycle-friendly cities considered - Amsterdam and Copenhagen - were very efficient, even though their public transport systems were reasonable but not special. It is common for supporters of road networks to reject the models of cities with good public transport by arguing that such systems would not work in their particular city. One objection is climate. Some people say their city could not make more use of public transport because it is either too hot or too cold. Newman rejects this, pointing out that public transport has been successful in both Toronto and Singapore and, in fact, he has checked the use of cars against climate and found zero correlation. When it comes to other physical features, road lobbies are on stronger ground. For example, Newman accepts it would be hard for a city as hilly as Auckland to develop a really good rail network. However, he points out that both Hong Kong and Zurich have managed to make a success of their rail systems, heavy and light respectively, though there are few cities in the world as hilly. In fact, Newman believes the main reason for adopting one sort of transport over another is politics: 'The more democratic the process, the more public transport is favoured.' He considers Portland, Oregon, a perfect example of this. Some years ago, federal money was granted to build a new road. However, local pressure groups forced a referendum over whether to spend the money on light rail instead. The rail proposal won and the railway worked spectacularly well. In the years that have followed, more and more rail systems have been put in, dramatically changing the nature of the city. 
Newman notes that Portland has about the same population as Perth and had a similar population density at the time. In the UK, travel times to work had been stable for at least six centuries, with people avoiding situations that required them to spend more than half an hour travelling to work. Trains and cars initially allowed people to live at greater distances without taking longer to reach their destination. However, public infrastructure did not keep pace with urban sprawl, causing massive congestion problems which now make commuting times far higher. There is a widespread belief that increasing wealth encourages people to live farther out, where cars are the only viable transport. The example of European cities refutes that. They are often wealthier than their American counterparts but have not generated the same level of car use. In Stockholm, car use has actually fallen in recent years as the city has become larger and wealthier. A new study makes this point even more starkly. Developing cities in Asia, such as Jakarta and Bangkok, make more use of the car than wealthy Asian cities such as Tokyo and Singapore. In cities that developed later, the World Bank and Asian Development Bank discouraged the building of public transport, and people have been forced to rely on cars, creating the massive traffic jams that characterize those cities. Newman believes one of the best studies on how cities built for cars might be converted to rail use is The Urban Village report, which used Melbourne as an example. It found that pushing everyone into the city centre was not the best approach. Instead, the proposal advocated the creation of urban villages at hundreds of sites, mostly around railway stations. It was once assumed that improvements in telecommunications would lead to more dispersal in the population as people were no longer forced into cities. However, the ISTP team's research demonstrates that the population and job density of cities rose or remained constant in the 1980s after decades of decline. The explanation for this seems to be that it is valuable to place people working in related fields together. The new world will largely depend on human creativity, and creativity flourishes where people come together face-to-face.
Cities with high levels of bicycle usage can be efficient even when public transport is only averagely good.
e
id_489
Advertisements in the United Kingdom must conform to the standards set by the Advertisement Standards Agency (ASA). This agency ensures that products are not falsely promoted and attaches a financial penalty to false statements. An example of this is Post-Production Enhancement (PPE). PPE is a process by which images are digitally corrected after they have been captured. PPE is commonly used in skin-care adverts, providing a smoother, younger or healthier appearance than the product actually delivers. Many companies find loopholes in the ASA regulations regarding PPE by stating that such a process has been used in small print at the bottom of the image. Such promotions escape the regulations set down by the Advertisement Standards Agency but can still be misleading.
The Advertisement Standards Agency (ASA) supports the use of PPE to promote skin care products.
c
id_490
Advertisements in the United Kingdom must conform to the standards set by the Advertisement Standards Agency (ASA). This agency ensures that products are not falsely promoted and attaches a financial penalty to false statements. An example of this is Post-Production Enhancement (PPE). PPE is a process by which images are digitally corrected after they have been captured. PPE is commonly used in skin-care adverts, providing a smoother, younger or healthier appearance than the product actually delivers. Many companies find loopholes in the ASA regulations regarding PPE by stating that such a process has been used in small print at the bottom of the image. Such promotions escape the regulations set down by the Advertisement Standards Agency but can still be misleading.
The Advertisement Standards Agency (ASA) imposes financial penalties on companies breaching advertisement regulations.
e
id_491
Advertisements in the United Kingdom must conform to the standards set by the Advertisement Standards Agency (ASA). This agency ensures that products are not falsely promoted and attaches a financial penalty to false statements. An example of this is Post-Production Enhancement (PPE). PPE is a process by which images are digitally corrected after they have been captured. PPE is commonly used in skin-care adverts, providing a smoother, younger or healthier appearance than the product actually delivers. Many companies find loopholes in the ASA regulations regarding PPE by stating that such a process has been used in small print at the bottom of the image. Such promotions escape the regulations set down by the Advertisement Standards Agency but can still be misleading.
The Advertisement Standards Agency (ASA) wants to achieve younger, healthier looking skin for people in the UK.
n
id_492
Advertisements in the United Kingdom must conform to the standards set by the Advertisement Standards Agency (ASA). This agency ensures that products are not falsely promoted and attaches a financial penalty to false statements. An example of this is Post-Production Enhancement (PPE). PPE is a process by which images are digitally corrected after they have been captured. PPE is commonly used in skin-care adverts, providing a smoother, younger or healthier appearance than the product actually delivers. Many companies find loopholes in the ASA regulations regarding PPE by stating that such a process has been used in small print at the bottom of the image. Such promotions escape the regulations set down by the Advertisement Standards Agency but can still be misleading.
The Advertisement Standards Agency (ASA) supports the use of digitally altered images to promote products.
c
id_493
Affordable Art Art prices have fallen drastically. The art market is being flooded with good material, much of it from big-name artists, including Pablo Picasso and Andy Warhol. Many pieces sell for less than you might expect, with items that would have made £20,000 two years ago fetching only £5,000 to £10,000 this autumn, according to Philip Hoffman, chief executive of the Fine Art Fund. Here, we round up what is looking cheap now, with a focus on works in the range of £500 to £10,000. Picasso is one of the most iconic names in art, yet some of his ceramics and lithographs fetched less than £1,000 each at Bonhams on Thursday. The low prices are because he produced so many of them. However, their value has increased steadily and his works will only become scarcer as examples are lost. Nic McElhatton, the chairman of Christie's South Kensington, says that the biggest 'affordable' category for top artists is 'multiples' - prints such as screenprints or lithographs in limited editions. In a Christie's sale this month, examples by Picasso, Matisse, Miro and Steinlen sold for less than £5,000 each. Alexandra Gill, the head of prints at the auction house, says that some prints are heavily hand-worked, or often coloured, by the artist, making them personalised. 'Howard Hodgkin's are a good example,' she says. 'There's still prejudice against prints, but for the artist it was another, equal, medium.' Mr Hoffman believes that these types of works are currently about as 'cheap as they can get' and will hold their value in the long run, though he admits that their sheer number means prices are unlikely to rise any time soon. It can be smarter to buy really good one-offs from lesser-known artists, he adds. A limited budget will not run to the blockbuster names you can obtain with multiples, but it will buy you work by Royal Academicians (RAs) and others whose pieces are held in national collections and who are given long write-ups in the art history books. For example, the Christie's sale of art from the Lehman Brothers collection on Wednesday will include Valley with cornflowers in oil by Anthony Gross (22 of whose works are held by the Tate), at £1,000 to £1,500. There is no reserve on items with estimates of £1,000 or less, and William Porter, who is in charge of the sale, expects some lots to go for 'very little'. The sale also has oils by the popular Mary Fedden (whose works are often reproduced on greetings cards), including Spanish House and The White Hyacinth, at £7,000 to £10,000 each. Large works by important Victorian painters are available in this sort of price range, too. These are affordable because their style has come to be considered 'uncool', but they please a large traditionalist following nonetheless. For example, the sale of 19th-century paintings at Bonhams on Wednesday has a Hampstead landscape by Frederick William Watts at £6,000 to £8,000 and a study of three Spanish girls by John Bagnold Burgess at £4,000 to £6,000. There are proto-social realist works depicting poverty, too, such as Uncared for by Augustus Edwin Mulready, at £10,000 to £15,000. Smaller auction houses offer a mix of periods and media. Tuesday's sale at Chiswick Auctions in West London includes a 1968 screenprint of Campbell's Tomato Soup by Andy Warhol, at £6,000 to £8,000, and 44 sketches by Augustus John, at £200 to £800 each. The latter have been restored after the artist tore them up. 
Meanwhile, the paintings and furniture sale at Duke's of Dorchester on Thursday has a coloured block print of Acrobats at Play by Marc Chagall, at £100 to £200, and a lithograph of a mother and child by Henry Moore, at £500 to £700. A group of five watercolour landscape studies by Jean-Baptiste Camille Corot is up at £1,500 to £3,000. Affordable works from lesser-known artists and younger markets are less safe, but they have the potential to offer greater rewards if you catch an emerging trend. Speculating on such trends is high-risk, so it is worthwhile only if you like what you buy (you get something beautiful to keep, whatever happens), can afford to lose the capital and enjoy the necessary research. A trend could be based on a country or region. China has rocketed, but other Asian and Middle Eastern markets have yet to really emerge. Mr Horwich mentions some 1970s Iraqi paintings that he sold this year in Dubai. 'They are part of a sophisticated scene that remains little-known.' Mr Hoffman tips Turkey and the Middle East. Meanwhile, the Sotheby's Impressionist and modern art sale in New York features a 1962 oil by the Vietnamese Vu Cao Dam, a graduate of Hanoi's Ecole des Beaux Arts de l'Indochine and friend of Chagall, at $8,000 to $12,000 (£5,088 to £7,632). The painting shows two girls boating in traditional ao dai dresses. A further way of making money is to try to spot talent in younger artists. The annual Frieze Art Fair in Regent's Park provides a chance to buy from 170 contemporary galleries. Or you could gamble on the future fame trajectory of an established artist's subject. For example, a Gerald Laing screenprint of The Kiss (2007), showing Amy Winehouse and her ex-husband, is up for £4,700 at the Multiplied fair.
Greeting cards can sell for up to £10,000 each.
n
id_494
Affordable Art Art prices have fallen drastically. The art market is being flooded with good material, much of it from big-name artists, including Pablo Picasso and Andy Warhol. Many pieces sell for less than you might expect, with items that would have made £20,000 two years ago fetching only £5,000 to £10,000 this autumn, according to Philip Hoffman, chief executive of the Fine Art Fund. Here, we round up what is looking cheap now, with a focus on works in the range of £500 to £10,000. Picasso is one of the most iconic names in art, yet some of his ceramics and lithographs fetched less than £1,000 each at Bonhams on Thursday. The low prices are because he produced so many of them. However, their value has increased steadily and his works will only become scarcer as examples are lost. Nic McElhatton, the chairman of Christie's South Kensington, says that the biggest 'affordable' category for top artists is 'multiples' - prints such as screenprints or lithographs in limited editions. In a Christie's sale this month, examples by Picasso, Matisse, Miro and Steinlen sold for less than £5,000 each. Alexandra Gill, the head of prints at the auction house, says that some prints are heavily hand-worked, or often coloured, by the artist, making them personalised. 'Howard Hodgkin's are a good example,' she says. 'There's still prejudice against prints, but for the artist it was another, equal, medium.' Mr Hoffman believes that these types of works are currently about as 'cheap as they can get' and will hold their value in the long run, though he admits that their sheer number means prices are unlikely to rise any time soon. It can be smarter to buy really good one-offs from lesser-known artists, he adds. A limited budget will not run to the blockbuster names you can obtain with multiples, but it will buy you work by Royal Academicians (RAs) and others whose pieces are held in national collections and who are given long write-ups in the art history books. For example, the Christie's sale of art from the Lehman Brothers collection on Wednesday will include Valley with cornflowers in oil by Anthony Gross (22 of whose works are held by the Tate), at £1,000 to £1,500. There is no reserve on items with estimates of £1,000 or less, and William Porter, who is in charge of the sale, expects some lots to go for 'very little'. The sale also has oils by the popular Mary Fedden (whose works are often reproduced on greetings cards), including Spanish House and The White Hyacinth, at £7,000 to £10,000 each. Large works by important Victorian painters are available in this sort of price range, too. These are affordable because their style has come to be considered 'uncool', but they please a large traditionalist following nonetheless. For example, the sale of 19th-century paintings at Bonhams on Wednesday has a Hampstead landscape by Frederick William Watts at £6,000 to £8,000 and a study of three Spanish girls by John Bagnold Burgess at £4,000 to £6,000. There are proto-social realist works depicting poverty, too, such as Uncared for by Augustus Edwin Mulready, at £10,000 to £15,000. Smaller auction houses offer a mix of periods and media. Tuesday's sale at Chiswick Auctions in West London includes a 1968 screenprint of Campbell's Tomato Soup by Andy Warhol, at £6,000 to £8,000, and 44 sketches by Augustus John, at £200 to £800 each. The latter have been restored after the artist tore them up. 
Meanwhile, the paintings and furniture sale at Duke's of Dorchester on Thursday has a coloured block print of Acrobats at Play by Marc Chagall, at £100 to £200, and a lithograph of a mother and child by Henry Moore, at £500 to £700. A group of five watercolour landscape studies by Jean-Baptiste Camille Corot is up at £1,500 to £3,000. Affordable works from lesser-known artists and younger markets are less safe, but they have the potential to offer greater rewards if you catch an emerging trend. Speculating on such trends is high-risk, so it is worthwhile only if you like what you buy (you get something beautiful to keep, whatever happens), can afford to lose the capital and enjoy the necessary research. A trend could be based on a country or region. China has rocketed, but other Asian and Middle Eastern markets have yet to really emerge. Mr Horwich mentions some 1970s Iraqi paintings that he sold this year in Dubai. 'They are part of a sophisticated scene that remains little-known.' Mr Hoffman tips Turkey and the Middle East. Meanwhile, the Sotheby's Impressionist and modern art sale in New York features a 1962 oil by the Vietnamese Vu Cao Dam, a graduate of Hanoi's Ecole des Beaux Arts de l'Indochine and friend of Chagall, at $8,000 to $12,000 (£5,088 to £7,632). The painting shows two girls boating in traditional ao dai dresses. A further way of making money is to try to spot talent in younger artists. The annual Frieze Art Fair in Regent's Park provides a chance to buy from 170 contemporary galleries. Or you could gamble on the future fame trajectory of an established artist's subject. For example, a Gerald Laing screenprint of The Kiss (2007), showing Amy Winehouse and her ex-husband, is up for £4,700 at the Multiplied fair.
It is possible to buy a painting by Picasso for less than £5,000.
e
id_495
Affordable Art Art prices have fallen drastically. The art market is being flooded with good material, much of it from big-name artists, including Pablo Picasso and Andy Warhol. Many pieces sell for less than you might expect, with items that would have made £20,000 two years ago fetching only £5,000 to £10,000 this autumn, according to Philip Hoffman, chief executive of the Fine Art Fund. Here, we round up what is looking cheap now, with a focus on works in the range of £500 to £10,000. Picasso is one of the most iconic names in art, yet some of his ceramics and lithographs fetched less than £1,000 each at Bonhams on Thursday. The low prices are because he produced so many of them. However, their value has increased steadily and his works will only become scarcer as examples are lost. Nic McElhatton, the chairman of Christie's South Kensington, says that the biggest 'affordable' category for top artists is 'multiples' - prints such as screenprints or lithographs in limited editions. In a Christie's sale this month, examples by Picasso, Matisse, Miro and Steinlen sold for less than £5,000 each. Alexandra Gill, the head of prints at the auction house, says that some prints are heavily hand-worked, or often coloured, by the artist, making them personalised. 'Howard Hodgkin's are a good example,' she says. 'There's still prejudice against prints, but for the artist it was another, equal, medium.' Mr Hoffman believes that these types of works are currently about as 'cheap as they can get' and will hold their value in the long run, though he admits that their sheer number means prices are unlikely to rise any time soon. It can be smarter to buy really good one-offs from lesser-known artists, he adds. A limited budget will not run to the blockbuster names you can obtain with multiples, but it will buy you work by Royal Academicians (RAs) and others whose pieces are held in national collections and who are given long write-ups in the art history books. For example, the Christie's sale of art from the Lehman Brothers collection on Wednesday will include Valley with cornflowers in oil by Anthony Gross (22 of whose works are held by the Tate), at £1,000 to £1,500. There is no reserve on items with estimates of £1,000 or less, and William Porter, who is in charge of the sale, expects some lots to go for 'very little'. The sale also has oils by the popular Mary Fedden (whose works are often reproduced on greetings cards), including Spanish House and The White Hyacinth, at £7,000 to £10,000 each. Large works by important Victorian painters are available in this sort of price range, too. These are affordable because their style has come to be considered 'uncool', but they please a large traditionalist following nonetheless. For example, the sale of 19th-century paintings at Bonhams on Wednesday has a Hampstead landscape by Frederick William Watts at £6,000 to £8,000 and a study of three Spanish girls by John Bagnold Burgess at £4,000 to £6,000. There are proto-social realist works depicting poverty, too, such as Uncared for by Augustus Edwin Mulready, at £10,000 to £15,000. Smaller auction houses offer a mix of periods and media. Tuesday's sale at Chiswick Auctions in West London includes a 1968 screenprint of Campbell's Tomato Soup by Andy Warhol, at £6,000 to £8,000, and 44 sketches by Augustus John, at £200 to £800 each. The latter have been restored after the artist tore them up. 
Meanwhile, the paintings and furniture sale at Duke's of Dorchester on Thursday has a coloured block print of Acrobats at Play by Marc Chagall, at £100 to £200, and a lithograph of a mother and child by Henry Moore, at £500 to £700. A group of five watercolour landscape studies by Jean-Baptiste Camille Corot is up at £1,500 to £3,000. Affordable works from lesser-known artists and younger markets are less safe, but they have the potential to offer greater rewards if you catch an emerging trend. Speculating on such trends is high-risk, so it is worthwhile only if you like what you buy (you get something beautiful to keep, whatever happens), can afford to lose the capital and enjoy the necessary research. A trend could be based on a country or region. China has rocketed, but other Asian and Middle Eastern markets have yet to really emerge. Mr Horwich mentions some 1970s Iraqi paintings that he sold this year in Dubai. 'They are part of a sophisticated scene that remains little-known.' Mr Hoffman tips Turkey and the Middle East. Meanwhile, the Sotheby's Impressionist and modern art sale in New York features a 1962 oil by the Vietnamese Vu Cao Dam, a graduate of Hanoi's Ecole des Beaux Arts de l'Indochine and friend of Chagall, at $8,000 to $12,000 (£5,088 to £7,632). The painting shows two girls boating in traditional ao dai dresses. A further way of making money is to try to spot talent in younger artists. The annual Frieze Art Fair in Regent's Park provides a chance to buy from 170 contemporary galleries. Or you could gamble on the future fame trajectory of an established artist's subject. For example, a Gerald Laing screenprint of The Kiss (2007), showing Amy Winehouse and her ex-husband, is up for £4,700 at the Multiplied fair.
It is not worth investing in new artists or markets because there is a great risk of losing all your money.
e
id_496
Affordable Art Art prices have fallen drastically. The art market is being flooded with good material, much of it from big-name artists, including Pablo Picasso and Andy Warhol. Many pieces sell for less than you might expect, with items that would have made £20,000 two years ago fetching only £5,000 to £10,000 this autumn, according to Philip Hoffman, chief executive of the Fine Art Fund. Here, we round up what is looking cheap now, with a focus on works in the range of £500 to £10,000. Picasso is one of the most iconic names in art, yet some of his ceramics and lithographs fetched less than £1,000 each at Bonhams on Thursday. The low prices are because he produced so many of them. However, their value has increased steadily and his works will only become scarcer as examples are lost. Nic McElhatton, the chairman of Christie's South Kensington, says that the biggest 'affordable' category for top artists is 'multiples' - prints such as screenprints or lithographs in limited editions. In a Christie's sale this month, examples by Picasso, Matisse, Miro and Steinlen sold for less than £5,000 each. Alexandra Gill, the head of prints at the auction house, says that some prints are heavily hand-worked, or often coloured, by the artist, making them personalised. 'Howard Hodgkin's are a good example,' she says. 'There's still prejudice against prints, but for the artist it was another, equal, medium.' Mr Hoffman believes that these types of works are currently about as 'cheap as they can get' and will hold their value in the long run, though he admits that their sheer number means prices are unlikely to rise any time soon. It can be smarter to buy really good one-offs from lesser-known artists, he adds. A limited budget will not run to the blockbuster names you can obtain with multiples, but it will buy you work by Royal Academicians (RAs) and others whose pieces are held in national collections and who are given long write-ups in the art history books. For example, the Christie's sale of art from the Lehman Brothers collection on Wednesday will include Valley with cornflowers in oil by Anthony Gross (22 of whose works are held by the Tate), at £1,000 to £1,500. There is no reserve on items with estimates of £1,000 or less, and William Porter, who is in charge of the sale, expects some lots to go for 'very little'. The sale also has oils by the popular Mary Fedden (whose works are often reproduced on greetings cards), including Spanish House and The White Hyacinth, at £7,000 to £10,000 each. Large works by important Victorian painters are available in this sort of price range, too. These are affordable because their style has come to be considered 'uncool', but they please a large traditionalist following nonetheless. For example, the sale of 19th-century paintings at Bonhams on Wednesday has a Hampstead landscape by Frederick William Watts at £6,000 to £8,000 and a study of three Spanish girls by John Bagnold Burgess at £4,000 to £6,000. There are proto-social realist works depicting poverty, too, such as Uncared for by Augustus Edwin Mulready, at £10,000 to £15,000. Smaller auction houses offer a mix of periods and media. Tuesday's sale at Chiswick Auctions in West London includes a 1968 screenprint of Campbell's Tomato Soup by Andy Warhol, at £6,000 to £8,000, and 44 sketches by Augustus John, at £200 to £800 each. The latter have been restored after the artist tore them up. 
Meanwhile, the paintings and furniture sale at Duke's of Dorchester on Thursday has a coloured block print of Acrobats at Play by Marc Chagall, at £100 to £200, and a lithograph of a mother and child by Henry Moore, at £500 to £700. A group of five watercolour landscape studies by Jean-Baptiste Camille Corot is up at £1,500 to £3,000. Affordable works from lesser-known artists and younger markets are less safe, but they have the potential to offer greater rewards if you catch an emerging trend. Speculating on such trends is high-risk, so it is worthwhile only if you like what you buy (you get something beautiful to keep, whatever happens), can afford to lose the capital and enjoy the necessary research. A trend could be based on a country or region. China has rocketed, but other Asian and Middle Eastern markets have yet to really emerge. Mr Horwich mentions some 1970s Iraqi paintings that he sold this year in Dubai. 'They are part of a sophisticated scene that remains little-known.' Mr Hoffman tips Turkey and the Middle East. Meanwhile, the Sotheby's Impressionist and modern art sale in New York features a 1962 oil by the Vietnamese Vu Cao Dam, a graduate of Hanoi's Ecole des Beaux Arts de l'Indochine and friend of Chagall, at $8,000 to $12,000 (£5,088 to £7,632). The painting shows two girls boating in traditional ao dai dresses. A further way of making money is to try to spot talent in younger artists. The annual Frieze Art Fair in Regent's Park provides a chance to buy from 170 contemporary galleries. Or you could gamble on the future fame trajectory of an established artist's subject. For example, a Gerald Laing screenprint of The Kiss (2007), showing Amy Winehouse and her ex-husband, is up for £4,700 at the Multiplied fair.
Picasso, Warhol, Matisse, Miro and Steinlen are big-name artists.
e
id_497
After providing LPG in easy-to-carry 5-kg cylinders, the government launched 2-kg bottles at local kirana stores and introduced online booking of new connections for subsidised cooking fuel.
Online booking will help customers get new cylinders delivered quickly.
c
id_498
After providing LPG in easy-to-carry 5-kg cylinders, the government launched 2-kg bottles at local kirana stores and introduced online booking of new connections for subsidised cooking fuel.
The 2-kg cylinder will cater to the LPG requirements of all sections of society, including economically weaker families.
c
id_499
After providing LPG in easy-to-carry 5-kg cylinders, the government launched 2-kg bottles at local kirana stores and introduced online booking of new connections for subsidised cooking fuel.
Online booking will end the hassle of customers running to gas agencies to get a new LPG connection.
e