The wavelength of UV rays is shorter than that of the violet end of the visible spectrum but longer than that of X-rays.
UV in the very shortest wavelength range (next to X-rays) is capable of ionizing atoms (see photoelectric effect), greatly changing their physical behavior.
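How sharply ionizing ability falls off with wavelength follows from the photon energy relation E = hc/λ. As a rough illustration, here is a short Python sketch (the ~10 eV ionization threshold quoted in the comment is an approximate, illustrative figure):

    # Photon energy E = h*c / wavelength, expressed in electronvolts.
    PLANCK = 6.626e-34      # Planck constant, J*s
    LIGHT_SPEED = 2.998e8   # speed of light, m/s
    EV = 1.602e-19          # joules per electronvolt

    def photon_energy_ev(wavelength_nm):
        """Photon energy in eV for a wavelength given in nanometres."""
        return PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9) / EV

    for nm in (400, 315, 280, 200, 100, 10):
        print(f"{nm:4d} nm -> {photon_energy_ev(nm):7.1f} eV")
    # Only the shortest UV photons exceed typical atomic ionization
    # energies (on the order of 10 eV).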
UV rays in the middle range cannot ionize atoms but can break chemical bonds, making molecules unusually reactive.
Sunburn, for example, is caused by the disruptive effects of middle-range UV radiation on skin cells, and this radiation is the main cause of skin cancer.
UV rays in the middle range can irreparably damage the complex DNA molecules in cells, producing thymine dimers, which makes this radiation a very potent mutagen.
The Sun emits significant UV radiation (about 10% of its total power), including extremely short wavelength UV that could potentially destroy most life on land (ocean water would provide some protection for life there).
However, most of the Sun's damaging UV wavelengths are absorbed by the atmosphere before they reach the surface.
The higher energy (shortest wavelength) ranges of UV (called "vacuum UV") are absorbed by nitrogen and, at longer wavelengths, by simple diatomic oxygen in the air.
Most of the UV in the mid-range of energy is blocked by the ozone layer, which absorbs strongly in the important 200–315 nm range, the lower-energy part of which is at wavelengths too long for ordinary dioxygen in air to absorb.
This leaves less than 3% of sunlight at sea level in UV, all of it at these lower energies: UV-A, along with some UV-B.
The very lowest energy range of UV between 315 nm and visible light (called UV-A) is not blocked well by the atmosphere, but does not cause sunburn and does less biological damage.
However, it is not harmless and does create oxygen radicals, mutations and skin damage.
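The conventional band boundaries implied above (UV-A from 315 nm up to visible light, UV-B below it, UV-C deeper still) can be captured in a small lookup. The exact cutoffs in this sketch follow the common ISO convention and are given only for illustration:

    def uv_band(wavelength_nm):
        """Classify a UV wavelength using the conventional band cutoffs."""
        if 100 <= wavelength_nm < 280:
            return "UV-C"   # absorbed by dioxygen and ozone before reaching the ground
        if 280 <= wavelength_nm < 315:
            return "UV-B"   # mostly absorbed by the ozone layer
        if 315 <= wavelength_nm < 400:
            return "UV-A"   # reaches the surface largely unblocked
        return "not UV"

    print(uv_band(250), uv_band(300), uv_band(350))  # UV-C UV-B UV-A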
See ultraviolet for more information.
========,3,X-rays.
After UV come X-rays, which, like the upper ranges of UV, are also ionizing.
However, due to their higher energies, X-rays can also interact with matter by means of the Compton effect.
Hard X-rays have shorter wavelengths than soft X-rays; because they can pass through many substances with little absorption, they can be used to 'see through' objects whose 'thickness' is less than the equivalent of a few meters of water.
One notable use is diagnostic X-ray imaging in medicine (a process known as radiography).
X-rays are useful as probes in high-energy physics.
In astronomy, the accretion disks around neutron stars and black holes emit X-rays, enabling studies of these phenomena.
X-rays are also emitted by the coronas of stars and are strongly emitted by some types of nebulae.
However, X-ray telescopes must be placed outside the Earth's atmosphere to see astronomical X-rays, since the great depth of the atmosphere of Earth is opaque to X-rays (with an areal density of 1000 grams per square centimeter), equivalent to 10 meters' thickness of water.
This is an amount sufficient to block almost all astronomical X-rays (and also astronomical gamma rays—see below).
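The 10-meter equivalence is simple arithmetic: divide the atmosphere's column mass by the density of water.

    # Sea-level atmospheric column mass (~1000 g per square centimeter,
    # i.e. sea-level pressure divided by g) as an equivalent water depth.
    column_mass_g_per_cm2 = 1000.0
    water_density_g_per_cm3 = 1.0
    depth_m = column_mass_g_per_cm2 / water_density_g_per_cm3 / 100
    print(depth_m, "meters of water")  # -> 10.0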
========,3,Gamma rays.
After hard X-rays come gamma rays, which were discovered by Paul Villard in 1900.
These are the most energetic photons, having no defined lower limit to their wavelength.
In astronomy they are valuable for studying high-energy objects or regions; however, as with X-rays, this can only be done with telescopes outside the Earth's atmosphere.
Gamma rays are used experimentally by physicists for their penetrating ability and are produced by a number of radioisotopes.
They are used for irradiation of foods and seeds for sterilization, and in medicine they are occasionally used in radiation cancer therapy.
More commonly, gamma rays are used for diagnostic imaging in nuclear medicine, an example being PET scans.
The wavelength of gamma rays can be measured with high accuracy through the effects of Compton scattering.
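That accuracy rests on the Compton relation Δλ = (h / (m_e c)) (1 − cos θ), which ties the wavelength shift to the scattering angle. A small illustrative computation:

    import math

    H = 6.626e-34      # Planck constant, J*s
    M_E = 9.109e-31    # electron mass, kg
    C = 2.998e8        # speed of light, m/s
    COMPTON_WL = H / (M_E * C)   # Compton wavelength, ~2.43e-12 m

    def compton_shift_m(theta_deg):
        """Wavelength shift (meters) for a photon scattered by angle theta."""
        return COMPTON_WL * (1 - math.cos(math.radians(theta_deg)))

    for angle in (30, 90, 180):
        print(f"{angle:3d} deg -> {compton_shift_m(angle):.3e} m")
    # The shift is largest (two Compton wavelengths) for backscatter.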
========,1,Preface.
In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert.
Expert systems are designed to solve complex problems by reasoning about knowledge, represented mainly as if–then rules rather than through conventional procedural code.
The first expert systems were created in the 1970s and then proliferated in the 1980s.
Expert systems were among the first truly successful forms of artificial intelligence (AI) software.
An expert system is divided into two subsystems: the inference engine and the knowledge base.
The knowledge base represents facts and rules.
The inference engine applies the rules to the known facts to deduce new facts.
Inference engines can also include explanation and debugging abilities.
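As a minimal, illustrative sketch of this two-subsystem split (the class names and rule format here are invented for this example, not taken from any particular system):

    # Knowledge base: facts plus if-then rules.
    # Inference engine: applies rules to known facts, deduces new facts,
    # and keeps a trace of fired rules so it can explain its reasoning.
    class KnowledgeBase:
        def __init__(self, facts, rules):
            self.facts = set(facts)
            self.rules = rules   # list of (name, antecedent, consequent)

    class InferenceEngine:
        def __init__(self, kb):
            self.kb = kb
            self.trace = []      # explanation: which rules fired, and why

        def run(self):
            changed = True
            while changed:
                changed = False
                for name, antecedent, consequent in self.kb.rules:
                    if antecedent in self.kb.facts and consequent not in self.kb.facts:
                        self.kb.facts.add(consequent)
                        self.trace.append(f"{name}: {antecedent} -> {consequent}")
                        changed = True

    kb = KnowledgeBase({"engine cranks but will not start"},
                       [("R1", "engine cranks but will not start", "check fuel supply")])
    engine = InferenceEngine(kb)
    engine.run()
    print(kb.facts)
    print(engine.trace)   # the chain of reasoning, for explanation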
========,2,History.
Expert systems were introduced by the Stanford Heuristic Programming Project led by Edward Feigenbaum, who is sometimes termed the "father of expert systems"; other key early contributors were Bruce Buchanan and Randall Davis.
The Stanford researchers tried to identify domains where expertise was highly valued and complex, such as diagnosing infectious diseases (Mycin) and identifying unknown organic molecules (Dendral).
Although the insight that "intelligent systems derive their power from the knowledge they possess rather than from the specific formalisms and inference schemes they use" – as Feigenbaum put it – seems rather straightforward in retrospect, it was a significant step forward at the time, since until then research had focused on attempts to develop very general-purpose problem solvers, such as those described by Allen Newell and Herb Simon.
Expert systems became some of the first truly successful forms of artificial intelligence (AI) software.
Research on expert systems was also active in France.
While in the US the focus tended to be on rule-based systems, first on systems hard coded on top of LISP programming environments and then on expert system shells developed by vendors such as Intellicorp, in France research focused more on systems developed in Prolog.
The advantage of expert system shells was that they were somewhat easier for nonprogrammers to use.
The advantage of Prolog environments was that they were not focused only on "if-then" rules; Prolog environments provided a much fuller realization of a complete first-order logic environment.
In the 1980s, expert systems proliferated.
Universities offered expert system courses and two-thirds of the Fortune 500 companies applied the technology in daily business activities.
Interest was international with the Fifth Generation Computer Systems project in Japan and increased research funding in Europe.
In 1981, the first IBM PC, with the PC DOS operating system, was introduced.
The imbalance between the high affordability of the relatively powerful chips in the PC and the much more expensive cost of processing power in the mainframes that dominated the corporate IT world at the time created a new type of architecture for corporate computing, termed the client-server model.
Calculations and reasoning could be performed at a fraction of the price of a mainframe using a PC.
This model also enabled business units to bypass corporate IT departments and directly build their own applications.
As a result, client-server computing had a tremendous impact on the expert systems market.
Expert systems were already outliers in much of the business world, requiring new skills that many IT departments did not have and were not eager to develop.
They were a natural fit for new PC-based shells that promised to put application development into the hands of end users and experts.
Until then, the main development environment for expert systems had been high-end Lisp machines from Xerox, Symbolics, and Texas Instruments.
With the rise of the PC and client-server computing, vendors such as Intellicorp and Inference Corporation shifted their priorities to developing PC-based tools.
Also, new vendors, often financed by venture capital (such as Aion Corporation, Neuron Data, Exsys, and many others), started appearing regularly.
The first expert system to be used in a design capacity for a large-scale product was the SID (Synthesis of Integral Design) software program, developed in 1982.
Written in LISP, SID generated 93% of the VAX 9000 CPU logic gates.
Input to the software was a set of rules created by several expert logic designers.
SID expanded the rules and generated software logic synthesis routines many times the size of the rules themselves.
Surprisingly, the combination of these rules resulted in an overall design that exceeded the capabilities of the experts themselves, and in many cases outperformed its human counterparts.
While some rules contradicted others, top-level control parameters for speed and area provided the tie-breaker.
The program was highly controversial, but used nevertheless due to project budget constraints.
It was terminated by logic designers after completion of the VAX 9000 project.
In the 1990s and beyond, the term "expert system" and the idea of a standalone AI system mostly dropped from the IT lexicon.
There are two interpretations of this.
One is that "expert systems failed": the IT world moved on because expert systems did not deliver on their overhyped promise.
The other is the mirror opposite, that expert systems were simply victims of their success: as IT professionals grasped concepts such as rule engines, such tools migrated from being standalone tools for developing special purpose "expert" systems, to being one of many standard tools.
Many of the leading major business application suite vendors (such as SAP, Siebel, and Oracle) integrated expert system abilities into their suites as a way of specifying business logic.
Rule engines are no longer used simply to define the rules an expert would use, but for any type of complex, volatile, and critical business logic; they often go hand in hand with business process automation and integration environments.
========,2,Software architecture.
An expert system is an example of a knowledge-based system.
Expert systems were the first commercial systems to use a knowledge-based architecture.
A knowledge-based system is essentially composed of two sub-systems: the knowledge base and the inference engine.
The knowledge base represents facts about the world.
In early expert systems such as Mycin and Dendral, these facts were represented mainly as flat assertions about variables.
In later expert systems developed with commercial shells, the knowledge base took on more structure and used concepts from object-oriented programming.
The world was represented as classes, subclasses, and instances and assertions were replaced by values of object instances.
The rules worked by querying and asserting values of the objects.
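A hedged sketch of that object-oriented style (the class and attribute names are invented for illustration): instead of flat assertions about variables, the knowledge base holds instances whose attribute values the rules query and set.

    class Patient:
        """An instance in the knowledge base; rules read and write its values."""
        def __init__(self, name):
            self.name = name
            self.temperature = None
            self.diagnosis = None

    # A rule queries attribute values and asserts a new value.
    def fever_rule(patient):
        if patient.temperature is not None and patient.temperature > 38.0:
            patient.diagnosis = "fever"

    p = Patient("example")
    p.temperature = 39.2
    fever_rule(p)
    print(p.diagnosis)  # fever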
The inference engine is an automated reasoning system that evaluates the current state of the knowledge base, applies relevant rules, and then asserts new knowledge into the knowledge base.
The inference engine may also include abilities for explanation, so that it can explain to a user the chain of reasoning used to arrive at a particular conclusion by tracing back over the firing of rules that resulted in the assertion.
There are mainly two modes for an inference engine: forward chaining and backward chaining.
The different approaches are dictated by whether the inference engine is being driven by the antecedent (left hand side) or the consequent (right hand side) of the rule.
In forward chaining an antecedent fires and asserts the consequent.
For example, consider the following rule:
R1: Man(x) => Mortal(x)
A simple example of forward chaining would be to assert Man(Socrates) to the system and then trigger the inference engine.
It would match R1 and assert Mortal(Socrates) into the knowledge base.
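A small forward-chaining sketch of exactly this example (the single-argument fact representation is a simplification made for this sketch):

    # Facts are (predicate, subject) pairs; R1: Man(x) => Mortal(x) is
    # stored as an (antecedent predicate, consequent predicate) pair.
    facts = {("Man", "Socrates")}
    rules = [("Man", "Mortal")]

    def forward_chain(facts, rules):
        changed = True
        while changed:
            changed = False
            for antecedent, consequent in rules:
                for predicate, subject in list(facts):
                    if predicate == antecedent and (consequent, subject) not in facts:
                        facts.add((consequent, subject))   # fire: assert consequent
                        changed = True
        return facts

    print(forward_chain(facts, rules))
    # {('Man', 'Socrates'), ('Mortal', 'Socrates')}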
Backward chaining is a bit less straightforward.
In backward chaining the system looks at possible conclusions and works backward to see if they might be true.
So if the system was trying to determine if Mortal(Socrates) is true it would find R1 and query the knowledge base to see if Man(Socrates) is true.
One of the early innovations of expert systems shells was to integrate inference engines with a user interface.
This could be especially powerful with backward chaining.
If the system needs to know a particular fact but does not, it can simply generate an input screen and ask the user whether the information is known.
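A matching backward-chaining sketch with that user-interface fallback (same simplified rule format as the forward-chaining sketch above): to prove a goal, the engine looks for a rule concluding it and recursively tries to prove the rule's antecedent; failing that, it asks the user.

    facts = set()
    rules = [("Man", "Mortal")]   # R1: Man(x) => Mortal(x)

    def prove(predicate, subject):
        if (predicate, subject) in facts:
            return True
        for antecedent, consequent in rules:
            if consequent == predicate and prove(antecedent, subject):
                facts.add((predicate, subject))
                return True
        # Unknown and underivable: fall back to asking the user.
        answer = input(f"Is {predicate}({subject}) true? [y/n] ")
        if answer.strip().lower() == "y":
            facts.add((predicate, subject))
            return True
        return False

    print(prove("Mortal", "Socrates"))
    # Asks "Is Man(Socrates) true?" and, on "y", concludes Mortal(Socrates).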