source | text
---|---
https://en.wikipedia.org/wiki/Efficiency%20of%20food%20conversion
|
The efficiency of conversion of ingested food to unit of body substance (ECI, also termed "growth efficiency") is an index measure of food fuel efficiency in animals. The ECI is a rough scale of how much of the food ingested is converted into growth in the animal's mass. It can be used to compare the growth efficiency as measured by the weight gain of different animals from consuming a given quantity of food relative to its size.
The ECI effectively represents efficiencies of both digestion (approximate digestibility or AD) and metabolic efficiency, or how well digested food is converted to mass (efficiency of conversion of digested food or ECD). The formula for the efficiency of food fuel is thus:
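The excerpt omits the equation itself; in the standard formulation (an assumption here, with AD and ECD expressed as fractions rather than percentages) the relation is simply the product of the two efficiencies:

```latex
\mathrm{ECI} = \mathrm{AD} \times \mathrm{ECD}
% expressed as percentages this carries an extra factor: ECI(%) = AD(%) x ECD(%) / 100
```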
These concepts are also very closely related to the feed conversion ratio (FCR) and feed efficiency.
|
https://en.wikipedia.org/wiki/Reconstruction%20from%20zero%20crossings
|
The problem of reconstruction from zero crossings can be stated as: given the zero crossings of a continuous signal, is it possible to reconstruct the signal (to within a constant factor)? Worded differently, what are the conditions under which a signal can be reconstructed from its zero crossings?
This problem has two parts. Firstly, proving that there is a unique reconstruction of the signal from the zero crossings, and secondly, how to actually go about reconstructing the signal. Though there have been quite a few attempts, no conclusive solution has yet been found. Ben Logan from Bell Labs wrote an article in 1977 in the Bell System Technical Journal giving some criteria under which unique reconstruction is possible. Though this has been a major step towards the solution, many people are dissatisfied with the type of condition that results from his article.
According to Logan, a signal is uniquely reconstructible from its zero crossings if:
The signal x(t) and its Hilbert transform x̂(t) have no zeros in common with each other.
The frequency-domain representation of the signal is at most 1 octave long, in other words, it is bandpass-limited between some frequencies B and 2B.
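A small numerical sketch of these conditions (the sample rate, band edges and filter order below are illustrative assumptions, not values from the article) builds a roughly one-octave band-pass signal, computes its Hilbert transform, and locates the zero crossings of each:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs, B = 8000.0, 500.0                          # sample rate and lower band edge (assumed)
t = np.arange(0, 1.0, 1/fs)
b, a = butter(6, [B/(fs/2), 2*B/(fs/2)], btype="bandpass")
x = filtfilt(b, a, np.random.randn(t.size))    # signal band-limited to roughly B..2B

xh = np.imag(hilbert(x))                       # Hilbert transform of x
zc_x = np.flatnonzero(np.diff(np.signbit(x).astype(int)))    # zero crossings of x
zc_xh = np.flatnonzero(np.diff(np.signbit(xh).astype(int)))  # zero crossings of x-hat
common = np.intersect1d(zc_x, zc_xh)           # shared zeros would violate Logan's first condition
print(len(zc_x), len(zc_xh), len(common))
```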
|
https://en.wikipedia.org/wiki/Radio%20spectrum%20scope
|
The radio spectrum scope (also radio panoramic receiver, panoramic adapter, pan receiver, pan adapter, panadapter, panoramic radio spectroscope, panoramoscope, panalyzor and band scope), invented by Marcel Wallace, measures and shows the magnitude of an input signal versus frequency within one or more radio bands, e.g. the shortwave bands. A spectrum scope is normally much cheaper than a spectrum analyzer, because the aim is neither high-quality frequency resolution nor high-quality signal strength measurement.
A spectrum scope can be used to:
quickly find the channels of known and unknown signals when receiving;
quickly find amateur radio activity, e.g. with the intent of communicating with the operators.
Modern spectrum scopes, like the Elecraft P3, also plot signal frequencies and amplitudes over time, in a rolling format called a waterfall plot.
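As a rough illustration of what such a rolling display computes, the sketch below renders a spectrogram of a simulated signal as a waterfall (the sample rate and test signal are made-up values for the sketch, and `scipy` and `matplotlib` are assumed to be available):

```python
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs = 48_000                                        # assumed sample rate
t = np.arange(0, 2.0, 1/fs)
freq = np.where(t < 1.0, 5_000, 12_000)            # test tone hops after one second
x = np.sin(2*np.pi*np.cumsum(freq)/fs) + 0.05*np.random.randn(t.size)

f, times, Sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=512)
plt.imshow(10*np.log10(Sxx), origin="lower", aspect="auto",
           extent=[times[0], times[-1], f[0], f[-1]])
plt.xlabel("Time (s)"); plt.ylabel("Frequency (Hz)")
plt.title("Waterfall (spectrogram) of the simulated band")
plt.show()
```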
|
https://en.wikipedia.org/wiki/Enterprise%20test%20software
|
Enterprise test software (ETS) is a type of software that electronics and other manufacturers use to standardize product testing enterprise-wide, rather than simply in the test engineering department. It is designed to integrate and synchronize test systems to other enterprise functions such as research and development (R&D), new product introduction (NPI), manufacturing, and supply chain, overseeing the collaborative test processes between engineers and managers in their respective departments.
Details
Like most enterprise software subcategories, ETS represents an evolution away from custom-made, in-house software development by original equipment manufacturers (OEM). It typically replaces a cumbersome, unsophisticated, test management infrastructure that manufacturers have to redesign for every new product launch. Some large companies, such as Alcatel, Cisco, and Nortel, develop ETS systems internally to standardize and accelerate their test engineering activities, while others such as Harris Corporation and Freescale Semiconductor choose commercial off-the-shelf ETS options for advantages that include test data management and report generation. This need results from the extensive characterization efforts associated with IC design, characterization, validation, and verification. ETS accelerates design improvements through test system management and version control.
ETS supports test system development and can be interconnected with manufacturing execution systems (MES), enterprise resource planning (ERP), and product lifecycle management (PLM) software packages to eliminate double-data entry and enable real-time information sharing throughout all company departments.
Enterprise-wide test applications
ETS covers five major enterprise-wide test applications.
Test and automation—By using ETS in conjunction with virtual instrumentation programming tools, design and test engineers avoid custom software programming unrelated to device characterization, and can ther
|
https://en.wikipedia.org/wiki/ZX8301
|
The ZX8301 is an Uncommitted Logic Array (ULA) integrated circuit designed for the Sinclair QL microcomputer. Also known as the "Master Chip", it provides the video display generator, divides a 15 MHz crystal to produce the 7.5 MHz system clock, decodes the ZX8302 register addresses, and handles DRAM refresh and bus control. The ZX8301 is IC22 on the QL motherboard.
The Sinclair Research business model had always been to work toward a maximum performance to price ratio (as was evidenced by the keyboard mechanisms in the QL and earlier Sinclair models). Unfortunately, this focus on price and performance often resulted in cost cutting in the design and build of Sinclair's machines. One such cost driven decision (failing to use a hardware buffer integrated circuit (IC) between the IC pins and the external RGB monitor connection) caused the ZX8301 to quickly develop a reputation for being fragile and easy to damage, particularly if the monitor plug was inserted or removed while the QL was powered up. Such action resulted in damage to the video circuitry and almost always required replacement of the ZX8301.
The ZX8301, when subsequently used in the International Computers Limited (ICL) One Per Desk, featured hardware buffering, and the chip proved to be much more reliable in this configuration.
See also
Sinclair QL
One Per Desk
List of Sinclair QL clones
|
https://en.wikipedia.org/wiki/FeaturePak
|
The FeaturePak standard defines a small form factor card for I/O expansion of embedded systems and other space-constrained computing applications. The cards are intended to be used for adding a wide range of capabilities, such as A/D, D/A, digital I/O, counter/timers, serial I/O, wired or wireless networking, image processing, GPS, etc. to their host systems.
FeaturePak cards plug into edgecard sockets, parallel to the mainboard, similarly to how SO-DIMM memory modules install in laptop or desktop PCs.
Socket Interface
The FeaturePak socket consists of a 230-pin "MXM" connector, which provides all connections to the FeaturePak card, including the host interface, external I/O signals, and power. (Note, however, that the FeaturePak specification's use of the MXM connector differs from that of Nvidia's MXM specification.)
Host interface connections include:
PCI Express -- up to two PCI Express x1 lanes
USB -- up to two USB 1.1 or 2.0 channels
Serial—one logic-level UART interface
SMBus
JTAG
PCI Express Reset
Several auxiliary signals
3V and 5V power and ground
Reserved lines (for future enhancements)
The balance of the 230-pin FeaturePak socket is allocated to I/O, in two groups:
Primary I/O—50 general purpose I/O lines, of which 34 pairs have enhanced isolation
Secondary I/O—50 general purpose I/O lines
The FeaturePak socket's MXM connector is claimed capable of 2.5 Gbit/s bandwidth on each pin, thereby supporting high-speed interfaces such as PCI Express, gigabit Ethernet, USB 2.0, among others. Enhanced I/O signal isolation within the Primary I/O group is accomplished by leaving alternate pins on the MXM connector interface unused.
FeaturePak cards are powered by 3.3V and use standard 3.3V logic levels. The socket also provides a 5V input option, for cards that require the additional voltage to power auxiliary functions.
Other than the provision of extra isolation for 34 signal pairs, there is no defined allocation of the signals within the Primary I/O and
|
https://en.wikipedia.org/wiki/Die%20shot
|
A die shot, or die photograph, is a photo or recording of the layout of an integrated circuit, showing its design with any packaging removed. A die shot can be viewed as a cross-section of an (almost) two-dimensional computer chip, on which the design and construction of the various tracks and components can be clearly seen. Due to the high complexity of modern computer chips, die shots are often displayed colourfully, with various parts coloured using special lighting or even manually.
Methods
A die shot is a picture of a computer chip without its housing. There are two ways to capture such a chip "naked" on a photo; by either taking the photo before a chip is packaged or by removing its package.
Avoiding the package
Taking a photo before the chip ends up in a housing is typically reserved to the chip manufacturer, because the chip is packaged fairly quickly in the production process to protect its sensitive, very small structures against external influences. However, manufacturers may be reluctant to share die shots to prevent competitors from easily gaining insight into the technological progress and complexity of a chip.
Removing the package
Removing the housing from a chip is typically a chemical process: a chip is so small and its parts are so microscopic that opening the housing (also called delidding) with tools such as saws, sanders or dremels could damage the chip in such a way that a die shot is no longer useful, or less so. For example, sulphuric acid can be used to dissolve the plastic housing of a chip. This is not a harmless process: sulphuric acid can cause serious harm to people, animals and the environment. Chips are immersed in a glass jar with sulphuric acid, after which the sulphuric acid is boiled for up to 45 minutes at a temperature of 337 degrees Celsius. Once the plastic housing has dissolved, there may be other processes to remove leftover carbon, such as a hot bath of concentrated nitric acid. After this, the contents of a chip a
|
https://en.wikipedia.org/wiki/Microbiology
|
Microbiology is the scientific study of microorganisms, those being unicellular (single-celled), multicellular (consisting of more than one cell), or acellular (lacking cells). Microbiology encompasses numerous sub-disciplines including virology, bacteriology, protistology, mycology, immunology, and parasitology.
Eukaryotic microorganisms possess membrane-bound organelles and include fungi and protists, whereas prokaryotic organisms—all of which are microorganisms—are conventionally classified as lacking membrane-bound organelles and include Bacteria and Archaea. Microbiologists traditionally relied on culture, staining, and microscopy for the isolation and identification of microorganisms. However, less than 1% of the microorganisms present in common environments can be cultured in isolation using current means. With the emergence of biotechnology, microbiologists currently rely on molecular biology tools such as DNA sequence-based identification, for example, the 16S rRNA gene sequence used for bacterial identification.
Viruses have been variably classified as organisms: they have been considered either as very simple microorganisms or as very complex molecules. Prions have never been considered microorganisms, but they have been investigated by virologists because the clinical effects traced to them were originally presumed to be due to chronic viral infections; the search led virologists to the discovery of "infectious proteins".
The existence of microorganisms was predicted many centuries before they were first observed, for example by the Jains in India and by Marcus Terentius Varro in ancient Rome. The first recorded microscope observation was of the fruiting bodies of moulds, by Robert Hooke in 1666, but the Jesuit priest Athanasius Kircher was likely the first to see microbes, which he mentioned observing in milk and putrid material in 1658. Antonie van Leeuwenhoek is considered a father of microbiology as he observed and experimented with microscopic organisms in the 1670s, us
|
https://en.wikipedia.org/wiki/Pulse-density%20modulation
|
Pulse-density modulation, or PDM, is a form of modulation used to represent an analog signal with a binary signal. In a PDM signal, specific amplitude values are not encoded into codewords of pulses of different weight as they would be in pulse-code modulation (PCM); rather, the relative density of the pulses corresponds to the analog signal's amplitude. The output of a 1-bit DAC is the same as the PDM encoding of the signal.
Description
In a pulse-density modulation bitstream, a 1 corresponds to a pulse of positive polarity (+A), and a 0 corresponds to a pulse of negative polarity (−A). Mathematically, this can be represented as
x[n] = A(2a[n] − 1),
where x[n] is the bipolar bitstream (either −A or +A), and a[n] is the corresponding binary bitstream (either 0 or 1).
A run consisting of all 1s would correspond to the maximum (positive) amplitude value, all 0s would correspond to the minimum (negative) amplitude value, and alternating 1s and 0s would correspond to a zero amplitude value. The continuous amplitude waveform is recovered by low-pass filtering the bipolar PDM bitstream.
Examples
A single period of the trigonometric sine function, sampled 100 times and represented as a PDM bitstream, is:
0101011011110111111111111111111111011111101101101010100100100000010000000000000000000001000010010101
Two periods of a higher frequency sine wave would appear as:
0101101111111111111101101010010000000000000100010011011101111111111111011010100100000000000000100101
In pulse-density modulation, a high density of 1s occurs at the peaks of the sine wave, while a low density of 1s occurs at the troughs of the sine wave.
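One common way to generate such a bitstream is first-order delta-sigma modulation (described under "Analog-to-digital conversion" below). The sketch that follows encodes one period of a sine and low-pass filters the result back to a waveform; the exact bit pattern depends on the modulator, so it will not reproduce the strings above verbatim:

```python
import numpy as np

def pdm_encode(signal):
    """First-order delta-sigma modulator: returns a 0/1 bitstream whose local
    density of 1s tracks the input amplitude (input assumed scaled to [-1, 1])."""
    bits = np.zeros(len(signal), dtype=int)
    integrator, feedback = 0.0, 0.0
    for n, sample in enumerate(signal):
        integrator += sample - feedback            # accumulate the quantization error
        bits[n] = 1 if integrator >= 0 else 0      # 1-bit quantizer
        feedback = 1.0 if bits[n] else -1.0        # feed back +A or -A (here A = 1)
    return bits

t = np.linspace(0, 2*np.pi, 100, endpoint=False)   # one period, 100 samples
bits = pdm_encode(np.sin(t))
print("".join(map(str, bits)))

bipolar = 2.0*bits - 1.0                           # map {0,1} back to {-A,+A}
recovered = np.convolve(bipolar, np.ones(8)/8, mode="same")  # crude low-pass filter
```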
Analog-to-digital conversion
A PDM bitstream is encoded from an analog signal through the process of a 1-bit delta-sigma modulation. This process uses a one-bit quantizer that produces either a 1 or 0 depending on the amplitude of the analog signal. A 1 or 0 corresponds to a signal that is all the way up or all the way down, respectively. Because in the real world, ana
|
https://en.wikipedia.org/wiki/Tomographic%20reconstruction
|
Tomographic reconstruction is a type of multidimensional inverse problem where the challenge is to yield an estimate of a specific system from a finite number of projections. The mathematical basis for tomographic imaging was laid down by Johann Radon. A notable example of applications is the reconstruction of computed tomography (CT), where cross-sectional images of patients are obtained in a non-invasive manner. Recent developments have seen the Radon transform and its inverse used for tasks related to realistic object insertion required for testing and evaluating computed tomography use in airport security.
This article applies in general to reconstruction methods for all kinds of tomography, but some of the terms and physical descriptions refer directly to the reconstruction of X-ray computed tomography.
Introducing formula
The projection of an object, resulting from the tomographic measurement process at a given angle θ, is made up of a set of line integrals (see Fig. 1). A set of many such projections under different angles organized in 2D is called a sinogram (see Fig. 3). In X-ray CT, the line integral represents the total attenuation of the beam of x-rays as it travels in a straight line through the object. As mentioned above, the resulting image is a 2D (or 3D) model of the attenuation coefficient. That is, we wish to find the image μ(x, y). The simplest and easiest way to visualise the method of scanning is the system of parallel projection, as used in the first scanners. For this discussion we consider the data to be collected as a series of parallel rays, at position r, across a projection at angle θ. This is repeated for various angles. Attenuation occurs exponentially in tissue:
I = I₀ exp(−∫ μ(x, y) ds),
where μ(x, y) is the attenuation coefficient as a function of position. Therefore, generally the total attenuation p of a ray at position r, on the projection at angle θ, is given by the line integral:
p_θ(r) = −ln(I/I₀) = ∫ μ(x, y) ds.
Using the coordinate system of Figure 1, the value of r onto which the point (x, y) will be projected
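A minimal numerical sketch of the parallel-projection geometry (unfiltered backprojection only, with an arbitrary square phantom and angle grid; `scipy` is assumed available, and a practical reconstruction would apply a ramp filter first):

```python
import numpy as np
from scipy.ndimage import rotate

def sinogram(image, angles_deg):
    """Forward projection: each column holds the parallel-ray line integrals
    p_theta(r) of the image at one projection angle."""
    return np.stack(
        [rotate(image, a, reshape=False, order=1).sum(axis=0) for a in angles_deg],
        axis=1)

def backproject(sino, angles_deg):
    """Unfiltered backprojection: smear each projection back across the plane
    and sum, giving a blurred estimate of the attenuation map."""
    n = sino.shape[0]
    recon = np.zeros((n, n))
    for i, a in enumerate(angles_deg):
        smear = np.tile(sino[:, i], (n, 1))          # constant along the ray direction
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles_deg)

phantom = np.zeros((128, 128)); phantom[48:80, 48:80] = 1.0   # toy attenuation map
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
recon = backproject(sinogram(phantom, angles), angles)
```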
|
https://en.wikipedia.org/wiki/ADvantage%20Framework
|
ADvantage Framework is a model-based systems engineering software platform used for a range of activities including building and operating real-time simulation-based lab test facilities for hardware-in-the-loop simulation purposes. ADvantage includes several desktop applications and run-time services software. The ADvantage run-time services combine a Real-Time Operating System (RTOS) layered on top of commercial computer equipment such as single board computers or standard PCs. The ADvantage tools include a development environment, a run-time environment, a plotting and analysis tool set, a fault insertion control application, and a vehicle network configuration and management tool that runs on a Windows or Linux desktop or laptop PC. The ADvantage user base is composed mainly of aerospace, defense, and naval/marine companies and academic researchers. Recent ADvantage real-time applications involved research and development of power systems applications including microgrid/smartgrid control and All-Electric Ship applications.
History
With roots in analog computer systems used for real-time applications where digital computers could not meet low-latency computational requirements, Applied Dynamics International moved from proprietary hardware architectures to commercial computing equipment over several decades. The Real-Time Station (RTS) was Applied Dynamics' first entry into using Commercial Off The Shelf (COTS) computer hardware. Included with the sale of the RTS was the Applied Dynamics software package called "SIMsystem". In 2001, version 7.0 of SIMsystem was released. From 2001 to 2006, Applied Dynamics reworked their software and hardware products to make better use of COTS processors, computer boards, and open-source software technology, and to better abstract software components from the hardware equipment. In 2006, Applied Dynamics announced a beta release of the "ADvantage Framework". The ADvantage brand provided an umbrella for the disparate software co
|
https://en.wikipedia.org/wiki/BioBlitz
|
A BioBlitz, also written without capitals as bioblitz, is an intense period of biological surveying in an attempt to record all the living species within a designated area. Groups of scientists, naturalists, and volunteers conduct an intensive field study over a continuous time period (e.g., usually 24 hours). There is a public component to many BioBlitzes, with the goal of getting the public interested in biodiversity. To encourage more public participation, these BioBlitzes are often held in urban parks or nature reserves close to cities. Research into the best practices for a successful BioBlitz has found that collaboration with local natural history museums can improve public participation. As well, BioBlitzes have been shown to be a successful tool in teaching post-secondary students about biodiversity.
Features
A BioBlitz has different opportunities and benefits than a traditional, scientific field study. Some of these potential benefits include:
Enjoyment – Instead of a highly structured and measured field survey, this sort of event has the atmosphere of a festival. The short time frame makes the search more exciting.
Local – The concept of biodiversity tends to be associated with coral reefs or tropical rainforests. A BioBlitz offers the chance for people to visit a nearby setting and see that local parks have biodiversity and are important to conserve.
Science – These one-day events gather basic taxonomic information on some groups of species.
Meet the Scientists – A BioBlitz encourages people to meet working scientists and ask them questions.
Identifying rare and unique species/groups – When volunteers and scientists work together, they are able to identify uncommon or special habitats for protection and management and, in some cases, rare species may be uncovered.
Documenting species occurrence – BioBlitzes do not provide a complete species inventory for a site, but they provide a species list which makes a basis for a more complete inventory and will of
|
https://en.wikipedia.org/wiki/Proxy%20server
|
In computer networking, a proxy server is a server application that acts as an intermediary between a client requesting a resource and the server providing that resource. It improves privacy, security, and performance in the process.
Instead of connecting directly to a server that can fulfill a request for a resource, such as a file or web page, the client directs the request to the proxy server, which evaluates the request and performs the required network transactions. This serves as a method to simplify or control the complexity of the request, or provide additional benefits such as load balancing, privacy, or security. Proxies were devised to add structure and encapsulation to distributed systems. A proxy server thus functions on behalf of the client when requesting service, potentially masking the true origin of the request to the resource server.
Types
A proxy server may reside on the user's local computer, or at any point between the user's computer and destination servers on the Internet. A proxy server that passes unmodified requests and responses is usually called a gateway or sometimes a tunneling proxy. A forward proxy is an Internet-facing proxy used to retrieve data from a wide range of sources (in most cases, anywhere on the Internet). A reverse proxy is usually an internal-facing proxy used as a front-end to control and protect access to a server on a private network. A reverse proxy commonly also performs tasks such as load-balancing, authentication, decryption and caching.
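A minimal sketch of the tunneling-proxy idea (the listen port and upstream address below are placeholders, and a real forward or reverse proxy would add request parsing, access control, caching, and so on):

```python
import socket
import threading

LISTEN_PORT = 8888                 # placeholder local port
UPSTREAM = ("example.org", 80)     # placeholder destination server

def pipe(src, dst):
    """Copy bytes from one socket to the other until the connection closes."""
    try:
        while (chunk := src.recv(4096)):
            dst.sendall(chunk)
    finally:
        dst.close()

def handle(client):
    """Relay a client connection to the upstream server in both directions."""
    upstream = socket.create_connection(UPSTREAM)
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

if __name__ == "__main__":
    with socket.create_server(("", LISTEN_PORT)) as server:
        while True:
            conn, _ = server.accept()
            handle(conn)
```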
Open proxies
An open proxy is a forwarding proxy server that is accessible by any Internet user. In 2008, network security expert Gordon Lyon estimated that "hundreds of thousands" of open proxies are operated on the Internet.
Anonymous proxy: This server reveals its identity as a proxy server but does not disclose the originating IP address of the client. Although this type of server can be discovered easily, it can be beneficial for some users as it hides the originating
|
https://en.wikipedia.org/wiki/Food%20and%20biological%20process%20engineering
|
Food and biological process engineering is a discipline concerned with applying principles of engineering to the fields of food production and distribution and biology. It is a broad field, with workers fulfilling a variety of roles ranging from design of food processing equipment to genetic modification of organisms. In some respects it is a combined field, drawing from the disciplines of food science and biological engineering to improve the earth's food supply.
Creating, processing, and storing food to support the world's population requires extensive interdisciplinary knowledge. Notably, there are many biological engineering processes within food engineering to manipulate the multitude of organisms involved in our complex food chain. Food safety in particular requires biological study to understand the microorganisms involved and how they affect humans. However, other aspects of food engineering, such as food storage and processing, also require extensive biological knowledge of both the food and the microorganisms that inhabit it. This food microbiology and biology knowledge becomes biological engineering when systems and processes are created to maintain desirable food properties and microorganisms while providing mechanisms for eliminating the unfavorable or dangerous ones.
Concepts
Many different concepts are involved in the field of food and biological process engineering. Below are listed several major ones.
Food science
The science behind food and food production involves studying how food behaves and how it can be improved. Researchers analyze longevity and composition (i.e., ingredients, vitamins, minerals, etc.) of foods, as well as how to ensure food safety.
Genetic engineering
Modern food and biological process engineering relies heavily on applications of genetic manipulation. By understanding plants and animals on the molecular level, scientists are able to engineer them with specific goals in mind.
Among the most notable applications of
|
https://en.wikipedia.org/wiki/Physical%20media
|
Physical media refers to the physical materials that are used to store or transmit information in data communications. These physical media are generally physical objects made of materials such as copper or glass. They can be touched and felt, and have physical properties such as weight and color. For a number of years, copper and glass were the only media used in computer networking.
The term physical media can also be used to describe data storage media like records, cassettes, VHS, LaserDiscs, CDs, DVDs, and Blu-rays, especially when compared with modern streaming media or content that has been downloaded from the Internet onto a hard drive or other storage device as files.
Types of physical media
Copper wire
Copper wire is currently the most commonly used type of physical media due to the abundance of copper in the world, as well as its ability to conduct electrical power. Copper is also one of the cheaper metals which makes it more feasible to use.
Most copper wires used in data communications today have eight strands of copper, organized in unshielded twisted pairs, or UTP. The wires are twisted around one another because this reduces electrical interference from outside sources. In addition to UTP, some wires use shielded twisted pairs (STP), which reduce electrical interference even further. The way copper wires are twisted around one another also has an effect on data rates. Category 3 cable (Cat3) has three to four twists per foot and can support speeds of 10 Mbit/s. Category 5 cable (Cat5) is newer and has three to four twists per inch, which results in a maximum data rate of 100 Mbit/s. In addition, there are category 5e (Cat5e) cables, which can support speeds of up to 1,000 Mbit/s, and more recently, category 6 cables (Cat6), which support data rates of up to 10,000 Mbit/s (i.e., 10 Gbit/s).
On average, copper wire costs around $1 per foot.
Optical fiber
Optical fiber is a thin and flexible piece of fiber made of glass or plastic. Unlike copper w
|
https://en.wikipedia.org/wiki/PI%20curve
|
The PI (or photosynthesis-irradiance) curve is a graphical representation of the empirical relationship between solar irradiance and photosynthesis. A derivation of the Michaelis–Menten curve, it shows the generally positive correlation between light intensity and photosynthetic rate. It is a plot of photosynthetic rate as a function of light intensity (irradiance).
Introduction
The PI curve can be applied to terrestrial and marine reactions but is most commonly used to explain ocean-dwelling phytoplankton's photosynthetic response to changes in light intensity. Using this tool to approximate biological productivity is important because phytoplankton contribute ~50% of total global carbon fixation and are important suppliers to the marine food web.
Within the scientific community, the curve can be referred to as the PI, PE or Light Response Curve. While individual researchers may have their own preferences, all are readily acceptable for use in the literature. Regardless of nomenclature, the photosynthetic rate in question can be described in terms of carbon (C) fixed per unit per time. Since individuals vary in size, it is also useful to normalise C concentration to Chlorophyll a (an important photosynthetic pigment) to account for specific biomass.
History
As far back as 1905, marine researchers attempted to develop an equation to be used as the standard in establishing the relationship between solar irradiance and photosynthetic production. Several groups had relative success, but in 1976 a comparison study conducted by Alan Jassby and Trevor Platt, researchers at the Bedford Institute of Oceanography in Dartmouth, Nova Scotia, reached a conclusion that solidified the way in which a PI curve is developed. After evaluating the eight most-used equations, Jassby and Platt argued that the PI curve can be best approximated by a hyperbolic tangent function, at least until photoinhibition is reached.
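A minimal sketch of the hyperbolic tangent form attributed above to Jassby and Platt (the parameterisation P(I) = P_max·tanh(αI/P_max) and the parameter values below are illustrative assumptions, not values from the article):

```python
import numpy as np

def pi_curve(irradiance, p_max=10.0, alpha=0.05):
    """Jassby-Platt style PI curve: initial slope alpha, saturating at p_max.
    Units are whatever the caller uses consistently (e.g. mg C per mg Chl-a per hour)."""
    return p_max * np.tanh(alpha * irradiance / p_max)

print(pi_curve(np.array([0.0, 50.0, 200.0, 800.0])))  # rises linearly, then saturates
```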
Equations
There are two simple derivations of the equatio
|
https://en.wikipedia.org/wiki/Spectral%20correlation%20density
|
The spectral correlation density (SCD), sometimes also called the cyclic spectral density or spectral correlation function, is a function that describes the cross-spectral density of all pairs of frequency-shifted versions of a time-series. The spectral correlation density applies only to cyclostationary processes because stationary processes do not exhibit spectral correlation. Spectral correlation has been used both in signal detection and signal classification. The spectral correlation density is closely related to each of the bilinear time-frequency distributions, but is not considered one of Cohen's class of distributions.
Definition
The cyclic auto-correlation function of a time-series x(t) is calculated as follows:
R_x^α(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t + τ/2) x*(t − τ/2) e^{−j2παt} dt,
where (*) denotes complex conjugation. By the Wiener–Khinchin theorem, the spectral correlation density is then its Fourier transform over the lag variable:
S_x^α(f) = ∫_{−∞}^{∞} R_x^α(τ) e^{−j2πfτ} dτ.
Estimation methods
The SCD is estimated in the digital domain with an arbitrary resolution in frequency and time. There are several estimation methods currently used in practice to efficiently estimate the spectral correlation for use in real-time analysis of signals due to its high computational complexity. Some of the more popular ones are the FFT Accumulation Method (FAM) and the Strip-Spectral Correlation Algorithm. A fast-spectral-correlation (FSC) algorithm has recently been introduced.
FFT accumulation method (FAM)
This section describes the steps to compute the SCD on a computer. With MATLAB or the NumPy library in Python, the steps are rather simple to implement. The FFT accumulation method (FAM) is a digital approach to calculating the SCD. Its input is a large block of IQ samples, and the output is a complex-valued image, the SCD.
Let the signal, or block of IQ samples, be a complex valued tensor, or multidimensional array, of shape , where each element is an IQ sample. The first step of the FAM is to break into a matrix of frames of size with overlap.
where is the separation betwee
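A simplified NumPy sketch of the FAM idea (channelize with a first FFT, phase-compensate, form conjugate products between channel pairs, then a second FFT across frames); the parameter choices and the coarse cycle-frequency grid kept here are simplifications for illustration, not the article's exact algorithm:

```python
import numpy as np

def fam_scd(x, Np=64, L=16):
    """Simplified FFT Accumulation Method estimate of |SCD| for a 1-D complex IQ block x.
    Np is the channelizer FFT length, L the hop between frames (Np/4 is a common choice)."""
    # 1. Channelize: overlapping, windowed frames of length Np with hop L
    P = (len(x) - Np) // L + 1
    frames = np.stack([x[p*L : p*L + Np] for p in range(P)]) * np.hamming(Np)

    # 2. First FFT gives the complex demodulates for each frame and channel
    XF = np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)        # shape (P, Np)

    # 3. Phase compensation (down-conversion) for the frame offset i*L
    k = np.arange(-Np // 2, Np // 2)
    i = np.arange(P)[:, None]
    XF = XF * np.exp(-2j * np.pi * k * i * L / Np)

    # 4. Conjugate products between channel pairs; a second FFT across frames
    #    refines the cycle frequency. Only its zero bin is kept here, so entry
    #    (k1, k2) sits at spectral frequency (k1 + k2)/2 and cycle frequency
    #    k1 - k2 on the coarse grid.
    scd = np.zeros((Np, Np))
    for k1 in range(Np):
        prod = XF[:, k1, None] * np.conj(XF)                        # shape (P, Np)
        scd[k1, :] = np.abs(np.fft.fft(prod, axis=0)[0, :]) / P
    return scd
```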
|
https://en.wikipedia.org/wiki/Equidimensionality
|
In mathematics, especially in topology, equidimensionality is the property of a space in which the local dimension is the same everywhere.
Definition (topology)
A topological space X is said to be equidimensional if for all points p in X, the dimension at p, that is dim p(X), is constant. The Euclidean space is an example of an equidimensional space. The disjoint union of two spaces X and Y (as topological spaces) of different dimension is an example of a non-equidimensional space.
Definition (algebraic geometry)
A scheme S is said to be equidimensional if every irreducible component has the same Krull dimension. For example, the affine scheme Spec k[x,y,z]/(xy,xz), which intuitively looks like a line intersecting a plane, is not equidimensional.
Cohen–Macaulay ring
An affine algebraic variety whose coordinate ring is a Cohen–Macaulay ring is equidimensional.
|
https://en.wikipedia.org/wiki/List%20of%20search%20appliance%20vendors
|
A search appliance is a type of computer which is attached to a corporate network for the purpose of indexing the content shared across that network in a way that is similar to a web search engine. It may be made accessible through a public web interface or restricted to users of that network. A search appliance is usually made up of: a gathering component, a standardizing component, a data storage area, a search component, a user interface component, and a management interface component.
Vendors of search appliances
Fabasoft
Google
InfoLibrarian Search Appliance™
Maxxcat
Searchdaimon
Thunderstone
Former/defunct vendors of search appliances
Black Tulip Systems
Google Search Appliance
Index Engines
Munax
Perfect Search Appliance
|
https://en.wikipedia.org/wiki/Eightfold%20way%20%28physics%29
|
In physics, the eightfold way is an organizational scheme for a class of subatomic particles known as hadrons that led to the development of the quark model. The American physicist Murray Gell-Mann and the Israeli physicist Yuval Ne'eman each proposed the idea independently in 1961.
The name comes from Gell-Mann's (1961) paper and is an allusion to the Noble Eightfold Path of Buddhism.
Background
By 1947, physicists believed that they had a good understanding of what the smallest bits of matter were. There were electrons, protons, neutrons, and photons (the components that make up the vast part of everyday experience such as atoms and light) along with a handful of unstable (i.e., radioactively decaying) exotic particles needed to explain cosmic-ray observations, such as pions, muons and the hypothesized neutrino. In addition, the discovery of the positron suggested there could be anti-particles for each of them. It was known that a "strong interaction" must exist to overcome electrostatic repulsion in atomic nuclei. Not all particles are influenced by this strong force, but those that are are dubbed "hadrons", which are now further classified as mesons (middle mass) and baryons (heavy weight).
But the discovery of the (neutral) kaon in late 1947 and the subsequent discovery of a positively charged kaon in 1949 extended the meson family in an unexpected way, and in 1950 the lambda particle did the same thing for the baryon family. These particles decay much more slowly than they are produced, a hint that there are two different physical processes involved. This was first suggested by Abraham Pais in 1952. In 1953, Murray Gell-Mann and, independently, a collaboration in Japan of Tadao Nakano and Kazuhiko Nishijima suggested a new conserved quantity now known as "strangeness" during their attempts to understand the growing collection of known particles.
The trend of discovering new mesons and baryons would continue through the 1950s as the number of known "elementary" particl
|
https://en.wikipedia.org/wiki/Biological%20interaction
|
In ecology, a biological interaction is the effect that a pair of organisms living together in a community have on each other. They can be either of the same species (intraspecific interactions) or of different species (interspecific interactions). These effects may be short-term or long-term; both often strongly influence the adaptation and evolution of the species involved. Biological interactions range from mutualism, beneficial to both partners, to competition, harmful to both partners. Interactions can be direct, when physical contact is established, or indirect, through intermediaries such as shared resources, territories, ecological services, metabolic waste, toxins or growth inhibitors. Such relationships can be characterized by their net effect, based on the individual effects on each of the organisms arising out of the relationship.
Several recent studies have suggested non-trophic species interactions such as habitat modification and mutualisms can be important determinants of food web structures. However, it remains unclear whether these findings generalize across ecosystems, and whether non-trophic interactions affect food webs randomly, or affect specific trophic levels or functional groups.
History
Although biological interactions had been studied earlier, more or less individually, Edward Haskell (1949) gave an integrative approach to the topic, proposing a classification of "co-actions", later adopted by biologists as "interactions". Close and long-term interactions are described as symbiosis; symbioses that are mutually beneficial are called mutualistic.
The term symbiosis was subject to a century-long debate about whether it should specifically denote mutualism, as in lichens, or also include parasites that benefit only themselves. This debate created two different classifications for biotic interactions, one based on time (long-term and short-term interactions), and the other based on the magnitude of the interaction force (competition/mutualism) or the effect on individual fitness, accordi
|
https://en.wikipedia.org/wiki/Software%20development%20process
|
In software engineering, a software development process is a process of planning and managing software development. It typically involves dividing software development work into smaller, parallel, or sequential steps or sub-processes to improve design and/or product management. It is also known as a software development life cycle (SDLC). The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application.
Most modern development processes can be vaguely described as agile. Other methodologies include waterfall, prototyping, iterative and incremental development, spiral development, rapid application development, and extreme programming.
A life-cycle "model" is sometimes considered a more general term for a category of methodologies and a software development "process" is a more specific term to refer to a specific process chosen by a specific organization. For example, there are many specific software development processes that fit the spiral life-cycle model. The field is often considered a subset of the systems development life cycle.
History
The software development methodology (also known as SDM) framework didn't emerge until the 1960s. According to Elliott (2004), the systems development life cycle (SDLC) can be considered to be the oldest formalized methodology framework for building information systems. The main idea of the SDLC has been "to pursue the development of information systems in a very deliberate, structured and methodical way, requiring each stage of the life cycle––from the inception of the idea to delivery of the final system––to be carried out rigidly and sequentially" within the context of the framework being applied. The main target of this methodology framework in the 1960s was "to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy
|
https://en.wikipedia.org/wiki/List%20of%20synchrotron%20radiation%20facilities
|
This is a table of synchrotrons and storage rings used as synchrotron radiation sources, and free electron lasers.
|
https://en.wikipedia.org/wiki/Alzheimer%27s%20Disease%20Neuroimaging%20Initiative
|
Alzheimer's Disease Neuroimaging Initiative (ADNI) is a multisite study that aims to improve clinical trials for the prevention and treatment of Alzheimer's disease (AD). This cooperative study combines expertise and funding from the private and public sector to study subjects with AD, as well as those who may develop AD and controls with no signs of cognitive impairment. Researchers at 63 sites in the US and Canada track the progression of AD in the human brain with neuroimaging, biochemical, and genetic biological markers. This knowledge helps to design better clinical trials for the prevention and treatment of AD. ADNI has made a global impact, firstly by developing a set of standardized protocols to allow the comparison of results from multiple centers, and secondly by its data-sharing policy, which makes all of the data available without embargo to qualified researchers worldwide. To date, over 1000 scientific publications have used ADNI data. A number of other initiatives related to AD and other diseases have been designed and implemented using ADNI as a model. ADNI has been running since 2004 and is currently funded until 2021.
Primary goals
To detect the earliest signs of AD and to track the disease using biomarkers.
To validate, standardize, and optimize biomarkers for clinical AD trials.
To make all data and samples available for sharing with clinical trial designers and scientists worldwide.
History and funding
The idea of a collaboration between public institutions and private pharmaceutical companies to fund a large biomarker project to study AD and to speed up progress toward effective treatments for the disease was conceived at the beginning of the millennium by Neil S. Buckholz at the National Institute on Aging (NIA) and Dr. William Potter, at Eli Lilly and Company. The Alzheimer's Disease Neuroimaging Initiative (ADNI) began in 2004 under the leadership of Dr. Michael W. Weiner, funded as a private – public partnership with $27 million contributed b
|
https://en.wikipedia.org/wiki/Classical%20probability%20density
|
The classical probability density is the probability density function that represents the likelihood of finding a particle in the vicinity of a certain location subject to a potential energy in a classical mechanical system. These probability densities are helpful in gaining insight into the correspondence principle and making connections between the quantum system under study and the classical limit.
Mathematical background
Consider the example of a simple harmonic oscillator initially at rest with amplitude A. Suppose that this system was placed inside a light-tight container such that one could only view it using a camera which can only take a snapshot of what's happening inside. Each snapshot has some probability of seeing the oscillator at any possible position along its trajectory. The classical probability density encapsulates which positions are more likely, which are less likely, the average position of the system, and so on. To derive this function, consider the fact that the positions where the oscillator is most likely to be found are those positions at which the oscillator spends most of its time. Indeed, the probability of being at a given x-value is proportional to the time spent in the vicinity of that x-value. If the oscillator spends an infinitesimal amount of time dt in the vicinity dx of a given x-value, then the probability of being in that vicinity will be
P(x) dx = N dt.
Since the force acting on the oscillator is conservative and the motion occurs over a finite domain, the motion will be cyclic with some period which will be denoted T. Since the probability of the oscillator being at any possible position between the minimum possible x-value and the maximum possible x-value must sum to 1, the normalization
∫ P(x) dx = N ∫ dt = 1
is used, where N is the normalization constant. Since the oscillating mass covers this range of positions in half its period (a full period goes from −A to +A then back to −A), the integral over dt is equal to T/2, which sets N to be 2/T.
Using the chain rule, dt can be put in te
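A quick numerical check (assuming the standard closed form P(x) = 1/(π√(A² − x²)) for the harmonic oscillator, a result not stated in the excerpt above) compares a histogram of snapshots taken uniformly in time with that density:

```python
import numpy as np

A = 1.0
phase = np.random.uniform(0.0, 2*np.pi, 200_000)   # snapshot times uniform over a period
x = A * np.cos(phase)                              # oscillator released from rest at +A

hist, edges = np.histogram(x, bins=50, range=(-A, A), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
classical = 1.0 / (np.pi * np.sqrt(A**2 - centers**2))   # assumed closed form
print(np.max(np.abs(hist - classical)))            # small, except near the turning points
```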
|
https://en.wikipedia.org/wiki/Fast%20Ethernet
|
In computer networking, Fast Ethernet physical layers carry traffic at the nominal rate of 100 Mbit/s. The prior Ethernet speed was 10 Mbit/s. Of the Fast Ethernet physical layers, 100BASE-TX is by far the most common.
Fast Ethernet was introduced in 1995 as the IEEE 802.3u standard and remained the fastest version of Ethernet for three years before the introduction of Gigabit Ethernet. The acronym GE/FE is sometimes used for devices supporting both standards.
Nomenclature
The 100 in the media type designation refers to the transmission speed of 100 Mbit/s, while the BASE refers to baseband signaling. The letter following the dash (T or F) refers to the physical medium that carries the signal (twisted pair or fiber, respectively), while the last character (X, 4, etc.) refers to the line code method used. Fast Ethernet is sometimes referred to as 100BASE-X, where X is a placeholder for the FX and TX variants.
General design
Fast Ethernet is an extension of the 10-megabit Ethernet standard. It runs on twisted pair or optical fiber cable in a star wired bus topology, similar to the IEEE standard 802.3i called 10BASE-T, itself an evolution of 10BASE5 (802.3) and 10BASE2 (802.3a). Fast Ethernet devices are generally backward compatible with existing 10BASE-T systems, enabling plug-and-play upgrades from 10BASE-T. Most switches and other networking devices with ports capable of Fast Ethernet can perform autonegotiation, sensing a piece of 10BASE-T equipment and setting the port to 10BASE-T half duplex if the 10BASE-T equipment cannot perform autonegotiation itself. The standard specifies the use of CSMA/CD for media access control. A full-duplex mode is also specified and in practice, all modern networks use Ethernet switches and operate in full-duplex mode, even as legacy devices that use half duplex still exist.
A Fast Ethernet adapter can be logically divided into a media access controller (MAC), which deals with the higher-level issues of medium availability, a
|
https://en.wikipedia.org/wiki/Classification%20of%20manifolds
|
In mathematics, specifically geometry and topology, the classification of manifolds is a basic question, about which much is known, and many open questions remain.
Main themes
Overview
Low-dimensional manifolds are classified by geometric structure; high-dimensional manifolds are classified algebraically, by surgery theory.
"Low dimensions" means dimensions up to 4; "high dimensions" means 5 or more dimensions. The case of dimension 4 is somehow a boundary case, as it manifests "low dimensional" behaviour smoothly (but not topologically); see discussion of "low" versus "high" dimension.
Different categories of manifolds yield different classifications; these are related by the notion of "structure", and more general categories have neater theories.
Positive curvature is constrained, negative curvature is generic.
The abstract classification of high-dimensional manifolds is ineffective: given two manifolds (presented as CW complexes, for instance), there is no algorithm to determine if they are isomorphic.
Different categories and additional structure
Formally, classifying manifolds is classifying objects up to isomorphism.
There are many different notions of "manifold", and corresponding notions of
"map between manifolds", each of which yields a different category and a different classification question.
These categories are related by forgetful functors: for instance, a differentiable manifold is also a topological manifold, and a differentiable map is also continuous, so there is a functor Diff → Top.
These functors are in general neither one-to-one nor onto; these failures are generally referred to in terms of "structure", as follows. A topological manifold that is in the image of this functor is said to "admit a differentiable structure", and the fiber over a given topological manifold consists of "the different differentiable structures on the given topological manifold".
Thus given two categories, the two natural questions are:
Which manifolds of a given type admit an additiona
|
https://en.wikipedia.org/wiki/Fault%20tolerance
|
Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of (or one or more faults within) some of its components. If its operating quality decreases at all, the decrease is proportional to the severity of the failure, as compared to a naively designed system, in which even a small failure can cause total breakdown. Fault tolerance is particularly sought after in high-availability, mission-critical, or even life-critical systems. The ability to maintain functionality when portions of a system break down is referred to as graceful degradation.
A fault-tolerant design enables a system to continue its intended operation, possibly at a reduced level, rather than failing completely, when some part of the system fails. The term is most commonly used to describe computer systems designed to continue more or less fully operational with, perhaps, a reduction in throughput or an increase in response time in the event of some partial failure. That is, the system as a whole is not stopped due to problems either in the hardware or the software. An example in another field is a motor vehicle designed so it will continue to be drivable if one of the tires is punctured, or a structure that is able to retain its integrity in the presence of damage due to causes such as fatigue, corrosion, manufacturing flaws, or impact.
Within the scope of an individual system, fault tolerance can be achieved by anticipating exceptional conditions and building the system to cope with them, and, in general, aiming for self-stabilization so that the system converges towards an error-free state. However, if the consequences of a system failure are catastrophic, or the cost of making it sufficiently reliable is very high, a better solution may be to use some form of duplication. In any case, if the consequence of a system failure is catastrophic, the system must be able to use reversion to fall back to a safe mode. This is similar to roll-back r
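A toy sketch of duplication combined with reversion to a safe mode (every name and value below is hypothetical, purely to illustrate the fallback pattern, not an API from any particular system):

```python
import time

def read_with_fallback(primary, replica, safe_default):
    """Try the primary component, fall back to a duplicate, and finally revert
    to a known-safe default instead of failing outright (graceful degradation)."""
    for source in (primary, replica):
        try:
            return source()
        except Exception:
            time.sleep(0.1)          # brief pause before trying the next component
    return safe_default              # reversion to a safe mode

# Hypothetical usage: the first source fails, the duplicate answers
value = read_with_fallback(lambda: 1 / 0, lambda: 42, safe_default=0)
print(value)   # 42
```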
|
https://en.wikipedia.org/wiki/Heat%20generation%20in%20integrated%20circuits
|
The problem of heat dissipation in integrated circuits has gained increasing interest in recent years due to the miniaturization of semiconductor devices. The temperature increase becomes relevant for wires with relatively small cross-sections, because such a temperature increase may affect the normal behavior of semiconductor devices.
Joule heating
Joule heating is a predominant heat mechanism for heat generation in integrated circuits and is an undesired effect.
Propagation
The governing equation of the physics of the problem to be analyzed is the heat diffusion equation. It relates the flux of heat in space, its variation in time and the generation of power:
∇²T + g/k = (1/α) ∂T/∂t,
where k is the thermal conductivity, ρ is the density of the medium, c_p is the specific heat, α = k/(ρ c_p) is the thermal diffusivity, and g is the rate of heat generation per unit volume. Heat diffuses from the source following this diffusion equation, and its solution in a homogeneous medium has a Gaussian distribution.
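A small explicit finite-difference sketch of the corresponding 1-D equation (all material values and grid sizes below are illustrative assumptions) shows a localized heat pulse spreading into the Gaussian-like profile described above:

```python
import numpy as np

alpha = 1e-4             # thermal diffusivity in m^2/s (assumed value)
dx, dt = 1e-3, 1e-3      # grid spacing and time step
n_steps, n_cells = 2000, 201

T = np.zeros(n_cells)
T[n_cells // 2] = 100.0              # brief localized temperature rise (heat source)
r = alpha * dt / dx**2               # = 0.1, satisfies the stability limit r <= 0.5
for _ in range(n_steps):
    T[1:-1] += r * (T[2:] - 2*T[1:-1] + T[:-2])   # interior update; boundaries held at 0
# T now approximates a Gaussian profile centred on the source
```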
See also
Thermal simulations for integrated circuits
Thermal design power
Thermal management in electronics
|
https://en.wikipedia.org/wiki/Developmental%20bioelectricity
|
Developmental bioelectricity is the regulation of cell, tissue, and organ-level patterning and behavior by electrical signals during the development of embryonic animals and plants. The charge carrier in developmental bioelectricity is the ion (a charged atom) rather than the electron, and an electric current and field is generated whenever a net ion flux occurs. Cells and tissues of all types use flows of ions to communicate electrically. Endogenous electric currents and fields, ion fluxes, and differences in resting potential across tissues comprise a signalling system. It functions along with biochemical factors, transcriptional networks, and other physical forces to regulate cell behaviour and large-scale patterning in processes such as embryogenesis, regeneration, and cancer suppression.
Overview
Developmental bioelectricity is a sub-discipline of biology, related to, but distinct from, neurophysiology and bioelectromagnetics. Developmental bioelectricity refers to the endogenous ion fluxes, transmembrane and transepithelial voltage gradients, and electric currents and fields produced and sustained in living cells and tissues. This electrical activity is often used during embryogenesis, regeneration, and cancer suppression—it is one layer of the complex field of signals that impinge upon all cells in vivo and regulate their interactions during pattern formation and maintenance. This is distinct from neural bioelectricity (classically termed electrophysiology), which refers to the rapid and transient spiking in well-recognized excitable cells like neurons and myocytes (muscle cells); and from bioelectromagnetics, which refers to the effects of applied electromagnetic radiation, and endogenous electromagnetics such as biophoton emission and magnetite.
The inside/outside discontinuity at the cell surface enabled by a lipid bilayer membrane (capacitor) is at the core of bioelectricity. The plasma membrane was an indispensable structure for the origin and evolut
|
https://en.wikipedia.org/wiki/List%20of%20physics%20mnemonics
|
This is a categorized list of physics mnemonics.
Mechanics
Work: formula
"Lots of Work makes me Mad!":
Work = Mad:
M=Mass
a=acceleration
d=distance
Thermodynamics
Ideal gas law
"Pure Virgins Never Really Tire":
PV=nRT
Gibbs's free energy formula
"Good Honey Tastes Sweet":
ΔG = ΔH − TΔS.
Electrodynamics
Ohm's Law
"Virgins Are Rare":
Volts = Amps x Resistance
Relation between Resistance and Resistivity
REPLAY
Resistance = ρ (Length/Area)
Inductive and Capacitive circuits
Once upon a time, the symbol E (for electromotive force) was used to designate voltages. Then, every student learned the phrase
ELI the ICE man
as a reminder that:
For an inductive (L) circuit, the EMF (E) is ahead of the current (I)
While for a capacitive circuit (C), the current (I) is ahead of the EMF (E).
And then they all lived happily ever after.
Open and Short circuits
"There are zero COVS grazing in the field!"
This is a mnemonic to remember the useful fact that:
The Current through an Open circuit is always zero
The Voltage across a Short circuit is always zero
Order of rainbow colors
ROYGBIV (in reverse VIBGYOR) is commonly used to remember the order of colors in the visible light spectrum, as seen in a rainbow.
"Richard of York gave battle in vain"
(red, orange, yellow, green, blue, indigo, violet).
Additionally, the fictitious name Roy G. Biv can be used as well.
(red, orange, yellow, green, blue, indigo, violet).
Speed of light
The phrase "We guarantee certainty, clearly referring to this light mnemonic." represents the speed of light in meters per second through the number of letters in each word: 299,792,458.
Electromagnetic spectrum
In the order of increasing frequency or decreasing wavelength of electromagnetic waves;
Road Men Invented Very Unique Xtra Gums
Ronald McDonald Invented Very Unusual & eXcellent Gherkins.
Remember My Instructions Visible Under X-Ray Glasses
Raging (or Red) Martians Invaded Venus Using X-ray Guns.
|
https://en.wikipedia.org/wiki/Equivalent%20rectangular%20bandwidth
|
The equivalent rectangular bandwidth or ERB is a measure used in psychoacoustics, which gives an approximation to the bandwidths of the filters in human hearing, using the unrealistic but convenient simplification of modeling the filters as rectangular band-pass filters, or band-stop filters, like in tailor-made notched music training (TMNMT).
Approximations
For moderate sound levels and young listeners, the bandwidth of human auditory filters can be approximated by the polynomial equation:
where f is the center frequency of the filter in kHz and ERB(f) is the bandwidth of the filter in Hz. The approximation is based on the results of a number of published simultaneous masking experiments and is valid from 0.1 to 6.5 kHz.
The above approximation was given in 1983 by Moore and Glasberg, who in 1990 published another (linear) approximation:
where f is in kHz and ERB(f) is in Hz. The approximation is applicable at moderate sound levels and for values of f between 0.1 and 10 kHz.
ERB-rate scale
The ERB-rate scale, or ERB-number scale, can be defined as a function ERBS(f) which returns the number of equivalent rectangular bandwidths below the given frequency f. The units of the ERB-number scale are known as ERBs, or as Cams, following a suggestion by Hartmann. The scale can be constructed by solving the following differential system of equations:
d ERBS(f)/df = 1/ERB(f),   ERBS(0) = 0.
The solution for ERBS(f) is the integral of the reciprocal of ERB(f) with the constant of integration set in such a way that ERBS(0) = 0.
Using the second order polynomial approximation () for ERB(f) yields:
where f is in kHz. The VOICEBOX speech processing toolbox for MATLAB implements the conversion and its inverse as:
where f is in Hz.
Using the linear approximation () for ERB(f) yields:
where f is in Hz.
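The excerpt omits the equations themselves; the sketch below encodes the widely cited Glasberg and Moore (1990) linear approximation and its ERB-rate conversion as an assumption, which may differ in detail from the expressions the article gives:

```python
import numpy as np

def erb_1990(f_khz):
    """Linear (1990) approximation: ERB in Hz for a centre frequency f in kHz (assumed form)."""
    return 24.7 * (4.37 * f_khz + 1.0)

def erbs_1990(f_hz):
    """ERB-rate (in Cams) under the same linear approximation; f in Hz (assumed form)."""
    return 21.4 * np.log10(4.37e-3 * f_hz + 1.0)

print(erb_1990(1.0))     # about 133 Hz at 1 kHz
print(erbs_1990(1000.0)) # about 15.6 Cams at 1 kHz
```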
See also
Critical bands
Bark scale
|
https://en.wikipedia.org/wiki/List%20of%20incomplete%20proofs
|
This page lists notable examples of incomplete published mathematical proofs. Most of these were accepted as correct for several years but later discovered to contain gaps. There are both examples where a complete proof was later found and where the alleged result turned out to be false.
Results later proved rigorously
Euclid's Elements. Euclid's proofs are essentially correct, but strictly speaking sometimes contain gaps because he tacitly uses some unstated assumptions, such as the existence of intersection points. In 1899 David Hilbert gave a complete set of (second order) axioms for Euclidean geometry, called Hilbert's axioms, and between 1926 and 1959 Tarski gave some complete sets of first order axioms, called Tarski's axioms.
Isoperimetric inequality. For three dimensions it states that the shape enclosing the maximum volume for its surface area is the sphere. It was formulated by Archimedes but not proved rigorously until the 19th century, by Hermann Schwarz.
Infinitesimals. In the 18th century there was widespread use of infinitesimals in calculus, though these were not really well defined. Calculus was put on firm foundations in the 19th century, and Robinson put infinitesimals in a rigorous basis with the introduction of nonstandard analysis in the 20th century.
Fundamental theorem of algebra (see History). Many incomplete or incorrect attempts were made at proving this theorem in the 18th century, including by d'Alembert (1746), Euler (1749), de Foncenex (1759), Lagrange (1772), Laplace (1795), Wood (1798), and Gauss (1799). The first rigorous proof was published by Argand in 1806.
Dirichlet's theorem on arithmetic progressions. In 1808 Legendre published an attempt at a proof of Dirichlet's theorem, but as Dupré pointed out in 1859 one of the lemmas used by Legendre is false. Dirichlet gave a complete proof in 1837.
The proofs of the Kronecker–Weber theorem by Kronecker (1853) and Weber (1886) both had gaps. The first complete proof was given
|
https://en.wikipedia.org/wiki/Exploitative%20interactions
|
Exploitative interactions, also known as enemy–victim interactions, is a part of consumer–resource interactions where one organism (the enemy) is the consumer of another organism (the victim), typically in a harmful manner. Some examples of this include predator–prey interactions, host–pathogen interactions, and brood parasitism.
In exploitative interactions, the enemy and the victim may often coevolve with each other. How exactly they coevolve depends on many factors, such as population density. One evolutionary consequence of exploitative interactions is antagonistic coevolution. This can occur because of resistance, where the victim attempts to decrease the number of successful attacks by the enemy, which encourages the enemy to evolve in response, thus resulting in a coevolutionary arms race. On the other hand, toleration, where the victim attempts to decrease the effect on fitness that successful enemy attacks have, may also evolve.
Exploitative interactions can have significant biological effects. For example, exploitative interactions between a predator and prey can result in the extinction of the victim (the prey, in this case), as the predator, by definition, kills the prey, and thus reduces its population. Another effect of these interactions is in the coevolutionary "hot" and "cold spots" put forth by geographic mosaic theory. In this case, coevolution caused by resistance would create "hot spots" of coevolutionary activity in an otherwise uniform environment, whereas "cold spots" would be created by the evolution of tolerance, which generally does not create a coevolutionary arms race.
See also
Biological interactions
Coevolution
Consumer–resource interactions
Host-pathogen interaction
Parasitism
Predation
|
https://en.wikipedia.org/wiki/Ultra-large-scale%20systems
|
Ultra-large-scale system (ULSS) is a term used in fields including Computer Science, Software Engineering and Systems Engineering to refer to software intensive systems with unprecedented amounts of hardware, lines of source code, numbers of users, and volumes of data. The scale of these systems gives rise to many problems: they will be developed and used by many stakeholders across multiple organizations, often with conflicting purposes and needs; they will be constructed from heterogeneous parts with complex dependencies and emergent properties; they will be continuously evolving; and software, hardware and human failures will be the norm, not the exception. The term 'ultra-large-scale system' was introduced by Northrop and others to describe challenges facing the United States Department of Defense. The term has subsequently been used to discuss challenges in many areas, including the computerization of financial markets. The term "ultra-large-scale system" (ULSS) is sometimes used interchangeably with the term "large-scale complex IT system" (LSCITS). These two terms were introduced at similar times to describe similar problems, the former being coined in the United States and the latter in the United Kingdom.
Background
The term ultra-large-scale system was introduced in a 2006 report from the Software Engineering Institute at Carnegie Mellon University authored by Linda Northrop and colleagues. The report explained that software intensive systems are reaching unprecedented scales (by measures including lines of code; numbers of users and stakeholders; purposes the system is put to; amounts of data stored, accessed, manipulated, and refined; numbers of connections and interdependencies among components; and numbers of hardware elements). When systems become ultra-large-scale, traditional approaches to engineering and management will no longer be adequate. The report argues that the problem is no longer of engineering systems or system of systems, but of engine
|
https://en.wikipedia.org/wiki/List%20of%20unsolved%20problems%20in%20physics
|
The following is a list of notable unsolved problems grouped into broad areas of physics.
Some of the major unsolved problems in physics are theoretical, meaning that existing theories seem incapable of explaining a certain observed phenomenon or experimental result. The others are experimental, meaning that there is a difficulty in creating an experiment to test a proposed theory or investigate a phenomenon in greater detail.
There are still some questions beyond the Standard Model of physics, such as the strong CP problem, neutrino mass, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself—the Standard Model is inconsistent with that of general relativity, to the point that one or both theories break down under certain conditions (for example within known spacetime singularities like the Big Bang and the centres of black holes beyond the event horizon).
General physics
Theory of everything: Is there a singular, all-encompassing, coherent theoretical framework of physics that fully explains and links together all physical aspects of the universe?
Dimensionless physical constants: At the present time, the values of various dimensionless physical constants cannot be calculated; they can be determined only by physical measurement. What is the minimum number of dimensionless physical constants from which all other dimensionless physical constants can be derived? Are dimensional physical constants necessary at all?
Quantum gravity
Quantum gravity: Can quantum mechanics and general relativity be realized as a fully consistent theory (perhaps as a quantum field theory)? Is spacetime fundamentally continuous or discrete? Would a consistent theory involve a force mediated by a hypothetical graviton, or be a product of a discrete structure of spacetime itself (as in loop quantum gravity)? Are there deviations from the predictions of general relativity at very s
|
https://en.wikipedia.org/wiki/RSCS
|
Remote Spooling Communications Subsystem or RSCS is a subsystem ("virtual machine" in VM terminology) of IBM's VM/370 operating system which accepts files transmitted to it from local or remote system and users and transmits them to destination local or remote users and systems. RSCS also transmits commands and messages among users and systems.
RSCS is the software that powered the world’s largest network (or network of networks) prior to the Internet and directly influenced both internet development and user acceptance of networking between independently managed organizations. RSCS was developed by Edson Hendricks and T.C. Hartmann. Both as an IBM product and as an IBM internal network, it later became known as VNET. The network interfaces continued to be called the RSCS compatible protocols and were used to interconnect with IBM systems other than VM systems (typically MVS) and non-IBM computers.
The history of this program, and its influence on IBM and the IBM user community, is described in contemporaneous accounts and interviews by Melinda Varian. Technical goals and innovations are described by Creasy and by Hendricks and Hartmann in seminal papers. Among academic users, the same software was employed by BITNET and related networks worldwide.
Background
RSCS arose because people throughout IBM recognized a need to exchange files. Hendricks’s solution was CPREMOTE, which he completed by mid-1969. CPREMOTE was the first example of a “service virtual machine” and was motivated partly by the desire to prove the usefulness of that concept.
In 1971, Norman L. Rasmussen, Manager of IBM’s Cambridge Scientific Center (CSC), asked Hendricks to find a way for the CSC machine to communicate with machines at IBM’s other Scientific Centers. CPREMOTE had taught Hendricks so much about how a communications facility would be used and what function was needed in such a facility, that he decided to discard it and begin again with a new design. After additional iterat
|
https://en.wikipedia.org/wiki/Multi-core%20processor
|
A multi-core processor is a microprocessor on a single integrated circuit with two or more separate processing units, called cores (for example, dual-core or quad-core), each of which reads and executes program instructions. The instructions are ordinary CPU instructions (such as add, move data, and branch) but the single processor can run instructions on separate cores at the same time, increasing overall speed for programs that support multithreading or other parallel computing techniques. Manufacturers typically integrate the cores onto a single integrated circuit die (known as a chip multiprocessor or CMP) or onto multiple dies in a single chip package. The microprocessors currently used in almost all personal computers are multi-core.
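As a rough illustration of the parallelism such processors enable (not specific to any processor or vendor mentioned here), the following Python sketch splits a CPU-bound task across worker processes so that separate cores can run them concurrently; the workload and worker count are illustrative assumptions:

```python
import math
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division -- deliberately CPU-bound."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, math.isqrt(n) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limit, workers = 200_000, 4          # assumed core count; adjust to the machine
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    # Each chunk runs in its own process, so separate cores can work in parallel.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)
```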
A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not share caches, and they may implement message passing or shared-memory inter-core communication methods. Common network topologies used to interconnect cores include bus, ring, two-dimensional mesh, and crossbar. Homogeneous multi-core systems include only identical cores; heterogeneous multi-core systems have cores that are not identical (e.g. big.LITTLE have heterogeneous cores that share the same instruction set, while AMD Accelerated Processing Units have cores that do not share the same instruction set). Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW, superscalar, vector, or multithreading.
Multi-core processors are widely used across many application domains, including general-purpose, embedded, network, digital signal processing (DSP), and graphics (GPU). Core counts reach into the dozens for general-purpose processors and exceed 10,000 for specialized chips, while in supercomputers (i.e. clusters of chips) the count can go over 10 million (and in one case up to 20 million processing elements total in addition to h
|
https://en.wikipedia.org/wiki/System%20in%20a%20package
|
A system in a package (SiP) or system-in-package is a number of integrated circuits (ICs) enclosed in one chip carrier package or encompassing an IC package substrate that may include passive components and perform the functions of an entire system. The ICs may be stacked using package on package, placed side by side, and/or embedded in the substrate. The SiP performs all or most of the functions of an electronic system, and is typically used when designing components for mobile phones, digital music players, etc. Dies containing integrated circuits may be stacked vertically on a substrate. They are internally connected by fine wires that are bonded to the package. Alternatively, with a flip chip technology, solder bumps are used to join stacked chips together. SiPs are like systems on a chip (SoCs) but less tightly integrated and not on a single semiconductor die.
Technology
SiP dies can be stacked vertically or tiled horizontally, with techniques like chiplets or quilt packaging, unlike less dense multi-chip modules, which place dies horizontally on a carrier. SiPs connect the dies with standard off-chip wire bonds or solder bumps, unlike slightly denser three-dimensional integrated circuits which connect stacked silicon dies with conductors running through the die. Many different 3D packaging techniques have been developed for stacking many fairly standard chip dies into a compact area.
SiPs can contain several chips—such as a specialized processor, DRAM, flash memory—combined with passive components—resistors and capacitors—all mounted on the same substrate. This means that a complete functional unit can be built in a multi-chip package, so that few external components need to be added to make it work. This is particularly valuable in space constrained environments like MP3 players and mobile phones as it reduces the complexity of the printed circuit board and overall design. Despite its benefits, this technique decreases the yield of fabrication since any d
|
https://en.wikipedia.org/wiki/Grey%20box%20model
|
In mathematics, statistics, and computational modelling, a grey box model combines a partial theoretical structure with data to complete the model. The theoretical structure may vary from information on the smoothness of results, to models that need only parameter values from data or existing literature. Thus, almost all models are grey box models, as opposed to black box models where no model form is assumed, or white box models that are purely theoretical. Some models assume a special form such as a linear regression or neural network. These have special analysis methods. In particular, linear regression techniques are much more efficient than most non-linear techniques. The model can be deterministic or stochastic (i.e. containing random components) depending on its planned use.
Model form
The general case is a non-linear model with a partial theoretical structure and some unknown parts derived from data. Models with unlike theoretical structures need to be evaluated individually, possibly using simulated annealing or genetic algorithms.
Within a particular model structure, parameters or variable parameter relations may need to be found. For a particular structure it is arbitrarily assumed that the data consists of sets of feed vectors f, product vectors p, and operating condition vectors c. Typically c will contain values extracted from f, as well as other values. In many cases a model can be converted to a function of the form:
m(f,p,q)
where the vector function m gives the errors between the data p, and the model predictions. The vector q gives some variable parameters that are the model's unknown parts.
The parameters q vary with the operating conditions c in a manner to be determined. This relation can be specified as q = Ac where A is a matrix of unknown coefficients, and c as in linear regression includes a constant term and possibly transformed values of the original operating conditions to obtain non-linear relations between the original operating condition
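To make the two-step idea concrete, here is a minimal sketch under strong simplifying assumptions (a scalar toy model, synthetic data, and an ordinary least-squares fit of q = Ac); it is only meant to illustrate the workflow described above, not any particular published method:

```python
import numpy as np

rng = np.random.default_rng(0)

def model_error(f, p, q):
    """m(f, p, q): toy theoretical structure -- predicted product is f scaled by q."""
    return p - q * f                       # zero when the model matches the data

# Synthetic data: the operating condition c drives the "unknown" parameter q = A c.
A_true = np.array([0.5, 2.0])              # [intercept, slope], unknown in practice
c = np.column_stack([np.ones(50), rng.uniform(0, 10, 50)])   # constant term + condition
q_true = c @ A_true
f = rng.uniform(1, 5, 50)
p = q_true * f + rng.normal(0, 0.05, 50)   # observed products with noise

# Step 1: estimate q at each operating point from the data (here simply q_i = p_i / f_i).
q_est = p / f
# Step 2: fit the parameter/condition relation q = A c by linear least squares.
A_fit, *_ = np.linalg.lstsq(c, q_est, rcond=None)
print("estimated A:", A_fit)               # should be close to A_true
print("residual norm:", np.linalg.norm(model_error(f, p, c @ A_fit)))
```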
|
https://en.wikipedia.org/wiki/Penrose%20graphical%20notation
|
In mathematics and physics, Penrose graphical notation or tensor diagram notation is a (usually handwritten) visual depiction of multilinear functions or tensors proposed by Roger Penrose in 1971. A diagram in the notation consists of several shapes linked together by lines.
The notation widely appears in modern quantum theory, particularly in matrix product states and quantum circuits. In particular, categorical quantum mechanics, which includes the ZX-calculus, is a fully comprehensive reformulation of quantum theory in terms of Penrose diagrams, and is now widely used in the quantum industry.
The notation has been studied extensively by Predrag Cvitanović, who used it, along with Feynman's diagrams and other related notations in developing "birdtracks", a group-theoretical diagram to classify the classical Lie groups. Penrose's notation has also been generalized using representation theory to spin networks in physics, and with the presence of matrix groups to trace diagrams in linear algebra.
Interpretations
Multilinear algebra
In the language of multilinear algebra, each shape represents a multilinear function. The lines attached to shapes represent the inputs or outputs of a function, and attaching shapes together in some way is essentially the composition of functions.
Tensors
In the language of tensor algebra, a particular tensor is associated with a particular shape with many lines projecting upwards and downwards, corresponding to abstract upper and lower indices of tensors respectively. Connecting lines between two shapes corresponds to contraction of indices. One advantage of this notation is that one does not have to invent new letters for new indices. This notation is also explicitly basis-independent.
Matrices
Each shape represents a matrix; tensor multiplication is done horizontally, and matrix multiplication is done vertically.
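The index-contraction reading of the diagrams can be mimicked numerically; the sketch below (my own illustration, using NumPy's einsum) treats repeated index letters as the wires joining tensor nodes:

```python
import numpy as np

# In diagrammatic terms, each tensor is a node and each repeated index is a wire
# joining two nodes; summing over the repeated index is the contraction.
rng = np.random.default_rng(1)
T = rng.standard_normal((3, 4, 5))   # T^{a}_{bc}: one upper line, two lower lines
S = rng.standard_normal((5, 3))      # S^{c}_{a}

# Connect the "c" lines and the "a" lines of T and S: contract over both indices.
result = np.einsum('abc,ca->b', T, S)     # a rank-1 tensor with the free index b
print(result.shape)                        # (4,)

# Matrix multiplication is the special case of joining one wire between two nodes.
M = rng.standard_normal((3, 3))
N = rng.standard_normal((3, 3))
assert np.allclose(np.einsum('ij,jk->ik', M, N), M @ N)
```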
Representation of special tensors
Metric tensor
The metric tensor is represented by a U-shaped loop or an upside-
|
https://en.wikipedia.org/wiki/5%20nm%20process
|
In semiconductor manufacturing, the International Roadmap for Devices and Systems defines the 5 nm process as the MOSFET technology node following the 7 nm node. In 2020, Samsung and TSMC entered volume production of 5 nm chips, manufactured for companies including Apple, Marvell, Huawei and Qualcomm.
The term "5 nm" has no relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors being 5 nanometers in size. According to the projections contained in the 2021 update of the International Roadmap for Devices and Systems published by IEEE Standards Association Industry Connection, a "5 nm node is expected to have a contacted gate pitch of 51 nanometers and a tightest metal pitch of 30 nanometers". However, in real world commercial practice, "5 nm" is used primarily as a marketing term by individual microchip manufacturers to refer to a new, improved generation of silicon semiconductor chips in terms of increased transistor density (i.e. a higher degree of miniaturization), increased speed and reduced power consumption compared to the previous 7 nm process.
History
Background
Quantum tunnelling effects through the gate oxide layer on 7 nm and 5 nm transistors became increasingly difficult to manage using existing semiconductor processes. Single-transistor devices below 7 nm were first demonstrated by researchers in the early 2000s. In 2002, an IBM research team including Bruce Doris, Omer Dokumaci, Meikei Ieong and Anda Mocuta fabricated a 6-nanometre silicon-on-insulator (SOI) MOSFET.
In 2003, a Japanese research team at NEC, led by Hitoshi Wakabayashi and Shigeharu Yamagami, fabricated the first 5 nm MOSFET.
In 2015, IMEC and Cadence had fabricated 5 nm test chips. The fabricated test chips are not fully functional devices but rather are to evaluate patterning of interconnect layers.
In 2015, Intel described a lateral nanowire (or gate-all-around) FET concept for the 5 nm node.
In 2017, IBM revealed that it had
|
https://en.wikipedia.org/wiki/Somos%27%20quadratic%20recurrence%20constant
|
In mathematics, Somos' quadratic recurrence constant, named after Michael Somos, is the number
σ = sqrt(1 · sqrt(2 · sqrt(3 · sqrt(4 · ···)))) ≈ 1.6617.
This can be easily re-written into the far more quickly converging product representation
which can then be compactly represented in infinite product form by:
σ = ∏_{k=1}^∞ k^(1/2^k).
The constant σ arises when studying the asymptotic behaviour of the sequence g_0 = 1, g_n = n·g_{n−1}^2 for n ≥ 1,
with first few terms 1, 1, 2, 12, 576, 1658880, ... . This sequence can be shown to have asymptotic behaviour as follows:
Guillera and Sondow give a representation in terms of the derivative of the Lerch transcendent:
where ln is the natural logarithm and Φ(z, s, q) is the Lerch transcendent.
Finally,
.
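A short numerical sketch (illustrative only) reproduces both the recurrence and the infinite-product form of the constant:

```python
import math

# Partial product sigma ≈ prod_{k=1}^{K} k**(1 / 2**k); the terms shrink very fast.
def somos_constant(terms=60):
    log_sigma = sum(math.log(k) / 2.0**k for k in range(2, terms + 1))
    return math.exp(log_sigma)

# The quadratic recurrence g_0 = 1, g_n = n * g_{n-1}**2 that the constant governs.
def somos_sequence(n):
    g, out = 1, [1]
    for k in range(1, n + 1):
        g = k * g * g
        out.append(g)
    return out

print(somos_sequence(5))        # [1, 1, 2, 12, 576, 1658880]
print(somos_constant())         # about 1.6617
```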
Notes
|
https://en.wikipedia.org/wiki/In-circuit%20emulation
|
In-circuit emulation (ICE) is the use of a hardware device, or in-circuit emulator, to debug the software of an embedded system. It operates by using a processor with the additional ability to support debugging operations, as well as to carry out the main function of the system. Particularly for older systems, with limited processors, this usually involved replacing the processor temporarily with a hardware emulator: a more powerful although more expensive version. It historically took the form of a bond-out processor, which has many internal signals brought out for the purpose of debugging. These signals provide information about the state of the processor.
More recently the term also covers JTAG-based hardware debuggers which provide equivalent access using on-chip debugging hardware with standard production chips. Using standard chips instead of custom bond-out versions makes the technology ubiquitous and low cost, and eliminates most differences between the development and runtime environments. In this common case, the in-circuit emulator term is a misnomer, sometimes confusingly so, because emulation is no longer involved.
Embedded systems present special problems for programmers because they usually lack keyboards, monitors, disk drives and other user interfaces that are present on computers. These shortcomings make in-circuit software debugging tools essential for many common development tasks.
Function
An in-circuit emulator (ICE) provides a window into the embedded system. The programmer uses the emulator to load programs into the embedded system, run them, step through them slowly, and view and change data used by the system's software.
An emulator gets its name because it emulates (imitates) the central processing unit (CPU) of the embedded system's computer. Traditionally it had a plug that inserts into the socket where the CPU integrated circuit chip would normally be placed. Most modern systems use the target system's CPU directly, with special
|
https://en.wikipedia.org/wiki/Order%20tracking%20%28signal%20processing%29
|
In rotordynamics, order tracking is a family of signal processing tools aimed at transforming a measured signal from time domain to angular (or order) domain. These techniques are applied to asynchronously sampled signals (i.e. with a constant sample rate in Hertz) to obtain the same signal sampled at constant angular increments of a reference shaft. In some cases the outcome of the Order Tracking is directly the Fourier transform of such angular domain signal, whose frequency counterpart is defined as "order". Each order represents a fraction of the angular velocity of the reference shaft.
Order tracking is based on a velocity measurement, generally obtained by means of a tachometer or encoder, needed to estimate the instantaneous velocity and/or the angular position of the shaft.
Three main families of order tracking techniques have been developed in the past: Computed Order Tracking (COT), the Vold-Kalman Filter (VKF) and Order Tracking Transforms.
Order tracking refers to a signal processing technique used to extract the periodic content of a signal and track its frequency variations over time. This technique is often used in vibration analysis and monitoring of rotating machinery, such as engines, turbines, and pumps.
In order to track the order of a signal, the signal is first transformed into the frequency domain using techniques such as the Fourier transform. The resulting frequency spectrum shows the frequency content of the signal. From the frequency spectrum, it is possible to identify the dominant frequency components, which correspond to the various orders of the rotating machinery.
Once the orders are identified, a tracking algorithm is used to track the frequency variations of each order over time. This is done by comparing the frequency content of the signal at different time instants and identifying the shifts in the frequency components.
Computed order tracking
Computed order tracking is a resampling technique based on interpolation.
T
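The following sketch shows the basic interpolation idea behind computed order tracking under simplifying assumptions (one tachometer pulse per revolution, linear interpolation, synthetic run-up data); the names and parameters are illustrative, not taken from any specific implementation:

```python
import numpy as np

def resample_to_angle_domain(t, x, t_tacho, samples_per_rev=64):
    """Computed-order-tracking style resampling (illustrative sketch).

    t, x            -- time stamps and signal samples (constant rate in time)
    t_tacho         -- times of once-per-revolution tachometer pulses
    samples_per_rev -- desired constant number of samples per shaft revolution
    """
    # Relative shaft angle is known at each tacho pulse: 0, 2*pi, 4*pi, ...
    angle_at_pulse = 2 * np.pi * np.arange(len(t_tacho))
    # Invert the angle-vs-time relation: find the times at which the shaft
    # passes through uniformly spaced angles.
    target_angles = np.arange(0, angle_at_pulse[-1], 2 * np.pi / samples_per_rev)
    t_at_target = np.interp(target_angles, angle_at_pulse, t_tacho)
    # Finally, interpolate the measured signal at those (non-uniform) times.
    x_angle = np.interp(t_at_target, t, x)
    return target_angles, x_angle

# Example: a run-up where an "order 2" component tracks the shaft speed.
fs, T = 2000.0, 4.0
t = np.arange(0, T, 1 / fs)
speed_hz = 5 + 5 * t                      # shaft speed ramps from 5 Hz to 25 Hz
phase = 2 * np.pi * np.cumsum(speed_hz) / fs
x = np.sin(2 * phase)                     # second order of the shaft
pulse_idx = np.flatnonzero(np.diff(np.mod(phase, 2 * np.pi)) < 0)
angles, x_ang = resample_to_angle_domain(t, x, t[pulse_idx])
# In the angle domain, an FFT of x_ang would show a single peak at order 2.
```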
|
https://en.wikipedia.org/wiki/Network%20allocation%20vector
|
The network allocation vector (NAV) is a virtual carrier-sensing mechanism used with wireless network protocols such as IEEE 802.11 (Wi-Fi) and IEEE 802.16 (WiMax). The virtual carrier-sensing is a logical abstraction which limits the need for physical carrier-sensing at the air interface in order to save power. The MAC layer frame headers contain a duration field that specifies the transmission time required for the frame, in which time the medium will be busy. The stations listening on the wireless medium read the Duration field and set their NAV, which is an indicator for a station on how long it must defer from accessing the medium.
The NAV may be thought of as a counter, which counts down to zero at a uniform rate. When the counter is zero, the virtual carrier-sensing indication is that the medium is idle; when nonzero, the indication is busy. The medium shall be determined to be busy when the station (STA) is transmitting. In IEEE 802.11, the NAV represents the number of microseconds the sending STA intends to hold the medium busy (maximum of 32,767 microseconds). When the sender sends a Request to Send (RTS), the receiver waits one SIFS before sending a Clear to Send (CTS). The sender then waits one more SIFS before sending the data, and the receiver again waits one SIFS before sending the ACK. The NAV therefore spans the duration from the first SIFS to the end of the ACK, and during this time the medium is considered busy.
Wireless stations are often battery-powered, so to conserve power the stations may enter a power-saving mode. A station decrements its NAV counter until it becomes zero, at which time it is awakened to sense the medium again.
The NAV virtual carrier sensing mechanism is a prominent part of the CSMA/CA MAC protocol used with IEEE 802.11 WLANs. NAV is used in DCF, PCF and HCF.
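The duration arithmetic is simple enough to sketch; the timing constants below are assumed example values rather than figures from any particular 802.11 PHY:

```python
# Illustrative only: compute the duration value an RTS frame would carry, i.e.
# the time the medium stays reserved after the RTS ends
# (SIFS + CTS + SIFS + data + SIFS + ACK).
SIFS_US = 10          # short interframe space, microseconds (assumed)
CTS_US = 44           # time to transmit a CTS frame (assumed)
ACK_US = 44           # time to transmit an ACK frame (assumed)

def rts_nav_duration(data_tx_us):
    """Duration field of an RTS: everything after the RTS up to the end of the ACK."""
    return SIFS_US + CTS_US + SIFS_US + data_tx_us + SIFS_US + ACK_US

def update_nav(current_nav_us, duration_field_us):
    """A listening station only raises its NAV; it never shortens it."""
    return max(current_nav_us, duration_field_us)

print(rts_nav_duration(data_tx_us=1200))   # -> 1318 microseconds with these assumed timings
```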
|
https://en.wikipedia.org/wiki/Rimose
|
Rimose is an adjective used to describe a surface that is cracked or fissured.
The term is often used in describing crustose lichens. A rimose surface of a lichen is sometimes contrasted to the surface being areolate. Areolate is an extreme form of being rimose, where the cracks or fissures are so deep that they create island-like pieces called areoles, which look like the "islands" of mud on the surface of a dry lake bed. Rimose and areolate are contrasted with being verrucose, or "warty". Verrucose surfaces have warty bumps which are distinct, but not separated by cracks.
In mycology the term describes mushrooms whose caps crack in a radial pattern, as commonly found in the genera Inocybe and Inosperma.
|
https://en.wikipedia.org/wiki/Cutler%27s%20bar%20notation
|
In mathematics, Cutler's bar notation is a notation system for large numbers, introduced by Mark Cutler in 2004. The idea is based on iterated exponentiation in much the same way that exponentiation is iterated multiplication.
Introduction
A regular exponential can be expressed as such:
However, these expressions become arbitrarily large when dealing with systems such as Knuth's up-arrow notation. Take the following:
Cutler's bar notation shifts these exponentials counterclockwise, forming . A bar is placed above the variable to denote this change. As such:
This system becomes effective with multiple exponents, when regular denotation becomes too cumbersome.
At any time, this can be further shortened by rotating the exponential counterclockwise once more.
The same pattern could be iterated a fourth time, becoming . For this reason, it is sometimes referred to as Cutler's circular notation.
Advantages and drawbacks
The Cutler bar notation can be used to easily express other notation systems in exponent form. It also allows for a flexible summarization of multiple copies of the same exponents, where any number of stacked exponents can be shifted counterclockwise and shortened to a single variable. The bar notation also allows for fairly rapid composure of very large numbers. For instance, the number would contain more than a googolplex digits, while remaining fairly simple to write with and remember.
However, the system reaches a problem when dealing with different exponents in a single expression. For instance, the expression could not be summarized in bar notation. Additionally, the exponent can only be shifted thrice before it returns to its original position, making a five degree shift indistinguishable from a one degree shift. Some have suggested using a double and triple bar in subsequent rotations, though this presents problems when dealing with ten- and twenty-degree shifts.
Other equivalent notations for the same operations already exis
|
https://en.wikipedia.org/wiki/List%20of%20Foucault%20pendulums
|
This is a list of Foucault pendulums in the world:
Europe
Austria
Technisches Museum Wien, Vienna
St. Ruprecht an der Raab, Styria, erected in 2001 in a slim stainless steel pyramid, partially with glass windows; it is the first in the world to exist outside a closed building, on the street. - Length: 6.5 m, weight: 32 kg
Belarus
Belarus State Pedagogic University, Minsk
Belgium
Volkssterrenwacht Mira, Grimbergen
Technopolis, Mechelen
Festraetsstudio, Sint-Truiden
UGent-volkssterrenwacht Armand Pien Ghent
Bulgaria
Public Astronomical Observatory and Planetarium "Nicolaus Copernicus", Varna - Length: 14.4 m
Czech Republic
Observatory and Planetarium Hradec Králové, Hradec Králové - Length: 10 m, weight: 8.5 kg
Czech Technical University, Prague - Length: 21 m, weight: 34 kg
Rotunda in Castle Flower Garden, Kroměříž - Length: 25 m, weight: 30 kg
Denmark
Steno Museet, Aarhus
Odense Technical College, Odense
Geocenter, Faculty of Science, University of Copenhagen - Length 25 m, weight: 145 kg
Estonia
Department of Physics, University of Tartu
Finland
Department of Physics, University of Turku, Turku
Eurajoki - Length: 40 m, weight: 110 kg
Finnish Science Centre Heureka, Vantaa
The watertower of Kuusamo
France
Germany
Jahrtausendturm, Magdeburg
Gymnasium Lünen-Altlünen, Lünen
Gymnasium Verl, Verl
German Museum of Technology, Berlin
University of Bremen
University of Heidelberg
Helmholtz-Gymnasium Heidelberg
Hochschule für Angewandte Wissenschaften Hamburg, Hamburg
School for Business and Technique, Mainz
Deutsches Museum, Munich - Length: 30 m, weight: 30 kg
University of Munich, Geophysics – Department of Earth and Environmental Sciences, 20 m, 12 kg, live webcam, description
Münster, 48 kg, 29 m, with mirrors, Zwei Graue Doppelspiegel für ein Pendel by artist Gerhard Richter in a former church, opened 17 June 2018
University of Osnabrück, Osnabrück, Lower Saxony - Length: 19.5 m, weight: 70 kg
Gymnasium of the city Lennestadt, N
|
https://en.wikipedia.org/wiki/Pure%20mathematics
|
Pure mathematics is the study of mathematical concepts independently of any application outside mathematics. These concepts may originate in real-world concerns, and the results obtained may later turn out to be useful for practical applications, but pure mathematicians are not primarily motivated by such applications. Instead, the appeal is attributed to the intellectual challenge and aesthetic beauty of working out the logical consequences of basic principles.
While pure mathematics has existed as an activity since at least ancient Greece, the concept was elaborated upon around the year 1900, after the introduction of theories with counter-intuitive properties (such as non-Euclidean geometries and Cantor's theory of infinite sets), and the discovery of apparent paradoxes (such as continuous functions that are nowhere differentiable, and Russell's paradox). This introduced the need to renew the concept of mathematical rigor and rewrite all mathematics accordingly, with a systematic use of axiomatic methods. This led many mathematicians to focus on mathematics for its own sake, that is, pure mathematics.
Nevertheless, almost all mathematical theories remained motivated by problems coming from the real world or from less abstract mathematical theories. Also, many mathematical theories, which had seemed to be totally pure mathematics, were eventually used in applied areas, mainly physics and computer science. A famous early example is Isaac Newton's demonstration that his law of universal gravitation implied that planets move in orbits that are conic sections, geometrical curves that had been studied in antiquity by Apollonius. Another example is the problem of factoring large integers, which is the basis of the RSA cryptosystem, widely used to secure internet communications.
It follows that, presently, the distinction between pure and applied mathematics is more a philosophical point of view or a mathematician's preference rather than a rigid subdivision of mathem
|
https://en.wikipedia.org/wiki/Classification%20of%20low-dimensional%20real%20Lie%20algebras
|
This mathematics-related list provides Mubarakzyanov's classification of low-dimensional real Lie algebras, published in Russian in 1963. It complements the article on Lie algebra in the area of abstract algebra.
An English version and review of this classification was published by Popovych et al. in 2003.
Mubarakzyanov's Classification
Let g_n be an n-dimensional Lie algebra over the field of real numbers,
with generators e_1, ..., e_n. For each algebra we adduce only non-zero commutators between basis elements.
One-dimensional
, abelian.
Two-dimensional
, abelian ;
, solvable ,
Three-dimensional
, abelian, Bianchi I;
, decomposable solvable, Bianchi III;
, Heisenberg–Weyl algebra, nilpotent, Bianchi II,
, solvable, Bianchi IV,
, solvable, Bianchi V,
, solvable, Bianchi VI, Poincaré algebra when ,
, solvable, Bianchi VII,
, simple, Bianchi VIII,
, simple, Bianchi IX,
Algebra can be considered as an extreme case of , when , forming contraction of Lie algebra.
Over the field algebras , are isomorphic to and , respectively.
Four-dimensional
, abelian;
, decomposable solvable,
, decomposable solvable,
, decomposable nilpotent,
, decomposable solvable,
, decomposable solvable,
, decomposable solvable,
, decomposable solvable,
, unsolvable,
, unsolvable,
, indecomposable nilpotent,
, indecomposable solvable,
, indecomposable solvable,
, indecomposable solvable,
, indecomposable solvable,
, indecomposable solvable,
, indecomposable solvable,
, indecomposable solvable,
, indecomposable solvable,
, indecomposable solvable,
Algebra can be considered as an extreme case of , when , forming contraction of Lie algebra.
Over the field algebras , , , , are isomorphic to , , , , , respectively.
See also
Table of Lie groups
Simple Lie group#Full classification
Notes
|
https://en.wikipedia.org/wiki/Almost%20surely
|
In probability theory, an event is said to happen almost surely (sometimes abbreviated as a.s.) if it happens with probability 1 (or Lebesgue measure 1). In other words, the set of possible exceptions may be non-empty, but it has probability 0. The concept is analogous to the concept of "almost everywhere" in measure theory. In probability experiments on a finite sample space with a non-zero probability for each outcome, there is no difference between almost surely and surely (since having a probability of 1 entails including all the sample points); however, this distinction becomes important when the sample space is an infinite set, because an infinite set can have non-empty subsets of probability 0.
Some examples of the use of this concept include the strong and uniform versions of the law of large numbers, the continuity of the paths of Brownian motion, and the infinite monkey theorem. The terms almost certainly (a.c.) and almost always (a.a.) are also used. Almost never describes the opposite of almost surely: an event that happens with probability zero happens almost never.
Formal definition
Let (Ω, F, P) be a probability space. An event E ∈ F happens almost surely if P(E) = 1. Equivalently, E happens almost surely if the probability of E not occurring is zero: P(E^c) = 0. More generally, any event E (not necessarily in F) happens almost surely if E^c is contained in a null set: a subset N ∈ F such that P(N) = 0. The notion of almost sureness depends on the probability measure P. If it is necessary to emphasize this dependence, it is customary to say that the event E occurs P-almost surely, or almost surely (P).
Illustrative examples
In general, an event can happen "almost surely", even if the probability space in question includes outcomes which do not belong to the event—as the following examples illustrate.
Throwing a dart
Imagine throwing a dart at a unit square (a square with an area of 1) so that the dart always hits an exact point in the square, in such a way that each point in the square is equally lik
|
https://en.wikipedia.org/wiki/Embedded%20hypervisor
|
An embedded hypervisor is a hypervisor that supports the requirements of embedded systems.
The requirements for an embedded hypervisor are distinct from hypervisors targeting server and desktop applications.
An embedded hypervisor is designed into the embedded device from the outset, rather than loaded subsequent to device deployment.
While desktop and enterprise environments use hypervisors to consolidate hardware and isolate computing environments from one another, in an embedded system, the various components typically function collectively to provide the device's functionality. Mobile virtualization overlaps with embedded system virtualization, and shares some use cases.
Typical attributes of embedded virtualization include efficiency, security, communication, isolation and real-time capabilities.
Background
Software virtualization has been a major topic in the enterprise space since the late 1960s, but only since the early 2000s has its use appeared in embedded systems. The use of virtualization and its implementation in the form of a hypervisor in embedded systems are very different from enterprise applications. An effective implementation of an embedded hypervisor must deal with a number of issues specific to such applications. These issues include the highly integrated nature of embedded systems, the requirement for isolated functional blocks within the system to communicate rapidly, the need for real-time/deterministic performance, the resource-constrained target environment and the wide range of security and reliability requirements.
Hypervisor
A hypervisor provides one or more software virtualization environments in which other software, including operating systems, can run with the appearance of full access to the underlying system hardware, where in fact such access is under the complete control of the hypervisor. These virtual environments are called virtual machines (VM)s, and a hypervisor will typically support multiple VMs managed simultane
|
https://en.wikipedia.org/wiki/Covariance%20group
|
In physics, a covariance group is a group of coordinate transformations between frames of reference (see for example Ryckman (2005)). A frame of reference provides a set of coordinates for an observer moving with that frame to make measurements and define physical quantities. The covariance principle states the laws of physics should transform from one frame to another covariantly, that is, according to a representation of the covariance group.
Special relativity considers observers in inertial frames, and the covariance group consists of rotations, velocity boosts, and the parity transformation. It is denoted as O(1,3) and is often referred to as the Lorentz group.
For example, the Maxwell equation with sources, ∂_μ F^{μν} = J^ν,
transforms as a four-vector, that is, under the (1/2,1/2) representation of the O(1,3) group.
The Dirac equation, (i γ^μ ∂_μ − m) ψ = 0,
transforms as a bispinor, that is, under the (1/2,0)⊕(0,1/2) representation of the O(1,3) group.
The covariance principle, unlike the relativity principle, does not imply that the equations are invariant under transformations from the covariance group. In practice the equations for electromagnetic and strong interactions are invariant, while the weak interaction is not invariant under the parity transformation. For example, the Maxwell equation is invariant, while the corresponding equation for the weak field explicitly contains left currents and thus is not invariant under the parity transformation.
In general relativity the covariance group consists of all arbitrary (invertible and differentiable) coordinate transformations.
See also
Manifestly covariant
Relativistic wave equations
Representation theory of the Lorentz group
Notes
|
https://en.wikipedia.org/wiki/Numbers%20%28TV%20series%29
|
Numbers (stylized as NUMB3RS) is an American crime drama television series that was broadcast on CBS from January 23, 2005, to March 12, 2010, for six seasons and 118 episodes. The series was created by Nicolas Falacci and Cheryl Heuton, and follows FBI Special Agent Don Eppes (Rob Morrow) and his brother Charlie Eppes (David Krumholtz), a college mathematics professor and prodigy, who helps Don solve crimes for the FBI. Brothers Ridley and Tony Scott produced Numbers; its production companies are the Scott brothers' Scott Free Productions and CBS Television Studios (originally Paramount Network Television, and later CBS Paramount Network Television).
The show focuses equally on the relationships among Don Eppes, his brother Charlie Eppes, and their father, Alan Eppes (Judd Hirsch), and on the brothers' efforts to fight crime, usually in Los Angeles. A typical episode begins with a crime, which is subsequently investigated by a team of FBI agents led by Don and mathematically modeled by Charlie, with the help of Larry Fleinhardt (Peter MacNicol) and Amita Ramanujan (Navi Rawat). The insights provided by Charlie's mathematics were always in some way crucial to solving the crime.
On May 18, 2010, CBS canceled the series after six seasons.
Cast and characters
The show revolved around three intersecting groups of characters: the FBI, scientists at the fictitious California Institute of Science (CalSci), and the Eppes family.
Don Eppes (Rob Morrow), Charlie's older brother, is the lead FBI agent at the Los Angeles Violent Crimes Squad.
Professor Charlie Eppes (David Krumholtz) is a mathematical genius, who in addition to teaching at CalSci, consults for the FBI and NSA.
Alan Eppes (Judd Hirsch) is a former L.A. city planner, a widower, and the father of both Charlie and Don Eppes. Alan lives in a historic two-story California bungalow furnished with period Arts and Crafts furniture.
David Sinclair (Alimi Ballard) is an FBI field agent and was later made Don's se
|
https://en.wikipedia.org/wiki/National%20Association%20of%20Biology%20Teachers
|
The National Association of Biology Teachers (NABT) is an incorporated association of biology educators in the United States. It was initially founded in response to the poor understanding of biology and the decline in the teaching of the subject in the 1930s. It has grown to become a national representative organisation which promotes the teaching of biology, supports the learning of biology based on scientific principles and advocates for biology within American society. The National Conference and the journal, The American Biology Teacher, are two mechanisms used to achieve those goals.
The NABT has also been an advocate for the teaching of evolution in the debate about creation and evolution in public education in the United States, playing a role in a number of court cases and hearings throughout the country.
History
The NABT was formed in 1938 in New York City. The journal of the organisation (The American Biology Teacher) was created in the same year.
In 1944, Helen Trowbridge, the first female president, was elected. The Outstanding Teacher Awards were first presented in 1960 and the first independent National Convention was held in 1968.
The seventies marked an era of activism in the teaching of evolution with legal action against a state code amendment in Tennessee which required equal amounts of time to teach evolution and creationism.
In 1987 NABT helped develop the first National High School Biology test which established a list of nine core principles in the teaching of biology.
In the year 2005, NABT was involved in the Kitzmiller v. Dover Area School District case which established the principle that Intelligent Design had no place in the Science Curriculum.
2017 was the Year of the March for Science, which the NABT endorsed, and in 2018, it held its annual four-day conference in San Diego, California.
Purpose
The purpose of the NABT is to "empower educators to provide the best possible biology and life science education for all students". The org
|
https://en.wikipedia.org/wiki/Email%20art
|
Email art refers to artwork created for the medium of email. It includes computer graphics, animations, screensavers, digital scans of artwork in other media, and even ASCII art. When exhibited, Email art can be either displayed on a computer screen or similar type of display device, or the work can be printed out and displayed.
Email art is an evolution of the networking Mail Art movement and began during the early 1990s. Chuck Welch, also known as Cracker Jack Kid, connected with early online artists and created a net-worker telenetlink. The historical evolution of the term "Email art" is documented in Chuck Welch's Eternal Network: A Mail Art Anthology published and edited by University of Calgary Press.
By the end of the 1990s, many mailartists, aware of increasing postal rates and cheaper internet access, were beginning the gradual migration of collective art projects towards the web and new, inexpensive forms of digital communication. The Internet facilitated faster dissemination of Mail Art calls (invitations), Mail Art blogs and websites have become commonly used to display contributions and online documentation, and an increasing number of projects include an invitation to submit Email art digitally, either as the preferred channel or as an alternative to sending contributions by post.
In 2006, Ramzi Turki received an e-mail containing a scanned work by the Belgian artist Luc Fierens, so he sent this picture to about 7,000 artists' e-mail addresses, seeking their interaction, and received about 200 contributions and answers.
See also
Cyberculture
Digital art
Fax art
Internet art
Mail art
|
https://en.wikipedia.org/wiki/Feller%E2%80%93Tornier%20constant
|
In mathematics, the Feller–Tornier constant C_FT is the density of the set of all positive integers that have an even number of distinct prime factors raised to a power larger than one (ignoring any prime factors which appear only to the first power).
It is named after William Feller (1906–1970) and Erhard Tornier (1894–1982).
Omega function
The Big Omega function Ω(n) gives the number of prime factors of n, counted with multiplicity.
See also: Prime omega function.
The Iverson bracket is [P] = 1 if the statement P is true, and 0 otherwise.
With these notations, we have
Prime zeta function
The prime zeta function P is given by P(s) = Σ_p p^(−s), where the sum runs over all prime numbers p.
The Feller–Tornier constant satisfies
C_FT = 1/2 + (1/2) ∏_p (1 − 2/p^2) = 1/2 + (1/2) exp(−Σ_{n=1}^∞ 2^n P(2n)/n) ≈ 0.6613.
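A small numerical check of the Euler-product expression above (a sketch; the sieve and the cutoff are my own choices):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def feller_tornier(limit=100_000):
    """Approximate C_FT = 1/2 + (1/2) * prod over primes p of (1 - 2/p**2)."""
    prod = 1.0
    for p in primes_up_to(limit):
        prod *= 1.0 - 2.0 / (p * p)
    return 0.5 + 0.5 * prod

print(feller_tornier())   # about 0.6613
```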
See also
Riemann zeta function
L-function
Euler product
Twin prime
|
https://en.wikipedia.org/wiki/Content%20delivery%20network%20interconnection
|
Content delivery network interconnection (CDNI) is a set of interfaces and mechanisms required for interconnecting two independent content delivery networks (CDNs) that enables one to deliver content on behalf of the other. Interconnected CDNs offer many benefits, such as footprint extension, reduced infrastructure costs, higher availability, etc., for content service providers (CSPs), CDNs, and end users. Among its many use cases, it allows small CDNs to interconnect and provides services for CSPs that allows them to compete against the CDNs of global CSPs.
Rationale
Thanks to the many benefits of CDNs, e.g. reduced delivery cost, improved quality of experience (QoE), and increased robustness of delivery, CDNs have become popular for large-scale content delivery of cacheable content. For this reason, CDN providers are scaling up their infrastructure and many Internet service providers (ISPs)/network service providers (NSPs) have deployed or are deploying their own CDNs for their own use or for lease, if a business and technical arrangement between them and a CDN provider were made. Those stand-alone CDNs with well-defined request routing, delivery, acquisition, accounting systems and protocols may sooner or later face either footprint, resource or capability limits. The CDNI targets at leveraging separate CDNs to provide end-to-end delivery of content from CSPs to end users, regardless of their location or attachment network.
Example of operation
Let's consider an interconnection of two CDNs as presented in the figure below. ISP-A deploys an authoritative upstream CDN (uCDN), and it has established a technical and business arrangement with the CSP. Because CDN-A is authorised to serve on behalf of the CSP, a user in the network of ISP-B requests content from CDN-A (1). The uCDN can either serve the request itself or redirect it to a downstream CDN (dCDN) if, for example, the dCDN is closer to the user equipment (UE). If the request is redirected, the inter
|
https://en.wikipedia.org/wiki/Aseptic%20sampling
|
Aseptic sampling is the process of aseptically withdrawing materials used in biopharmaceutical processes for analysis so as not to contaminate or alter the sample or the source of the sample. Aseptic samples are drawn throughout the entire biopharmaceutical process (cell culture/fermentation, buffer & media prep, purification, final fill and finish). Analysis of the sample includes sterility, cell count/cell viability, metabolites, gases, osmolality and more.
Aseptic sampling techniques
Biopharmaceutical drug manufacturers widely use aseptic sampling devices to enhance aseptic technique. The latest innovations of sampling devices harmonize with emerging trends in disposability, enhance operating efficiencies and improve operator safety.
Turn-key aseptic sampling devices
Turn-key Aseptic Sampling Devices are ready-to-use sampling devices that require little or no equipment preparation by the users. Turn-key devices help managers reduce labor costs, estimated to represent 75% to 80% of the cost of running a biotech facility.
Turn-key aseptic sampling devices include:
A means to connect the device to the bioprocess equipment
A mechanism to aseptically access the materials held in the bioprocess equipment
A means to aseptically transfer the sample out of the bioprocess equipment
A vessel or container to aseptically collect the sample
A mechanism to aseptically disconnect the collection vessel
To protect the integrity of the sample and to ensure it is truly representative of the time the sample is taken, the sampling pathway should be fully contained and independent of other sampling pathways.
Cannula(needle) based aseptic sampling devices
In a cannula-based aseptic sampling system, a needle penetrates an elastomeric septum. The septum is in direct contact with the liquid so that the liquid flows out of the equipment through the needle. Iterations of this technique are used in medical device industries but don't usually include equipment combining the needle an
|
https://en.wikipedia.org/wiki/Geophysical%20MASINT
|
Geophysical MASINT is a branch of Measurement and Signature Intelligence (MASINT) that involves phenomena transmitted through the earth (ground, water, atmosphere) and manmade structures including emitted or reflected sounds, pressure waves, vibrations, and magnetic field or ionosphere disturbances.
According to the United States Department of Defense, MASINT is technically derived intelligence (excluding traditional imagery IMINT and signals intelligence SIGINT) that – when collected, processed, and analyzed by dedicated MASINT systems – results in intelligence that detects, tracks, identifies or describes the signatures (distinctive characteristics) of fixed or dynamic target sources. MASINT was recognized as a formal intelligence discipline in 1986. Another way to describe MASINT is as a "non-literal" discipline. It feeds on a target's unintended emissive by-products, the "trails" - the spectral, chemical or RF emissions that an object leaves behind. These trails form distinct signatures, which can be exploited as reliable discriminators to characterize specific events or disclose hidden targets.
As with many branches of MASINT, specific techniques may overlap with the six major conceptual disciplines of MASINT defined by the Center for MASINT Studies and Research, which divides MASINT into Electro-optical, Nuclear, Geophysical, Radar, Materials, and Radiofrequency disciplines.
Military requirements
Geophysical sensors have a long history in conventional military and commercial applications, from weather prediction for sailing, to fish finding for commercial fisheries, to nuclear test ban verification. New challenges, however, keep emerging.
For first-world military forces opposing other conventional militaries, there is an assumption that if a target can be located, it can be destroyed. As a result, concealment and deception have taken on new criticality. "Stealth" low-observability aircraft have gotten much attention, and new surface ship designs feature observabili
|
https://en.wikipedia.org/wiki/Security%20hacker
|
A security hacker is someone who explores methods for breaching defenses and exploiting weaknesses in a computer system or network. Hackers may be motivated by a multitude of reasons, such as profit, protest, information gathering, challenge, recreation, or evaluation of system weaknesses to assist in formulating defenses against potential hackers.
Longstanding controversy surrounds the meaning of the term "hacker." In this controversy, computer programmers reclaim the term hacker, arguing that it refers simply to someone with an advanced understanding of computers and computer networks, and that cracker is the more appropriate term for those who break into computers, whether computer criminals (black hats) or computer security experts (white hats). A 2014 article noted that "the black-hat meaning still prevails among the general public". The subculture that has evolved around hackers is often referred to as the "computer underground".
History
Birth of subculture and entering mainstream: 1960s-1980s
The subculture around such hackers is termed network hacker subculture, hacker scene, or computer underground. It initially developed in the context of phreaking during the 1960s and the microcomputer BBS scene of the 1980s. It is implicated with 2600: The Hacker Quarterly and the alt.2600 newsgroup.
In 1980, an article in the August issue of Psychology Today (with commentary by Philip Zimbardo) used the term "hacker" in its title: "The Hacker Papers." It was an excerpt from a Stanford Bulletin Board discussion on the addictive nature of computer use. In the 1982 film Tron, Kevin Flynn (Jeff Bridges) describes his intentions to break into ENCOM's computer system, saying "I've been doing a little hacking here." CLU is the software he uses for this. By 1983, hacking in the sense of breaking computer security had already been in use as computer jargon, but there was no public awareness about such activities. However, the release of the film WarGames that year, featuri
|
https://en.wikipedia.org/wiki/Iron%20in%20biology
|
Iron is an important biological element. It is used both in the ubiquitous iron–sulfur proteins and, in vertebrates, in hemoglobin, which is essential for blood and oxygen transport.
Overview
Iron is required for life. The iron–sulfur clusters are pervasive and include nitrogenase, the enzymes responsible for biological nitrogen fixation. Iron-containing proteins participate in the transport, storage and use of oxygen. Iron proteins are involved in electron transfer. The ubiquity of iron in life has led to the iron–sulfur world hypothesis that iron was a central component of the environment of early life.
Examples of iron-containing proteins in higher organisms include hemoglobin, cytochrome (see high-valent iron), and catalase. The average adult human contains about 0.005% body weight of iron, or about four grams, of which three quarters is in hemoglobin – a level that remains constant despite only about one milligram of iron being absorbed each day, because the human body recycles its hemoglobin for the iron content.
Microbial growth may be assisted by oxidation of iron(II) or by reduction of iron(III).
Biochemistry
Iron acquisition poses a problem for aerobic organisms because ferric iron is poorly soluble near neutral pH. Thus, these organisms have developed means to absorb iron as complexes, sometimes taking up ferrous iron before oxidising it back to ferric iron. In particular, bacteria have evolved very high-affinity sequestering agents called siderophores.
After uptake in human cells, iron storage is precisely regulated. A major component of this regulation is the protein transferrin, which binds iron ions absorbed from the duodenum and carries it in the blood to cells. Transferrin contains Fe3+ in the middle of a distorted octahedron, bonded to one nitrogen, three oxygens and a chelating carbonate anion that traps the Fe3+ ion: it has such a high stability constant that it is very effective at taking up Fe3+ ions even from the most stable comple
|
https://en.wikipedia.org/wiki/Flash%20memory%20controller
|
A flash memory controller (or flash controller) manages data stored on flash memory (usually NAND flash) and communicates with a computer or electronic device. Flash memory controllers can be designed for operating in low duty-cycle environments like memory cards, or other similar media for use in PDAs, mobile phones, etc. USB flash drives use flash memory controllers designed to communicate with personal computers through the USB port at a low duty-cycle. Flash controllers can also be designed for higher duty-cycle environments like solid-state drives (SSD) used as data storage for laptop computer systems up to mission-critical enterprise storage arrays.
Initial setup
After a flash storage device is initially manufactured, the flash controller is first used to format the flash memory. This ensures the device is operating properly, it maps out bad flash memory cells, and it allocates spare cells to be substituted for future failed cells. Some part of the spare cells is also used to hold the firmware which operates the controller and other special features for a particular storage device. A directory structure is created to allow the controller to convert requests for logical sectors into the physical locations on the actual flash memory chips.
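As a rough illustration of that directory structure, the sketch below (Python; the class and field names are hypothetical, and real controllers track far more state such as wear counters and spare-area metadata) shows a logical-to-physical map being consulted on reads and updated on writes.

```python
# Minimal, illustrative flash-translation sketch: a dictionary maps logical
# sectors to physical blocks, and bad blocks are excluded from the free pool.
class FlashTranslationLayer:
    def __init__(self, num_physical_blocks, bad_blocks=()):
        self.l2p = {}                                   # logical sector -> physical block
        self.free = [b for b in range(num_physical_blocks)
                     if b not in set(bad_blocks)]       # spare/good blocks only

    def write(self, logical_sector, data):
        phys = self.free.pop(0)                         # pick a free physical block
        # ... program `data` into block `phys` on the flash die here ...
        self.l2p[logical_sector] = phys                 # update the directory structure

    def read(self, logical_sector):
        phys = self.l2p[logical_sector]                 # translate logical -> physical
        # ... read the page(s) of block `phys` from the flash die here ...
        return phys

ftl = FlashTranslationLayer(num_physical_blocks=8, bad_blocks=(3,))
ftl.write(0, b"sector zero")
print(ftl.read(0))    # physical block chosen for logical sector 0
```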
Reading, writing, and erasing
When the system or device needs to read data from or write data to the flash memory, it will communicate with the flash memory controller. Simpler devices like SD cards and USB flash drives typically have a small number of flash memory die connected simultaneously. Operations are limited to the speed of the individual flash memory die. In contrast, a high-performance solid-state drive will have more dies organized with parallel communication paths to enable speeds many times greater than that of a single flash die.
Wear-leveling and block picking
Flash memory can withstand a limited number of program-erase cycles. If a particular flash memory block were programmed and erased repeatedly withou
|
https://en.wikipedia.org/wiki/Comparison%20of%20CPU%20microarchitectures
|
The following is a comparison of CPU microarchitectures.
See also
Processor design
Comparison of instruction set architectures
Notes
|
https://en.wikipedia.org/wiki/Decorrelation
|
Decorrelation is a general term for any process that is used to reduce autocorrelation within a signal, or cross-correlation within a set of signals, while preserving other aspects of the signal. A frequently used method of decorrelation is the use of a matched linear filter to reduce the autocorrelation of a signal as far as possible. Since the minimum possible autocorrelation for a given signal energy is achieved by equalising the power spectrum of the signal to be similar to that of a white noise signal, this is often referred to as signal whitening.
Process
Although most decorrelation algorithms are linear, non-linear decorrelation algorithms also exist.
Many data compression algorithms incorporate a decorrelation stage. For example, many transform coders first apply a fixed linear transformation that would, on average, have the effect of decorrelating a typical signal of the class to be coded, prior to any later processing. This is typically a Karhunen–Loève transform, or a simplified approximation such as the discrete cosine transform.
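The numerical sketch below (Python/NumPy, not tied to any particular codec) illustrates the idea: projecting the data onto the eigenvectors of its covariance matrix decorrelates the components, and additionally dividing by the square roots of the eigenvalues whitens them.

```python
# A minimal Karhunen-Loeve-style decorrelation of two correlated signals.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 1000))
x[1] += 0.8 * x[0]                       # introduce cross-correlation

cov = np.cov(x)                          # empirical covariance matrix
eigval, eigvec = np.linalg.eigh(cov)
y = eigvec.T @ (x - x.mean(axis=1, keepdims=True))   # decorrelated components
w = y / np.sqrt(eigval[:, None])                     # whitened (unit variance)

print(np.round(np.cov(y), 3))            # approximately diagonal
print(np.round(np.cov(w), 3))            # approximately the identity
```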
By comparison, sub-band coders do not generally have an explicit decorrelation step, but instead exploit the already-existing reduced correlation within each of the sub-bands of the signal, due to the relative flatness of each sub-band of the power spectrum in many classes of signals.
Linear predictive coders can be modelled as an attempt to decorrelate signals by subtracting the best possible linear prediction from the input signal, leaving a whitened residual signal.
Decorrelation techniques can also be used for many other purposes, such as reducing crosstalk in a multi-channel signal, or in the design of echo cancellers.
In image processing, decorrelation techniques can be used to enhance or stretch the colour differences found in each pixel of an image. This is generally termed 'decorrelation stretching'.
The concept of decorrelation can be applied in many other fields.
In neuroscience, decorrelation is used in the an
|
https://en.wikipedia.org/wiki/Mutualism%20%28biology%29
|
Mutualism describes the ecological interaction between two or more species where each species has a net benefit. Mutualism is a common type of ecological interaction. Prominent examples include most vascular plants engaged in mutualistic interactions with mycorrhizae, flowering plants being pollinated by animals, vascular plants being dispersed by animals, and corals with zooxanthellae, among many others. Mutualism can be contrasted with interspecific competition, in which each species experiences reduced fitness, and exploitation, or parasitism, in which one species benefits at the expense of the other.
The term mutualism was introduced by Pierre-Joseph van Beneden in his 1876 book Animal Parasites and Messmates to mean "mutual aid among species".
Mutualism is often conflated with two other types of ecological phenomena: cooperation and symbiosis. Cooperation most commonly refers to increases in fitness through within-species (intraspecific) interactions, although it has been used (especially in the past) to refer to mutualistic interactions, and it is sometimes used to refer to mutualistic interactions that are not obligate. Symbiosis involves two species living in close physical contact over a long period of their existence and may be mutualistic, parasitic, or commensal, so symbiotic relationships are not always mutualistic, and mutualistic interactions are not always symbiotic. Although mutualism and symbiosis are defined differently, the two terms have largely been used interchangeably in the past, and confusion over their use has persisted.
Mutualism plays a key part in ecology and evolution. For example, mutualistic interactions are vital for terrestrial ecosystem function as about 80% of land plants species rely on mycorrhizal relationships with fungi to provide them with inorganic compounds and trace elements. As another example, the estimate of tropical rainforest plants with seed dispersal mutualisms with animals ranges
|
https://en.wikipedia.org/wiki/Three-domain%20system
|
The three-domain system is a biological classification introduced by Carl Woese, Otto Kandler, and Mark Wheelis in 1990 that divides cellular life forms into three domains, namely Archaea, Bacteria, and Eukarya. The key difference from earlier classifications such as the two-empire system and the five-kingdom classification is the splitting of Archaea from Bacteria as completely different organisms. It has been challenged by the two-domain system that divides organisms into Bacteria and Archaea only, as eukaryotes are regarded as a group nested within the Archaea.
Background
Woese argued, on the basis of differences in 16S rRNA genes, that bacteria, archaea, and eukaryotes each arose separately from an ancestor with poorly developed genetic machinery, often called a progenote. To reflect these primary lines of descent, he treated each as a domain, divided into several different kingdoms. Originally his split of the prokaryotes was into Eubacteria (now Bacteria) and Archaebacteria (now Archaea). Woese initially used the term "kingdom" to refer to the three primary phylogenic groupings, and this nomenclature was widely used until the term "domain" was adopted in 1990.
Acceptance of Woese's phylogenetically based classification was a slow process. Prominent biologists including Salvador Luria and Ernst Mayr objected to his division of the prokaryotes. Not all criticism of him was restricted to the scientific level. A decade of labor-intensive oligonucleotide cataloging left him with a reputation as "a crank", and Woese would go on to be dubbed "Microbiology's Scarred Revolutionary" by a news article printed in the journal Science in 1997. The growing amount of supporting data led the scientific community to accept the Archaea by the mid-1980s. Today, very few scientists still accept the concept of a unified Prokarya.
Classification
The three-domain system adds a level of classification (the domains) "above" the kingdoms present in the previously used five- or
|
https://en.wikipedia.org/wiki/Radisys
|
Radisys Corporation is an American technology company located in Hillsboro, Oregon, United States that makes technology used by telecommunications companies in mobile networks. Founded in 1987 in Oregon by former employees of Intel, the company went public in 1995. The company's products are used in mobile network applications such as small cell radio access networks, wireless core network elements, deep packet inspection and policy management equipment; conferencing, and media services including voice, video and data. In 2015, Radisys' first-quarter revenues totaled $48.7 million, and the company employed approximately 700 people. Arun Bhikshesvaran is the company's chief executive officer.
On 30 June 2018, multinational conglomerate Reliance Industries acquired Radisys for $74 million.
It now operates as an independent subsidiary.
History
Radisys was founded in 1987 as Radix Microsystems in Beaverton, Oregon, by former Intel engineers Dave Budde and Glen Myers. The first investors were employees who put up $50,000 each, with Tektronix later investing additional funds into the company. Originally located in space leased from Sequent Computer Systems, by 1994 the company had grown to annual sales of $20 million. The company's products were computers used in end products such as automated teller machines to paint mixers. On October 20, 1995, the company became a publicly traded company when it held an initial public offering (IPO). The IPO raised $19.6 million for Radisys after selling 2.7 million shares at $12 per share.
In 1996, the company moved its headquarters to a new campus in Hillsboro, and at that time sales reached $80 million and the company had a profit of $9.6 million that year with 175 employees. Company co-founder Dave Budde left the company in 1997, with company revenues at $81 million annually at that time. The company grew in part by acquisitions such as Sonitech International in 1997, part of IBM's Open Computing Platform unit and Texas Micro in 1999
|
https://en.wikipedia.org/wiki/CMOS%20amplifier
|
CMOS amplifiers (complementary metal–oxide–semiconductor amplifiers) are ubiquitous analog circuits used in computers, audio systems, smartphones, cameras, telecommunication systems, biomedical circuits, and many other systems. Their performance impacts the overall specifications of the systems. They take their name from the use of MOSFETs (metal–oxide–semiconductor field-effect transistors) as opposite to bipolar junction transistors (BJTs). MOSFETs are simpler to fabricate and therefore less expensive than BJT amplifiers, still providing a sufficiently high transconductance to allow the design of very high performance circuits. In high performance CMOS (complementary metal–oxide–semiconductor) amplifier circuits, transistors are not only used to amplify the signal but are also used as active loads to achieve higher gain and output swing in comparison with resistive loads.
CMOS technology was introduced primarily for digital circuit design. In the last few decades, to improve speed, power consumption, required area, and other aspects of digital integrated circuits (ICs), the feature size of MOSFET transistors has shrunk (the minimum channel length of transistors is reduced in newer CMOS technologies). This phenomenon, predicted by Gordon Moore in 1975 and known as Moore's law, states that the number of transistors on the same silicon area of an IC doubles roughly every two years. Progress in memory circuit design is an interesting example of how process advancement has affected required size and performance over the last decades. In 1956, a 5 MB hard disk drive (HDD) weighed over a ton, while today a drive with 50,000 times more capacity weighing several tens of grams is very common.
While digital ICs have benefited from the feature size shrinking, analog CMOS amplifiers have not gained corresponding advantages due to the intrinsic limitations of an analog design—such as the intrinsic gain reduction of short channel transistors, which affects th
|
https://en.wikipedia.org/wiki/Zermelo%27s%20theorem%20%28game%20theory%29
|
In game theory, Zermelo's theorem is a theorem about finite two-person games of perfect information in which the players move alternately and in which chance does not affect the decision making process. It says that if the game cannot end in a draw, then one of the two players must have a winning strategy (i.e. can force a win). An alternative statement is that, for a game meeting all of these conditions except that a draw is now possible, either the first player can force a win, or the second player can force a win, or both players can at least force a draw.
The theorem is named after Ernst Zermelo, a German mathematician and logician, who proved the theorem for the example game of chess in 1913.
Example
Zermelo's Theorem can be applied to all finite-stage two-player games with complete information and alternating moves. The game must satisfy the following criteria: there are two players in the game; the game is of perfect information; the board game is finite; the two players can take alternate turns; and there is no chance element present. Zermelo stated that there are many games of this type; however, his theorem has been applied mostly to the game of chess.
When applied to chess, Zermelo's Theorem states "either White can force a win, or Black can force a win, or both sides can force at least a draw".
Zermelo's algorithm is a cornerstone algorithm in game theory; however, it can also be applied in areas outside of finite games.
Apart from chess, Zermelo's theorem is applied in other areas of computer science, in particular in model checking and value interaction.
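A minimal backward-induction sketch of this idea is shown below (Python; the toy game and the `moves` and `winner` helpers are illustrative inventions, not anything from Zermelo's paper). It labels each position of a finite two-player game of perfect information with the outcome that best play forces.

```python
def solve(state, player, moves, winner):
    """Outcome with best play: +1 (player 1 forces a win), -1 (player 2 does), 0 (draw)."""
    w = winner(state, player)
    if w is not None:                      # terminal position
        return w
    outcomes = [solve(s, -player, moves, winner) for s in moves(state)]
    # player 1 maximises the outcome, player 2 (encoded as -1) minimises it
    return max(outcomes) if player == 1 else min(outcomes)

# Toy game: players alternately remove 1 or 2 tokens; taking the last token wins.
moves  = lambda n: [n - k for k in (1, 2) if n - k >= 0]
winner = lambda n, player: (-player) if n == 0 else None   # previous mover took the last token
print(solve(4, 1, moves, winner))   # +1: the first player can force a win from 4 tokens
```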
Conclusions of Zermelo's theorem
Zermelo's work shows that in two-person zero-sum games with perfect information, if a player is in a winning position, then that player can always force a win no matter what strategy the other player may employ. Furthermore, and as a consequence, if a player is in a winning position, it will never require more moves than there are
|
https://en.wikipedia.org/wiki/Return%20ratio
|
The return ratio of a dependent source in a linear electrical circuit is the negative of the ratio of the current (voltage) returned to the site of the dependent source to the current (voltage) of a replacement independent source. The terms loop gain and return ratio are often used interchangeably; however, they are necessarily equivalent only in the case of a single feedback loop system with unilateral blocks.
Calculating the return ratio
The steps for calculating the return ratio of a source are as follows (a numerical sketch applying them to an example circuit appears after the list):
Set all independent sources to zero.
Select the dependent source for which the return ratio is sought.
Place an independent source of the same type (voltage or current) and polarity in parallel with the selected dependent source.
Move the dependent source to the side of the inserted source and cut the two leads joining the dependent source to the independent source.
For a voltage source the return ratio is minus the ratio of the voltage across the dependent source divided by the voltage of the independent replacement source.
For a current source, short-circuit the broken leads of the dependent source. The return ratio is minus the ratio of the resulting short-circuit current to the current of the independent replacement source.
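As a concrete illustration of the steps above, the sketch below (Python/NumPy) computes the return ratio of a dependent current source g_m·v_A in a hypothetical single-loop circuit; the topology and element values are purely illustrative and not taken from any particular reference.

```python
# Hypothetical circuit: node A has R_in to ground and feedback resistor R_f to
# node B; node B has R_out to ground; the dependent source sinks g_m*v_A from
# node B to ground. The independent input is zeroed, the dependent source is
# replaced by an independent test current i_t of the same polarity, and the
# return ratio is T = -(g_m * v_A) / i_t.
import numpy as np

g_m, R_in, R_out, R_f = 10e-3, 10e3, 20e3, 100e3   # siemens, ohms (illustrative)
i_t = 1e-3                                          # test current (A)

# Nodal equations G @ [v_A, v_B] = I with the test current drawn out of node B.
G = np.array([[1/R_in + 1/R_f, -1/R_f],
              [-1/R_f,          1/R_out + 1/R_f]])
I = np.array([0.0, -i_t])
v_A, v_B = np.linalg.solve(G, I)

i_returned = g_m * v_A              # current the dependent source would produce
T = -i_returned / i_t               # return ratio; positive => negative feedback
print(f"return ratio T = {T:.2f}")
```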
Other Methods
These steps may not be feasible when the dependent sources inside the devices are not directly accessible, for example when using built-in "black box" SPICE models or when measuring the return ratio experimentally.
For SPICE simulations, one potential workaround is to manually replace non-linear devices by their small-signal equivalent model, with exposed dependent sources. However this will have to be redone if the bias point changes.
A result by Rosenstark shows that return ratio can be calculated by breaking the loop at any unilateral point in the circuit. The problem is now finding how to break the loop without affecting the bias point and altering the results. Middlebrook and Rosenstark have propose
|
https://en.wikipedia.org/wiki/List%20of%20physical%20constants
|
The constants listed here are known values of physical constants expressed in SI units; that is, physical quantities that are generally believed to be universal in nature and thus are independent of the unit system in which they are measured. Many of these are redundant, in the sense that they obey a known relationship with other physical constants and can be determined from them.
Table of physical constants
Uncertainties
While the values of the physical constants are independent of the system of units in use, each uncertainty as stated reflects our lack of knowledge of the corresponding value as expressed in SI units, and is strongly dependent on how those units are defined. For example, the atomic mass constant is exactly known when expressed using the dalton (its value is exactly 1 Da), but the kilogram is not exactly known when expressed using these units; the opposite holds when the same quantities are expressed in kilograms.
Technical constants
Some of these constants are of a technical nature and do not give any true physical property, but they are included for convenience. Such a constant gives the correspondence ratio of a technical dimension with its corresponding underlying physical dimension. These include the Boltzmann constant , which gives the correspondence of the dimension temperature to the dimension of energy per degree of freedom, and the Avogadro constant , which gives the correspondence of the dimension of amount of substance with the dimension of count of entities (the latter formally regarded in the SI as being dimensionless). By implication, any product of powers of such constants is also such a constant, such as the molar gas constant .
See also
List of mathematical constants
Physical constant
List of particles
Notes
|
https://en.wikipedia.org/wiki/Penrose%20tiling
|
A Penrose tiling is an example of an aperiodic tiling. Here, a tiling is a covering of the plane by non-overlapping polygons or other shapes, and a tiling is aperiodic if it does not contain arbitrarily large periodic regions or patches. However, despite their lack of translational symmetry, Penrose tilings may have both reflection symmetry and fivefold rotational symmetry. Penrose tilings are named after mathematician and physicist Roger Penrose, who investigated them in the 1970s.
There are several different variations of Penrose tilings with different tile shapes. The original form of Penrose tiling used tiles of four different shapes, but this was later reduced to only two shapes: either two different rhombi, or two different quadrilaterals called kites and darts. The Penrose tilings are obtained by constraining the ways in which these shapes are allowed to fit together in a way that avoids periodic tiling. This may be done in several different ways, including matching rules, substitution tiling or finite subdivision rules, cut and project schemes, and coverings. Even constrained in this manner, each variation yields infinitely many different Penrose tilings.
Penrose tilings are self-similar: they may be converted to equivalent Penrose tilings with different sizes of tiles, using processes called inflation and deflation. The pattern represented by every finite patch of tiles in a Penrose tiling occurs infinitely many times throughout the tiling. They are quasicrystals: implemented as a physical structure a Penrose tiling will produce diffraction patterns with Bragg peaks and five-fold symmetry, revealing the repeated patterns and fixed orientations of its tiles. The study of these tilings has been important in the understanding of physical materials that also form quasicrystals. Penrose tilings have also been applied in architecture and decoration, as in the floor tiling shown.
Background and history
Periodic and aperiodic tilings
Covering a flat surface ("
|
https://en.wikipedia.org/wiki/Magnes%20the%20shepherd
|
Magnes the shepherd, sometimes described as Magnes the shepherd boy, is a mythological figure, possibly based on a real person, who was cited by Pliny the Elder as discovering natural magnetism. His name, "Magnes", the Latin word for magnetite, has been attributed as the origin of the Latin root that has passed into English, giving its speakers the words magnet, magnetism, the mentioned ore, and related formulations. Other authorities have attributed the word origin to other sources.
As set out in Pliny's Naturalis Historia ("Natural History"), an early encyclopedia published c. 77 CE – c. 79 CE, and as translated from the Latin in Robert Jacobus Forbes' Studies in Ancient Technology, Pliny wrote the following (attributing the source of his information, in turn, to Nicander of Colophon):
Nicander is our authority that it [magnetite ore] was called Magnes from the man who first discovered it on Mount Ida and he is said to have found it when the nails of his shoes and the ferrule of his staff adhered to it, as he was pasturing his herds.
The passage appears at Book XXXVI of Naturalis Historia, covering "The Natural History of Stones", at chapter 25 entitled "The Magnet: Three Remedies". Although Pliny's description is often cited, the story of Magnes the shepherd is postulated by physicist Gillian Turner to be much older, dating from approximately 900 BCE.
Any writings Nicander may have made on the subject have since been lost.
Written in approximately 600 CE, book XVI of Etymologiae by Isidore of Seville tells the same story as Pliny, but places Magnes in India. This is repeated in Vincent of Beauvais' Miroir du Monde (c. 1250 CE) and in Thomas Nicols' 1652 work, Lapidary, or, the History of Pretious Stones, wherein he describes Magnes as a "shepherd of India, who was wont to keep his flocks about those mountains in India, where there was an abundance of lodestones".
Following from Pliny's account, the shepherd's name has been often cited as giving rise to the La
|
https://en.wikipedia.org/wiki/Xerophile
|
A xerophile is an extremophilic organism that can grow and reproduce in conditions with a low availability of water, also known as water activity. Water activity (aw) is measured as the humidity above a substance relative to the humidity above pure water (aw = 1.0). Xerophiles are "xerotolerant", meaning tolerant of dry conditions. They can often survive in environments with water activity below 0.8, whereas higher values are typical for most life on Earth. Typically, xerotolerance is used with respect to matric drying, where a substance has a low water concentration. These environments include arid desert soils. The term osmotolerance is typically applied to organisms that can grow in solutions with high solute concentrations (salts, sugars), such as halophiles.
The common food preservation method of reducing water activity (food drying) may not prevent the growth of xerophilic organisms, often resulting in food spoilage. Some mold and yeast species are xerophilic. Mold growth on bread is an example of food spoilage by xerophilic organisms.
Examples of xerophiles include Trichosporonoides nigrescens, Zygosaccharomyces, and cacti.
See also
Xerocole
Xerophyte
|
https://en.wikipedia.org/wiki/Consumer%E2%80%93resource%20interactions
|
Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food.
Classification of consumer types
The standard categorization
Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores eat both meat and plants. An extensive classification of consumer categories based on a list of feeding behaviors exists.
The Getz categorization
Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes between consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage.
In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal
|
https://en.wikipedia.org/wiki/Altered%20nuclear%20transfer
|
Altered nuclear transfer is an alternative method of obtaining embryonic-like, pluripotent stem cells without the creation and destruction of human embryos. The process was originally proposed by William B. Hurlbut.
External links
Explanation of the theory of Altered Nuclear Transfer
Stem cell harvesting techniques
Biological techniques and tools
Stem cells
Induced stem cells
|
https://en.wikipedia.org/wiki/Mycobiota
|
Mycobiota (a plural noun with no singular) are the group of all the fungi present in a particular geographic region (e.g. "the mycobiota of Ireland") or habitat type (e.g. "the mycobiota of cocoa"). An analogous term for mycobiota is funga.
Human mycobiota
Mycobiota exist on the surface and in the gastrointestinal system of humans. There are as many as sixty-six genera and 184 species in the gastrointestinal tract of healthy people. Most of these are in the genus Candida.
Though found to be present on the skin and in the GI tract in healthy individuals, the normal resident mycobiota can become pathogenic in those who are immunocompromised. Such multispecies infections lead to higher mortalities. In addition, hospital-acquired infections by C. albicans have become a cause of major health concern. A high mortality rate of 40–60% is associated with systemic infection. The best-studied of these are Candida species, due to their ability to become pathogenic in immunocompromised and even in healthy hosts. Yeasts are also present on the skin, such as Malassezia species, where they consume oils secreted from the sebaceous glands. One example is Pityrosporum (Malassezia) ovale, which is lipid-dependent and found only on humans. P. ovale was later divided into two species, P. ovale and P. orbiculare, but current sources consider these terms to refer to a single species of fungus, with M. furfur the preferred name.
Other uses
There is a peer reviewed mycological journal titled Mycobiota.
|
https://en.wikipedia.org/wiki/Continuous%20Computing
|
Continuous Computing was a privately held company based in San Diego and founded in 1998 that provided telecom systems made up of telecom platforms and Trillium software, including protocol software stacks for femtocells and 4G wireless / Long Term Evolution (LTE). The company also sold standalone Trillium software products and ATCA hardware components, as well as professional services. Continuous Computing's Trillium software addresses LTE femtocells (Home eNodeB) and pico / macro eNodeBs, as well as the Evolved Packet Core (EPC), Mobility Management Entity (MME), Serving Gateway (SGW) and Evolved Packet Data Gateway (ePDG).
The company is said to be the first systems vendor to introduce an end-to-end offering that spans the range of LTE network infrastructure from the Home NodeB (Macro / Pico base stations) to the Evolved Packet Core (EPC).
History
In February 2003, Continuous Computing acquired Trillium Digital Systems' intellectual property, customers and also hired some Trillium engineering, sales and marketing staff from Intel Corporation.
In July 2004, Continuous Computing expanded with the opening of a major software development center in Bangalore, India. The company acquired key products, people, technology and other assets from China-based UP Technologies Ltd. in July 2005.
In October 2007, the company launched "FlexTCA" platforms, targeting the security and wireless core vertical telecom markets. In February 2008, Continuous Computing announced the availability of its upgraded Trillium 3G / 4G Wireless protocol software for comprehensive support of Universal Mobile Telecommunications System (UMTS) High-Speed Packet Access (HSPA) functionality in alignment with 3GPP Release 7 standards. These performance improvements increase the data rates and bandwidth over the air interface in 3G networks.
Continuous Computing also announced in February 2008 their partnership with picoChip Designs Ltd. This partnership was created to speed the development of the
|
https://en.wikipedia.org/wiki/Chlororespiration
|
Chlororespiration is a respiratory process that takes place within plants. Inside plant cells there is an organelle called the chloroplast, which contains the thylakoid membrane. This membrane contains an enzyme called NAD(P)H dehydrogenase, which transfers electrons in a linear chain to oxygen molecules. This electron transport chain (ETC) within the chloroplast also interacts with those in the mitochondria, where respiration takes place. Chlororespiration also interacts with photosynthesis. If photosynthesis is inhibited by environmental stressors such as water deficit, increased heat, increased or decreased light exposure, or chilling stress, then chlororespiration is one of the crucial means by which plants compensate for the loss of chemical energy synthesis.
Chlororespiration – the latest model
Initially, the presence of chlororespiration as a legitimate respiratory process in plants was heavily doubted. However, experimentation on Chlamydomonas reinhardtii identified plastoquinone (PQ) as a redox carrier. The role of this redox carrier is to transport electrons from the NAD(P)H enzyme to oxygen molecules on the thylakoid membrane. Using this cyclic electron chain around photosystem I (PSI), chlororespiration compensates for the lack of light. This cyclic pathway also allows electrons to re-enter the PQ pool through NAD(P)H enzyme activity and production, which is then used to supply ATP molecules (energy) to plant cells.
In 2002, the discovery of plastid terminal oxidase (PTOX) and the NDH complexes revolutionised the concept of chlororespiration. Using evidence from experimentation on the plant species Rosa Meillandina, this latest model treats PTOX as an enzyme that prevents the PQ pool from over-reducing by stimulating its reoxidation, whereas the NDH complexes are responsible for providing a gateway for electrons to form an ETC. The presence of such molecules is apparent in the non-
|
https://en.wikipedia.org/wiki/In%20vitro%20models%20for%20calcification
|
In vitro models for calcification may refer to systems that have been developed in order to reproduce, in the best possible way, the calcification process that tissues or biomaterials undergo inside the body. The aim of these systems is to mimic the high levels of calcium and phosphate present in the blood and measure the extent of the crystal's deposition. Different variations can include other parameters to increase the veracity of these models, such as flow, pressure, compliance and resistance. All the systems have different limitations that have to be acknowledged regarding the operating conditions and the degree of representation. The rationale for using such systems is to partially replace in vivo animal testing, while providing much more controllable and independent parameters than an animal model.
The main use of these models is to study the calcification potential of prostheses that are in direct contact with the blood. In this category we find examples such as animal tissue prostheses (xenogeneic bioprosthesis). Xenogeneic heart valves are of special importance for this area of study as they demonstrate a limited durability mainly due to the fatigue of the tissue and the calcific deposits (see Aortic valve replacement).
Description
In vitro calcification models have been used in medical implant development to evaluate the calcification potential of the medical device or tissue. They can be considered a subfamily of the bioreactors that have been used in the field of tissue engineering for tissue culture and growth. These calcification bioreactors are designed to mimic and maintain the mechano-chemical environment that the tissue encounters in vivo with a view to generating the pathological environment that would favor calcium deposition. Parameters including medium flow, pH, temperature and supersaturation of the calcifying solution used in the bioreactor are maintained and closely monitored. Monitoring these parameters makes it possible to obtain information
|
https://en.wikipedia.org/wiki/Developer%20relations
|
Developer relations, abbreviated as DevRel, is an umbrella term covering the strategies and tactics for building and nurturing a community of mutually beneficial relationships between organizations and developers (e.g., software developers) as the primary users, and often influencers on purchases, of a product.
Developer Relations is a form of Platform Evangelism and the activities involved are sometimes referred to as a Developer Program or DevRel Program. A DevRel program may comprise a framework built around some or all of the following aspects:
Developer Marketing: Outreach and engagement activities to create awareness and convert developers to use a product.
Developer Education: Product documentation and education resources to aid learning and build affinity with a product and community.
Developer Experience (DX): Resources like a developer portal, product, and documentation, to activate the developer with the least friction.
Developer Success: Activities to nurture and retain developers as they build and scale with a product.
Community: Nourishes a community to maintain a sustainable program.
The impacts and goals of DevRel programs include:
Increased revenue and funding
User growth and retention
Product innovation and improvements
Customer satisfaction and support deflection
Strong technical recruiting pipeline
Brand recognition and awareness
Other goals of DevRel initiatives can include:
Product Building: An organization relies on a community of developers to build their technology (e.g., open source).
Product-market Fit: The product's success depends on understanding developers' needs and desires.
Developer Enablement: Supporting developers' use of the product (e.g., by providing education, tools, and infrastructure).
Developer Perception: To overcome developer perceptions that may be preventing success of a product.
Hiring/Recruiting: To attract potential developers for recruitment.
History and roots
Apple is considered to have crea
|
https://en.wikipedia.org/wiki/Autoregressive%20model
|
In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it is used to describe certain time-varying processes in nature, economics, behavior, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term (an imperfectly predictable term); thus the model is in the form of a stochastic difference equation (or recurrence relation which should not be confused with differential equation). Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average (ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable.
Contrary to the moving-average (MA) model, the autoregressive model is not always stationary as it may contain a unit root.
Definition
The notation AR(p) indicates an autoregressive model of order p. The AR(p) model is defined as
$$X_t = \sum_{i=1}^{p} \varphi_i X_{t-i} + \varepsilon_t$$
where $\varphi_1, \ldots, \varphi_p$ are the parameters of the model, and $\varepsilon_t$ is white noise. This can be equivalently written using the backshift operator B as
$$X_t = \sum_{i=1}^{p} \varphi_i B^i X_t + \varepsilon_t$$
so that, moving the summation term to the left side and using polynomial notation, we have
$$\phi[B] X_t = \varepsilon_t, \qquad \phi[B] := 1 - \sum_{i=1}^{p} \varphi_i B^i .$$
An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response filter whose input is white noise.
Some parameter constraints are necessary for the model to remain weak-sense stationary. For example, processes in the AR(1) model with $|\varphi_1| \geq 1$ are not stationary. More generally, for an AR(p) model to be weak-sense stationary, the roots of the polynomial $\phi(z) := 1 - \sum_{i=1}^{p} \varphi_i z^i$ must lie outside the unit circle, i.e., each (complex) root $z_i$ must satisfy $|z_i| > 1$ (see pages 89,92 ).
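As a small illustration (Python/NumPy; the coefficients are arbitrary choices, not from the article), the sketch below simulates an AR(2) process driven by white noise and checks the stationarity condition on the roots of the characteristic polynomial.

```python
# Simulate X_t = phi1*X_{t-1} + phi2*X_{t-2} + eps_t and test whether the
# roots of 1 - phi1*z - phi2*z^2 lie outside the unit circle.
import numpy as np

phi = np.array([0.5, 0.3])                   # illustrative AR(2) parameters
roots = np.roots([-phi[1], -phi[0], 1.0])    # roots of 1 - phi1*z - phi2*z^2
print("weak-sense stationary:", bool(np.all(np.abs(roots) > 1.0)))

rng = np.random.default_rng(0)
n = 1000
x = np.zeros(n)
eps = rng.standard_normal(n)                 # white-noise input
for t in range(2, n):
    x[t] = phi[0] * x[t-1] + phi[1] * x[t-2] + eps[t]
```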
Intertemporal effect of shocks
In an AR process, a one-time
|
https://en.wikipedia.org/wiki/Cephalopod%20size
|
Cephalopods, which include squids and octopuses, vary enormously in size. The smallest are only about long and weigh less than at maturity, while the giant squid can exceed in length and the colossal squid weighs close to half a tonne (), making them the largest living invertebrates. Living species range in mass more than three-billion-fold, or across nine orders of magnitude, from the lightest hatchlings to the heaviest adults. Certain cephalopod species are also noted for having individual body parts of exceptional size.
Cephalopods were at one time the largest of all organisms on Earth, and numerous species of comparable size to the largest present day squids are known from the fossil record, including enormous examples of ammonoids, belemnoids, nautiloids, orthoceratoids, teuthids, and vampyromorphids. In terms of mass, the largest of all known cephalopods were likely the giant shelled ammonoids and endocerid nautiloids, though perhaps still second to the largest living cephalopods when considering tissue mass alone.
Cephalopods vastly larger than either giant or colossal squids have been postulated at various times. One of these was the St. Augustine Monster, a large carcass weighing several tonnes that washed ashore on the United States coast near St. Augustine, Florida, in 1896. Reanalyses in 1995 and 2004 of the original tissue samples—together with those of other similar carcasses—showed conclusively that they were all masses of the collagenous matrix of whale blubber.
Giant cephalopods have fascinated humankind for ages. The earliest surviving records are perhaps those of Aristotle and Pliny the Elder, both of whom described squids of very large size. Tales of giant squid have been common among mariners since ancient times, and may have inspired the monstrous kraken of Nordic legend, said to be as large as an island and capable of engulfing and sinking any ship. Similar tentacled sea monsters are known from other parts of the globe, including the Akk
|
https://en.wikipedia.org/wiki/Omega-categorical%20theory
|
In mathematical logic, an omega-categorical theory is a theory that has exactly one countably infinite model up to isomorphism. Omega-categoricity is the special case κ = ℵ0 = ω of κ-categoricity, and omega-categorical theories are also referred to as ω-categorical. The notion is most important for countable first-order theories.
Equivalent conditions for omega-categoricity
Many conditions on a theory are equivalent to the property of omega-categoricity. In 1959, Erwin Engeler, Czesław Ryll-Nardzewski and Lars Svenonius each proved several of them independently. Despite this, the literature still widely refers to the Ryll-Nardzewski theorem as a name for these conditions. The conditions included with the theorem vary between authors.
Given a countable complete first-order theory T with infinite models, the following are equivalent:
The theory T is omega-categorical.
Every countable model of T has an oligomorphic automorphism group (that is, there are finitely many orbits on M^n for every n).
Some countable model of T has an oligomorphic automorphism group.
The theory T has a model which, for every natural number n, realizes only finitely many n-types, that is, the Stone space S_n(T) is finite.
For every natural number n, T has only finitely many n-types.
For every natural number n, every n-type is isolated.
For every natural number n, up to equivalence modulo T there are only finitely many formulas with n free variables, in other words, for every n, the nth Lindenbaum–Tarski algebra of T is finite.
Every model of T is atomic.
Every countable model of T is atomic.
The theory T has a countable atomic and saturated model.
The theory T has a saturated prime model.
Examples
The theory of any countably infinite structure which is homogeneous over a finite relational language is omega-categorical. Hence, the following theories are omega-categorical:
The theory of dense linear orders without endpoints (Cantor's isomorphism theorem)
The theory of the Rado graph
The theory o
|
https://en.wikipedia.org/wiki/Motzkin%E2%80%93Taussky%20theorem
|
The Motzkin–Taussky theorem is a result from operator and matrix theory about the representation of a sum of two bounded, linear operators (resp. matrices). The theorem was proven by Theodore Motzkin and Olga Taussky-Todd.
The theorem is used in perturbation theory, where e.g. operators of the form
$$A + xB$$
are examined.
Statement
Let $V$ be a finite-dimensional complex vector space. Furthermore, let $A, B \in \operatorname{End}(V)$ be such that all linear combinations
$$T = \alpha A + \beta B$$
are diagonalizable for all $\alpha, \beta \in \mathbb{C}$. Then all eigenvalues of $T$ are of the form
$$\lambda_T = \alpha \lambda_A + \beta \lambda_B$$
(i.e. they are linear in $\alpha$ and $\beta$) and are independent of the choice of $\alpha, \beta$.
Here $\lambda_A$ stands for an eigenvalue of $A$ and $\lambda_B$ for an eigenvalue of $B$.
Comments
Motzkin and Taussky call the above property of the linearity of the eigenvalues in property L.
Bibliography
Kato, Tosio (1995). Perturbation Theory for Linear Operators. Berlin, Heidelberg: Springer. p. 86. ISBN 978-3-540-58661-6, doi:10.1007/978-3-642-66282-9.
Friedland, Shmuel (1981). A generalization of the Motzkin-Taussky theorem. Linear Algebra and its Applications. Vol. 36. pp. 103–109. doi:10.1016/0024-3795(81)90223-8.
Notes
Mathematical theorems
Linear algebra
Perturbation theory
Linear operators
|
https://en.wikipedia.org/wiki/Estimation%20theory
|
Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.
In estimation theory, two approaches are generally considered:
The probabilistic approach (described in this article) assumes that the measured data is random with probability distribution dependent on the parameters of interest
The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.
Examples
For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate, based on some demographic features, such as age.
Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be estimated.
As another example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal.
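A minimal sketch of the first example, with made-up numbers, is shown below (Python/NumPy): the unknown parameter is the true proportion of voters favouring a candidate, and the estimator is simply the sample proportion, together with its approximate standard error.

```python
# Estimate a population proportion from a small random sample.
import numpy as np

rng = np.random.default_rng(1)
true_p = 0.52                                # unknown parameter in practice
sample = rng.random(200) < true_p            # random sample of 200 voters
p_hat = sample.mean()                        # estimator: the sample proportion
std_err = np.sqrt(p_hat * (1 - p_hat) / sample.size)
print(f"estimate {p_hat:.3f} +/- {std_err:.3f}")
```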
Basics
For a given model, several statistical "ingredients" are needed so the estimator can be implemented. The first is a statistical sample – a set of data points taken from a random vector (RV) of size N. Put into a vector,
$$\mathbf{x} = \begin{bmatrix} x[0] \\ x[1] \\ \vdots \\ x[N-1] \end{bmatrix}.$$
Secondly, there are M parameters
$$\boldsymbol{\theta} = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_M \end{bmatrix},$$
whose values are to be estimated. Third, the continuous probability density function (pdf) or its
|
https://en.wikipedia.org/wiki/Alpha%20beta%20filter
|
An alpha beta filter (also called alpha-beta filter, f-g filter or g-h filter) is a simplified form of observer for estimation, data smoothing and control applications. It is closely related to Kalman filters and to linear state observers used in control theory. Its principal advantage is that it does not require a detailed system model.
Filter equations
An alpha beta filter presumes that a system is adequately approximated by a model having two internal states, where the first state is obtained by integrating the value of the second state over time. Measured system output values correspond to observations of the first model state, plus disturbances. This very low order approximation is adequate for many simple systems, for example, mechanical systems where position is obtained as the time integral of velocity. Based on a mechanical system analogy, the two states can be called position x and velocity v. Assuming that velocity remains approximately constant over the small time interval ΔT between measurements, the position state is projected forward to predict its value at the next sampling time using equation 1.
Since velocity variable v is presumed constant, its projected value at the next sampling time equals the current value.
If additional information is known about how a driving function will change the v state during each time interval, equation 2 can be modified to include it.
The output measurement is expected to deviate from the prediction because of noise and dynamic effects not included in the simplified dynamic model. This prediction error r is also called the residual or innovation, based on statistical or Kalman filtering interpretations.
Suppose that residual r is positive. This could result because the previous x estimate was low, the previous v was low, or some combination of the two. The alpha beta filter takes selected alpha and beta constants (from which the filter gets its name), uses alpha times the deviation r to correct the position estim
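A compact sketch of the filter is given below (Python; the gain values and variable names are illustrative, and the prediction step is written in the standard form that the text refers to as equations 1 and 2).

```python
def alpha_beta_filter(measurements, dt, alpha=0.85, beta=0.005, x0=0.0, v0=0.0):
    x, v = x0, v0
    estimates = []
    for z in measurements:
        # prediction (the standard form of what the text calls equations 1 and 2)
        x_pred = x + dt * v          # position projected forward over dt
        v_pred = v                   # velocity presumed constant
        # correction using the residual (innovation) r
        r = z - x_pred
        x = x_pred + alpha * r
        v = v_pred + (beta / dt) * r
        estimates.append((x, v))
    return estimates

zs = [1.1, 2.05, 2.9, 4.2, 5.0]              # noisy position measurements (made up)
print(alpha_beta_filter(zs, dt=1.0)[-1])     # final (position, velocity) estimate
```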
|
https://en.wikipedia.org/wiki/List%20of%20books%20on%20popular%20physics%20concepts
|
This is a list of books which talk about things related to current day physics or physics as it would be in the future.
There are a number of books that have been penned about specific physics concepts, e.g. quantum mechanics or kinematics, and many other books which discuss physics in general, i.e. not focussing on a single topic. There are also books that encourage beginners to enjoy physics by making them look at it from different angles.
Books
Lists of books
|
https://en.wikipedia.org/wiki/SwitchBlade
|
SwitchBlade is the registered name of a family of layer 2 and layer 3 chassis switches developed by Allied Telesis. Current models include the SwitchBlade x908 GEN2 and the SwitchBlade x8100 layer 3 chassis switches. The first model was the SwitchBlade 4000 layer 3 core chassis, which ran the earlier AlliedWare operating system.
AlliedWare Plus models
The family includes models using the AlliedWare Plus operating system which uses an industry standard CLI structure.
SwitchBlade x908 Generation 2
The SwitchBlade x908 GEN2 was introduced in 2017 and is the latest evolution of the original SwitchBlade x908 design. It is a stackable advanced layer 3 3RU chassis switch with 2.6 Terabit/s of switching capacity. It has eight switch module bays like its predecessor, although in the GEN2 they are mounted vertically to assist with cooling and cable management. The GEN2 also supports Allied Telesis' Virtual Chassis Stacking technology, which has been enhanced to enable up to 4 SwitchBlade x908 GEN2 chassis to be stacked over long distances using any port speed (10G, 40G or 100G). Each chassis includes redundant system power supply bays.
Available modules
XEM2-12XT - 12x 1000BASE-T/10GBASE-T copper RJ-45 ports
XEM2-12XTm - 12x 1000BASE-T/NBASE-T/10GBASE-T multi-gigabit copper RJ-45 ports
XEM2-12XS - 12x 10G SFP ports
XEM2-4QS - 4x 40G QSFP ports
XEM2-1CQ - 1x 100G QSFP28 port
SwitchBlade x8100
The SwitchBlade x8100 series, launched in 2012, is an advanced layer 3 chassis switch with 1.92 Tbit/s of switching capacity when two SBx81CFC960 control cards are installed. It is available in two chassis sizes, 6-slot (SBx8106) and 12-slot (SBx8112). The 12-slot chassis has 10 line card slots and 2 controller card slots. The 6-slot chassis has 4 line card slots, 1 controller card slot, and one additional slot that can accommodate either a line card or controller card. It also features four hot-swappable PSU bays, supporting load sharing and redundancy for both sys
|
https://en.wikipedia.org/wiki/List%20of%20mathematical%20constants
|
A mathematical constant is a key number whose value is fixed by an unambiguous definition, often referred to by a symbol (e.g., an alphabet letter), or by mathematicians' names to facilitate using it across multiple mathematical problems. For example, the constant π may be defined as the ratio of the length of a circle's circumference to its diameter. The following list includes a decimal expansion and set containing each number, ordered by year of discovery.
The column headings may be clicked to sort the table alphabetically, by decimal value, or by set. Explanations of the symbols in the right hand column can be found by clicking on them.
List
Mathematical constants sorted by their representations as continued fractions
The following list includes the continued fractions of some constants and is sorted by their representations. Continued fractions with more than 20 known terms have been truncated, with an ellipsis to show that they continue. Rational numbers have two continued fractions; the version in this list is the shorter one. Decimal representations are rounded or padded to 10 places if the values are known.
Sequences of constants
See also
Invariant (mathematics)
Glossary of mathematical symbols
List of mathematical symbols by subject
List of numbers
List of physical constants
Particular values of the Riemann zeta function
Physical constant
Notes
|
https://en.wikipedia.org/wiki/Regularization%20%28physics%29
|
In physics, especially quantum field theory, regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator. The regulator, also known as a "cutoff", models our lack of knowledge about physics at unobserved scales (e.g. scales of small size or large energy levels). It compensates for (and requires) the possibility that "new physics" may be discovered at those scales which the present theory is unable to model, while enabling the current theory to give accurate predictions as an "effective theory" within its intended scale of use.
It is distinct from renormalization, another technique to control infinities without assuming new physics, by adjusting for self-interaction feedback.
Regularization was for many decades controversial even amongst its inventors, as it combines physical and epistemological claims into the same equations. However, it is now well understood and has proven to yield useful, accurate predictions.
Overview
Regularization procedures deal with infinite, divergent, and nonsensical expressions by introducing an auxiliary concept of a regulator (for example, the minimal distance $\epsilon$ in space, which is useful in case the divergences arise from short-distance physical effects). The correct physical result is obtained in the limit in which the regulator goes away (in our example, $\epsilon \to 0$), but the virtue of the regulator is that for its finite value, the result is finite.
However, the result usually includes terms proportional to expressions like $1/\epsilon$ which are not well-defined in the limit $\epsilon \to 0$. Regularization is the first step towards obtaining a completely finite and meaningful result; in quantum field theory it must usually be followed by a related, but independent technique called renormalization. Renormalization is based on the requirement that some physical quantities — expressed by seemingly divergent expressions such as $1/\epsilon$ — are equal to the observed
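As a toy numerical illustration (Python with SymPy; the integral is a made-up example, not one from the article), the sketch below regulates a logarithmically divergent integral with an upper cutoff Λ: the result is finite for any finite Λ, and the divergence reappears only when the regulator is removed.

```python
# Cutoff regularization of the divergent integral of 1/k from m to infinity.
import sympy as sp

k, m, Lam = sp.symbols('k m Lambda', positive=True)
regulated = sp.integrate(1/k, (k, m, Lam))    # finite for any finite cutoff
print(regulated)                              # log(Lambda) - log(m)
print(sp.limit(regulated, Lam, sp.oo))        # oo: divergence returns as the regulator is removed
```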
|
https://en.wikipedia.org/wiki/Angle%20of%20arrival
|
The angle of arrival (AoA) of a signal is the direction from which the signal (e.g. radio, optical or acoustic) is received.
Measurement
Measurement of AoA can be done by determining the direction of propagation of a radio-frequency wave incident on an antenna array or determined from maximum signal strength during antenna rotation.
The AoA can be calculated by measuring the time difference of arrival (TDOA) between individual elements of the array.
Generally this TDOA measurement is made by measuring the difference in received phase at each element in the antenna array. This can be thought of as beamforming in reverse. In beamforming, the signal from each element is weighted to "steer" the gain of the antenna array. In AoA, the delay of arrival at each element is measured directly and converted to an AoA measurement.
Consider, for example, a two-element array spaced apart by one-half the wavelength of an incoming RF wave. If a wave is incident upon the array at boresight, it will arrive at each antenna simultaneously. This will yield a 0° phase difference measured between the two antenna elements, equivalent to a 0° AoA. If a wave is incident upon the array at end-fire (along the array axis), then a 180° phase difference will be measured between the elements, corresponding to a 90° AoA.
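A small sketch of that conversion is shown below (Python/NumPy; the function name and the carrier frequency are illustrative assumptions). It inverts the relation Δφ = 2πd·sin(θ)/λ for a two-element array.

```python
import numpy as np

def aoa_from_phase(delta_phi_rad, d, wavelength):
    """Angle of arrival in degrees, measured from boresight."""
    s = delta_phi_rad * wavelength / (2 * np.pi * d)
    return np.degrees(np.arcsin(np.clip(s, -1.0, 1.0)))

wavelength = 0.125                  # e.g. roughly a 2.4 GHz carrier
d = wavelength / 2                  # half-wavelength spacing, as in the example
print(aoa_from_phase(0.0, d, wavelength))      # 0 degrees  (boresight)
print(aoa_from_phase(np.pi, d, wavelength))    # 90 degrees (end-fire)
```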
In optics, AoA can be calculated using interferometry.
Applications
An application of AoA is in the geolocation of cell phones. The aim is either for the cell system to report the location of a cell phone placing an emergency call or to provide a service to tell the user of the cell phone where they are. Multiple receivers on a base station would calculate the AoA of the cell phone's signal, and this information would be combined to determine the phone's location.
AoA is generally used to discover the location of pirate radio stations or of any military radio transmitter.
In submarine acoustics, AoA is used to localize objects with active or passive ranging.
Limitation
Limitations on the acc
|
https://en.wikipedia.org/wiki/List%20of%20solids%20derived%20from%20the%20sphere
|
This page lists solids derived from a sphere.
Solids from cutting a sphere with one or more planes
Dome
Spherical cap
Spherical sector
Spherical segment
Spherical shell
Spherical wedge
Solids from deforming a sphere
Ellipsoid
Spheroid
Solid bounded by Morin surface
Any Genus 0 surface
Solids from intersecting a sphere with other solids or curved planes
Reuleaux tetrahedron
Spherical lens
Notes
Geometric shapes
Mathematics-related lists
|
https://en.wikipedia.org/wiki/Sideloading
|
Sideloading describes the process of transferring files between two local devices, in particular between a personal computer and a mobile device such as a mobile phone, smartphone, PDA, tablet, portable media player or e-reader.
Sideloading typically refers to media file transfer to a mobile device via USB, Bluetooth, WiFi or by writing to a memory card for insertion into the mobile device, but also applies to the transfer of apps from web sources that are not vendor-approved.
When referring to Android apps, "sideloading" typically means installing an application package in APK format onto an Android device. Such packages are usually downloaded from websites other than the official app store Google Play. For Android users sideloading of apps is only possible if the user has allowed "Unknown Sources" in their Security Settings.
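For illustration only (not from the article, and with a hypothetical file name), sideloading an APK from a personal computer is often done through the Android Debug Bridge, which can be driven from a script:

import subprocess

# Hypothetical APK path; requires the Android platform tools (adb) on the
# computer and USB debugging enabled on the connected device.
apk_path = "example-app.apk"

# "adb install" copies the package to the device and installs it,
# which is one common form of sideloading.
subprocess.run(["adb", "install", apk_path], check=True)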
When referring to iOS apps, "sideloading" means installing an app in IPA format onto an Apple device, usually through the use of a computer program such as Cydia Impactor or Xcode. On modern versions of iOS, the sources of the apps must be trusted by both Apple and the user in "profiles and device management" in settings, except when using jailbreak methods of sideloading apps. Sideloading is only allowed by Apple for internal testing and development of apps using the official SDKs.
Historical
The term "sideload" was coined in the late 1990s by online storage service i-drive as an alternative means of transferring and storing computer files virtually instead of physically. In 2000, i-drive applied for a trademark on the term. Rather than initiating a traditional file "download" from a website or FTP site to their computer, a user could perform a "sideload" and have the file transferred directly into their personal storage area on the service. Usage of this feature began to decline as newer hard drives became cheaper and the space on them grew each year into the gigabytes and the trademark application was abandoned.
The advent of portable
|
https://en.wikipedia.org/wiki/Luminex%20Corporation
|
Luminex Corporation, a DiaSorin company, is a biotechnology company which develops, manufactures and markets proprietary biological testing technologies with applications in the life sciences.
Background
Luminex's Multi-Analyte Profiling (xMAP) technology allows simultaneous analysis of up to 500 bioassays from a small sample volume, typically a single drop of fluid, by reading biological tests on the surface of microscopic polystyrene beads called microspheres.
The xMAP technology combines this miniaturized liquid array bioassay capability with small lasers, light emitting diodes (LEDs), digital signal processors, photo detectors, charge-coupled device imaging and proprietary software to create a system offering advantages in speed, precision, flexibility and cost. The technology is currently being used within various segments of the life sciences industry, which includes the fields of drug discovery and development, and for clinical diagnostics, genetic analysis, bio-defense, food safety and biomedical research.
The Luminex MultiCode technology is used for real-time polymerase chain reaction (PCR) and multiplexed PCR assays. Luminex Corporation owns 315 issued patents worldwide, including over 124 issued patents in the United States based on its multiplexing xMAP platform.
|
https://en.wikipedia.org/wiki/Host%20model
|
In computer networking, a host model is an option of designing the TCP/IP stack of a networking operating system like Microsoft Windows or Linux. When a unicast packet arrives at a host, IP must determine whether the packet is locally destined (its destination matches an address that is assigned to an interface of the host). If the IP stack is implemented with a weak host model, it accepts any locally destined packet regardless of the network interface on which the packet was received. If the IP stack is implemented with a strong host model, it only accepts locally destined packets if the destination IP address in the packet matches an IP address assigned to the network interface on which the packet was received.
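To make the two rules concrete, the acceptance decision can be sketched as follows (a toy illustration, not kernel code; the interface names and addresses are invented):

def accepts_packet(dest_ip, receiving_iface, iface_addrs, strong_host=True):
    """Decide whether a locally destined unicast packet is accepted.

    iface_addrs maps interface name -> set of IP addresses assigned to it.
    Weak host model: accept if dest_ip is assigned to any interface.
    Strong host model: accept only if dest_ip is assigned to the receiving interface.
    """
    if strong_host:
        return dest_ip in iface_addrs.get(receiving_iface, set())
    return any(dest_ip in addrs for addrs in iface_addrs.values())

# Hypothetical multihomed host: a packet for eth1's address arrives on eth0.
addrs = {"eth0": {"192.0.2.10"}, "eth1": {"198.51.100.10"}}
print(accepts_packet("198.51.100.10", "eth0", addrs, strong_host=True))   # False
print(accepts_packet("198.51.100.10", "eth0", addrs, strong_host=False))  # True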
The weak host model provides better network connectivity (for example, a locally destined packet is accepted regardless of the interface on which it arrives, so a multihomed host remains reachable through any of its interfaces), but it also makes hosts susceptible to multihome-based network attacks. For example, in some configurations when a system running a weak host model is connected to a VPN, other systems on the same subnet can compromise the security of the VPN connection. Systems running the strong host model are not susceptible to this type of attack.
The IPv4 implementation in Microsoft Windows versions prior to Windows Vista uses the weak host model. The Windows Vista and Windows Server 2008 TCP/IP stack supports the strong host model for both IPv4 and IPv6 and is configured to use it by default. However, it can also be configured to use a weak host model.
The IPv4 implementation in Linux defaults to the weak host model. Source validation by reverse path, as specified in RFC 1812, can be enabled (the rp_filter option), and some distributions do so by default. This is not quite the same as the strong host model, but it defends against the same class of attacks for typical multihomed hosts. The arp_ignore and arp_announce options can also be used to tweak this behaviour.
Modern BSDs (FreeBSD, NetBSD, OpenBSD, and DragonflyBSD) all defau
|
https://en.wikipedia.org/wiki/Typed%20assembly%20language
|
In computer science, a typed assembly language (TAL) is an assembly language that is extended to include a method of annotating the datatype of each value that is manipulated by the code. These annotations can then be used by a program (type checker) that processes the assembly language code in order to analyse how it will behave when it is executed. Specifically, such a type checker can be used to prove the type safety of code that meets the criteria of some appropriate type system.
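As a loose illustration of the concept (a toy checker over an invented instruction format, not the syntax of TALx86 or any real TAL), a checker can walk annotated instructions and verify each annotation against a register-type environment:

# Toy "typed assembly" checker: each instruction carries a type annotation,
# and the checker verifies it against the current register types.
# Purely illustrative; not the instruction set or type system of any real TAL.

def check(instructions, reg_types):
    """reg_types maps register name -> declared type ('int' or 'ptr')."""
    for op, dst, src, ty in instructions:
        if op == "mov":
            if reg_types.get(src) != ty:
                raise TypeError(f"{src} is {reg_types.get(src)}, annotation says {ty}")
            reg_types[dst] = ty
        elif op == "add":
            if reg_types.get(dst) != "int" or reg_types.get(src) != "int":
                raise TypeError("add requires int operands")
        else:
            raise ValueError(f"unknown opcode {op}")
    return reg_types

program = [
    ("mov", "r1", "r0", "int"),   # r1 : int := r0 (annotated as int)
    ("add", "r1", "r1", "int"),   # ok: both operands are ints
]
print(check(program, {"r0": "int"}))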
Typed assembly languages usually include a high-level memory management system based on garbage collection.
A typed assembly language with a suitably expressive type system can be used to enable the safe execution of untrusted code without using an intermediate representation like bytecode, allowing features similar to those currently provided by virtual machine environments like Java and .NET.
See also
Proof-carrying code
Further reading
Greg Morrisett. "Typed assembly language" in Advanced Topics in Types and Programming Languages. Editor: Benjamin C. Pierce.
External links
TALx86, a research project from Cornell University which has implemented a typed assembler for the Intel IA-32 architecture.
Assembly languages
Computer security
Programming language theory
|
https://en.wikipedia.org/wiki/Field-programmable%20analog%20array
|
A field-programmable analog array (FPAA) is an integrated circuit device containing computational analog blocks (CABs) and interconnects between these blocks offering field-programmability. Unlike their digital cousin, the FPGA, these devices tend to be more application-driven than general-purpose, as they may be current-mode or voltage-mode devices. For voltage-mode devices, each block usually contains an operational amplifier in combination with a programmable configuration of passive components. The blocks can, for example, act as summers or integrators.
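As a purely behavioural sketch (invented for illustration; not a real device netlist or vendor tool), a voltage-mode block configured as a summer or as a discrete-time integrator could be modelled as:

# Toy behavioural model of one computational analog block (CAB).
# A real CAB is an op amp with programmable passives; this only mimics
# the input/output behaviour of two common configurations.

def cab(inputs, mode="summer", gain=1.0):
    if mode == "summer":
        # weighted sum of the momentary input voltages
        return gain * sum(inputs)
    if mode == "integrator":
        # running (discrete-time) integral of a single input sequence
        acc, out = 0.0, []
        for v in inputs:
            acc += gain * v
            out.append(acc)
        return out
    raise ValueError("unsupported mode")

print(cab([0.2, 0.3, -0.1], mode="summer"))      # approx. 0.4
print(cab([0.2, 0.3, -0.1], mode="integrator"))  # approx. [0.2, 0.5, 0.4]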
FPAAs usually operate in one of two modes: continuous time and discrete time.
Discrete-time devices possess a system sample clock. In a switched-capacitor design, all blocks sample their input signals with a sample-and-hold circuit composed of a semiconductor switch and a capacitor. This feeds a programmable op-amp section which can be routed to a number of other blocks. This design requires more complex semiconductor construction. An alternative, switched-current design, offers simpler construction and does not require the input capacitor, but it can be less accurate and has lower fan-out: it can drive only one following block. Both discrete-time device types must compensate during the design phase for switching noise, aliasing at the system sample rate, and sample-rate-limited bandwidth.
Continuous-time devices work more like an array of transistors or op amps which can operate at their full bandwidth. The components are connected in a particular arrangement through a configurable array of switches. During circuit design, the switch matrix's parasitic inductance, capacitance and noise contributions must be taken into account.
Currently there are very few manufacturers of FPAAs. On-chip resources are still very limited when compared to those of an FPGA. This resource deficit is often cited by researchers as a limiting factor in their research.
History
The term FPAA was first used in 1991 by Lee and Gulak. Th
|