https://en.wikipedia.org/wiki/Undefined%20%28mathematics%29
In mathematics, the term undefined is often used to refer to an expression which is not assigned an interpretation or a value (such as an indeterminate form, which has the possibility of assuming different values). The term can take on several different meanings depending on the context. For example: In various branches of mathematics, certain concepts are introduced as primitive notions (e.g., the terms "point", "line" and "plane" in geometry). As these terms are not defined in terms of other concepts, they may be referred to as "undefined terms". A function is said to be "undefined" at points outside of its domain; for example, the real-valued square-root function f(x) = √x is undefined for negative x (i.e., it assigns no value to negative arguments). In algebra, some arithmetic operations may not assign a meaning to certain values of their operands (e.g., division by zero); in such cases, the expressions involving those operands are termed "undefined". Within the real numbers, the square root of a negative number, such as √−4, √−9 or √−16, is likewise undefined, because no real number multiplied by itself yields a negative number (for example, both 6 × 6 = 36 and −6 × −6 = 36). Undefined terms In ancient times, geometers attempted to define every term. For example, Euclid defined a point as "that which has no part". In modern times, mathematicians recognize that attempting to define every word inevitably leads to circular definitions, and therefore leave some terms (such as "point") undefined (see primitive notion for more). This more abstract approach allows for fruitful generalizations. In topology, a topological space may be defined as a set of points endowed with certain properties, but in the general setting, the nature of these "points" is left entirely undefined. Likewise, in category theory, a category consists of "objects" and "arrows", which are again primitive, undefined terms. This allows such abstract mathematical theories to be applied to very diverse concrete situations. In arithme
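As a small illustration (my own, not from the article), most programming environments likewise refuse to assign a value to these operations over the reals; the snippet below assumes nothing beyond the Python standard library.

```python
import math
import cmath

try:
    1 / 0                      # division by zero assigns no value
except ZeroDivisionError as err:
    print("1/0 is undefined:", err)

try:
    math.sqrt(-4)              # no real number multiplied by itself gives -4
except ValueError as err:
    print("sqrt(-4) is undefined over the reals:", err)

print(cmath.sqrt(-4))          # 2j: defined once the domain is extended to the complex numbers
```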
https://en.wikipedia.org/wiki/The%20Aleph%20%28short%20story%29
"The Aleph" (original Spanish title: "El Aleph") is a short story by the Argentine writer and poet Jorge Luis Borges. First published in September 1945, it was reprinted in the short story collection, The Aleph and Other Stories, in 1949, and revised by the author in 1974. Plot summary In Borges' story, the Aleph is a point in space that contains all other points. Anyone who gazes into it can see everything in the universe from every angle simultaneously, without distortion, overlapping, or confusion. The story traces the theme of infinity found in several of Borges' other works, such as "The Book of Sand". Borges has stated that the inspiration for this story came from H.G. Wells's short story "The Door in the Wall". As in many of Borges' short stories, the protagonist is a fictionalized version of the author. At the beginning of the story, he is mourning the recent death of Beatriz Viterbo, a woman he loved, and he resolves to stop by the house of her family to pay his respects. Over time, he comes to know her first cousin, Carlos Argentino Daneri, a mediocre poet with a vastly exaggerated view of his own talent who has made it his lifelong quest to write an epic poem that describes every single location on the planet in excruciatingly fine detail. Later in the story, a business attempts to tear down Daneri's house in the course of its expansion. Daneri becomes enraged, explaining to the narrator that he must keep the house in order to finish his poem, because the cellar contains an Aleph which he is using to write the poem. Though by now he believes Daneri to be insane, the narrator proposes to come to the house and see the Aleph for himself. Left alone in the darkness of the cellar, the narrator begins to fear that Daneri is conspiring to kill him, and then he sees the Aleph for himself: Though staggered by the experience of seeing the Aleph, the narrator pretends to have seen nothing in order to get revenge on Daneri, whom he dislikes, by giving Daneri
https://en.wikipedia.org/wiki/Hilbert%20spectral%20analysis
Hilbert spectral analysis is a signal analysis method applying the Hilbert transform to compute the instantaneous frequency of signals according to ω(t) = dθ(t)/dt, where θ(t) is the instantaneous phase of the analytic signal. After performing the Hilbert transform on each signal, we can express the data in the following form: X(t) = Re Σⱼ aⱼ(t) exp(i ∫ ωⱼ(t) dt). This equation gives both the amplitude and the frequency of each component as functions of time. It also enables us to represent the amplitude and the instantaneous frequency as functions of time in a three-dimensional plot, in which the amplitude can be contoured on the frequency-time plane. This frequency-time distribution of the amplitude is designated as the Hilbert amplitude spectrum, or simply Hilbert spectrum. The Hilbert spectral analysis method is an important part of the Hilbert–Huang transform.
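The following is a minimal sketch (my own, not from the article) of this procedure for a single-component signal, using NumPy and SciPy's hilbert; the sample rate and chirp parameters are arbitrary illustration values.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                      # sample rate, Hz (example value)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * (50 * t + 40 * t**2))     # chirp sweeping from 50 Hz to 130 Hz

analytic = hilbert(x)                            # x(t) + i * H{x}(t)
amplitude = np.abs(analytic)                     # instantaneous amplitude a(t)
phase = np.unwrap(np.angle(analytic))            # instantaneous phase theta(t)
inst_freq = np.diff(phase) / (2 * np.pi) * fs    # f(t) = (1/2pi) dtheta/dt

# The triples (t[1:], inst_freq, amplitude[1:]) sample the Hilbert spectrum:
# amplitude contoured on the frequency-time plane.
```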
https://en.wikipedia.org/wiki/Avalon%20explosion
The Avalon explosion, named from the Precambrian fauna discovered at the Avalon Peninsula in Newfoundland, is a proposed evolutionary radiation of prehistoric animals about 575 million years ago in the Ediacaran Period, the Avalon explosion being one of three faunal assemblages recognized within this period. This event is believed to have occurred some 33 million years earlier than the Cambrian explosion. Scientists are still unsure what drove the Avalon explosion. The Avalon explosion resulted in a rapid increase in organism diversity. Many of the animals from the Avalon explosion lived in deep marine environments. The first stages of the Avalon explosion are recorded by comparatively few species. History Charles Darwin predicted a time of ecological growth before the Cambrian Period, but there was no evidence to support it until the Avalon explosion was proposed in 2008 by Virginia Tech paleontologists after analysis of the morphological space change in several Ediacaran assemblages. The discovery suggests that the early evolution of animals may have involved more than one explosive event. The original analysis has been the subject of dispute in the literature. Evidence Trace fossils of these Avalon organisms have been found worldwide, with many found in Newfoundland, Canada, and the Charnwood Forest in England, representing the earliest known complex multicellular organisms. The Avalon explosion theoretically produced the Ediacaran biota. The biota largely disappeared contemporaneously with the rapid increase in biodiversity known as the Cambrian explosion. At this time, all living animal groups were present in the Cambrian oceans. The Avalon explosion appears similar to the Cambrian explosion in the rapid increase in diversity of morphologies in a relatively short time frame, followed by diversification within the established body plans, a pattern similar to that observed in other evolutionary events. Plants and animals
https://en.wikipedia.org/wiki/Stanton%20number
The Stanton number, St, is a dimensionless number that measures the ratio of heat transferred into a fluid to the thermal capacity of the fluid. The Stanton number is named after Thomas Stanton (engineer) (1865–1931). It is used to characterize heat transfer in forced convection flows. Formula St = h / (ρ u cp), where h = convection heat transfer coefficient, ρ = density of the fluid, cp = specific heat of the fluid, u = velocity of the fluid. It can also be represented in terms of the fluid's Nusselt, Reynolds, and Prandtl numbers: St = Nu / (Re Pr), where Nu is the Nusselt number; Re is the Reynolds number; Pr is the Prandtl number. The Stanton number arises in the consideration of the geometric similarity of the momentum boundary layer and the thermal boundary layer, where it can be used to express a relationship between the shear force at the wall (due to viscous drag) and the total heat transfer at the wall (due to thermal diffusivity). Mass transfer Using the heat–mass transfer analogy, a mass transfer equivalent of St can be found using the Sherwood number and Schmidt number in place of the Nusselt number and Prandtl number, respectively: Stm = ShL / (ReL Sc), where Stm is the mass Stanton number; ShL is the Sherwood number based on length; ReL is the Reynolds number based on length; Sc is the Schmidt number; the mass transfer coefficient is defined based on a concentration difference (kg s−1 m−2); u is the velocity of the fluid. Boundary layer flow The Stanton number is a useful measure of the rate of change of the thermal energy deficit (or excess) in the boundary layer due to heat transfer from a planar surface. If the enthalpy thickness is defined as Δ2 = ∫0∞ (ρ u)/(ρ∞ u∞) · (T − T∞)/(Ts − T∞) dy, then the Stanton number is equivalent to St = dΔ2/dx for boundary layer flow over a flat plate with a constant surface temperature and properties. Correlations using Reynolds–Colburn analogy Using the Reynolds–Colburn analogy for turbulent flow with a thermal log and viscous sublayer model, a correlation for turbulent heat transfer over a flat plate can be obtained. See also Strouhal number, an unrelated nu
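As a quick numerical check (the fluid properties below are example values of my own, not from the article), the two forms of the Stanton number agree by construction:

```python
# St = h / (rho * u * cp) and St = Nu / (Re * Pr) are algebraically identical.
rho, u, cp, mu, k, L = 1.2, 10.0, 1005.0, 1.8e-5, 0.026, 0.5   # air-like values, plate length L
h = 25.0                                   # convection heat transfer coefficient, W/(m^2 K)

St_direct = h / (rho * u * cp)

Re = rho * u * L / mu                      # Reynolds number
Pr = mu * cp / k                           # Prandtl number
Nu = h * L / k                             # Nusselt number
St_dimensionless = Nu / (Re * Pr)

print(St_direct, St_dimensionless)         # same value from both definitions
```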
https://en.wikipedia.org/wiki/Floppy-disk%20controller
A floppy-disk controller (FDC) has evolved from a discrete set of components on one or more circuit boards to a special-purpose integrated circuit (IC or "chip") or a component thereof. An FDC directs and controls reading from and writing to a computer's floppy disk drive (FDD). The FDC is responsible for reading data presented from the host computer and converting it to the drive's on-disk format using one of a number of encoding schemes, like FM encoding (single density) or MFM encoding (double density), and for reading those formats and returning them to their original binary values. Depending on the platform, data transfers between the controller and host computer would be controlled by the computer's own microprocessor, or by an inexpensive dedicated microprocessor like the MOS 6507 or Zilog Z80. Early controllers required additional circuitry to perform specific tasks like providing clock signals and setting various options. Later designs included more of this functionality on the controller and reduced the complexity of the external circuitry; single-chip solutions were common by the later 1980s. By the 1990s, the floppy disk was increasingly giving way to hard drives, which required similar controllers. In these systems, the controller also often combined a microcontroller to handle data transfer over standardized connectors like SCSI and IDE that could be used with any computer. In more modern systems, the FDC, if present at all, is typically part of the many functions provided by a single super I/O chip. History The first floppy-disk controller (FDC), like the first floppy disk drive (the IBM 23FD), shipped in 1971 as a component in the IBM 2385 Storage Control Unit for the IBM 2305 fixed head disk drive, and of the System 370 Models 155 and 165. The IBM 3830 Storage Control Unit, a contemporaneous and quite similar controller, uses its internal processor to control a 23FD. The resultant FDC is a simple implementation in IBM's MST hybrid circuits on a few pr
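As an illustrative sketch of the double-density scheme mentioned above (my own, not from the article): MFM interleaves a clock bit before each data bit, and that clock bit is 1 only when both the previous and the current data bits are 0.

```python
def mfm_encode(bits):
    """Encode a list of data bits with the MFM rule: emit (clock, data) pairs."""
    encoded, prev = [], 0
    for bit in bits:
        clock = 1 if (prev == 0 and bit == 0) else 0   # clock flux only between two zeros
        encoded += [clock, bit]
        prev = bit
    return encoded

print(mfm_encode([1, 0, 1, 1, 0, 0, 0]))
# -> [0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0]
```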
https://en.wikipedia.org/wiki/Pathovar
A pathovar is a bacterial strain or set of strains with the same or similar characteristics, that is differentiated at infrasubspecific level from other strains of the same species or subspecies on the basis of distinctive pathogenicity to one or more plant hosts. Pathovars are named as a ternary or quaternary addition to the species binomial name; for example, the bacterium that causes citrus canker, Xanthomonas axonopodis, has several pathovars with different host ranges, of which X. axonopodis pv. citri is one; the abbreviation 'pv.' means pathovar. The type strains of pathovars are pathotypes, which are distinguished from the types (holotype, neotype, etc.) of the species to which the pathovar belongs. See also Infraspecific names in botany Phytopathology Trinomen, infraspecific names in zoology (subspecies only)
https://en.wikipedia.org/wiki/Laser%20capture%20microdissection
Laser capture microdissection (LCM), also called microdissection, laser microdissection (LMD), or laser-assisted microdissection (LMD or LAM), is a method for isolating specific cells of interest from microscopic regions of tissue/cells/organisms (dissection on a microscopic scale with the help of a laser). Principle Laser-capture microdissection (LCM) is a method to procure subpopulations of tissue cells under direct microscopic visualization. LCM technology can harvest the cells of interest directly or can isolate specific cells by cutting away unwanted cells to give histologically pure enriched cell populations. A variety of downstream applications exist: DNA genotyping and loss of heterozygosity (LOH) analysis, RNA transcript profiling, cDNA library generation, proteomics discovery and signal-pathway profiling. The total time required to carry out this protocol is typically 1–1.5 h. Extraction A laser is coupled into a microscope and focuses onto the tissue on the slide. By moving the laser with the optics, or by moving the stage, the focus follows a trajectory predefined by the user. This trajectory, also called an element, is then cut out and separated from the adjacent tissue. After the cutting process, an extraction step follows if the dissected material is to be recovered. More recent technologies utilize non-contact microdissection. There are several ways to extract tissue from a microscope slide with a histopathology sample on it. One method is to press a sticky surface onto the sample and tear it out. This extracts the desired region, but can also remove particles or unwanted tissue on the surface, because the surface is not selective. Another method is to melt a plastic membrane onto the sample and tear it out. The heat is introduced, for example, by a red or infrared (IR) laser onto a membrane stained with an absorbing dye. This adheres the desired sample to the membrane, but as with any membrane placed close to the histopathology sample surface, some debris may also be extracted. Another
https://en.wikipedia.org/wiki/Institute%20of%20Electronics%2C%20Information%20and%20Communication%20Engineers
The Institute of Electronics, Information and Communication Engineers (IEICE) is a Japanese institute specializing in the areas of electronic, information and communication engineering and associated fields. Its headquarters are located in Tokyo, Japan. It is a membership organization with the purpose of advancing the field of electronics, information and communications and supporting the activities of its members. History The earliest predecessor to the organization was formed in May 1911 as the Second Study Group of the Second Department of the Japanese Ministry of Communications Electric Laboratory. In March 1914 the Second Study Group was renamed the Study Group on Telegraph and Telephone. As the adoption of the telegraph and telephone quickly mounted, there was increased demand for research and development of these technologies, which prompted the need to create a dedicated institute for engineers working in this field. Thus the Institute of Telegraph and Telephone Engineers of Japan was established in May 1917. Soon after its formation the institute began to publish journals and host paper presentations showcasing the latest developments in the field. As the institute's scope of research broadened to accommodate new technical developments, it was rebranded as the Institute of Electrical Communication Engineers of Japan in January 1937, and then once again as the Institute of Electronics and Communication Engineers of Japan in May 1967. Finally, in January 1987, the institute renamed itself to the Institute of Electronics, Information and Communication Engineers to recognize the increasing research being conducted in computer engineering and information technology. Organization The institution is organized into five societies: electronics society communications society information and system society engineering sciences society human communication engineering society Each society has its own president and technical committees. Volunteers help run various activities within the society, such as publications and conferences. Mem
https://en.wikipedia.org/wiki/Reagent
In chemistry, a reagent or analytical reagent is a substance or compound added to a system to cause a chemical reaction, or test if one occurs. The terms reactant and reagent are often used interchangeably, but reactant specifies a substance consumed in the course of a chemical reaction. Solvents, though involved in the reaction mechanism, are usually not called reactants. Similarly, catalysts are not consumed by the reaction, so they are not reactants. In biochemistry, especially in connection with enzyme-catalyzed reactions, the reactants are commonly called substrates. Definitions Organic chemistry In organic chemistry, the term "reagent" denotes a chemical ingredient (a compound or mixture, typically of inorganic or small organic molecules) introduced to cause the desired transformation of an organic substance. Examples include the Collins reagent, Fenton's reagent, and Grignard reagents. Analytical chemistry In analytical chemistry, a reagent is a compound or mixture used to detect the presence or absence of another substance, e.g. by a color change, or to measure the concentration of a substance, e.g. by colorimetry. Examples include Fehling's reagent, Millon's reagent, and Tollens' reagent. Commercial or laboratory preparations In commercial or laboratory preparations, reagent-grade designates chemical substances meeting standards of purity that ensure the scientific precision and reliability of chemical analysis, chemical reactions or physical testing. Purity standards for reagents are set by organizations such as ASTM International or the American Chemical Society. For instance, reagent-quality water must have very low levels of impurities such as sodium and chloride ions, silica, and bacteria, as well as a very high electrical resistivity. Laboratory products which are less pure, but still useful and economical for undemanding work, may be designated as technical, practical, or crude grade to distinguish them from reagent versions. Biology In t
https://en.wikipedia.org/wiki/Observability%20%28software%29
In distributed systems, observability is the ability to collect data about programs' execution, modules' internal states, and the communication among components. To improve observability, software engineers use a wide range of logging and tracing techniques to gather telemetry information, and tools to analyze and use it. Observability is foundational to site reliability engineering, as it is the first step in triaging a service outage. One of the goals of observability is to minimize the amount of prior knowledge needed to debug an issue. Etymology, terminology and definition The term is borrowed from control theory, where the "observability" of a system measures how well its state can be determined from its outputs. Similarly, software observability measures how well a system's state can be understood from the obtained telemetry (metrics, logs, traces, profiling). The definition of observability varies by vendor. The term is frequently abbreviated as the numeronym O11y (where 11 stands for the number of letters between the first letter and the last letter of the word). This is similar to other computer science abbreviations such as i18n, L10n, and k8s. Observability vs. monitoring Observability and monitoring are sometimes used interchangeably. As tooling, commercial offerings and practices evolved in complexity, "monitoring" was re-branded as observability in order to differentiate new tools from the old. The terms are commonly contrasted in that systems are monitored using predefined sets of telemetry, and monitored systems may be observable. Majors et al. suggest that engineering teams that only have monitoring tools end up relying on expert foreknowledge (seniority), whereas teams that have observability tools rely on exploratory analysis (curiosity). Telemetry types Observability relies on three main types of telemetry data: metrics, logs and traces. Those are often referred to as "pillars of observability". Metrics A metric is a point in tim
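A minimal, standard-library-only sketch (my own illustration, not any vendor's API) of emitting the three telemetry types named above from one request handler:

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
request_counter = 0                          # metric: a numeric value sampled over time

def handle_request(payload):
    global request_counter
    trace_id = uuid.uuid4().hex              # correlates all telemetry for one request
    start = time.monotonic()
    logging.info(json.dumps({"event": "request.received", "trace_id": trace_id}))  # log
    # ... the actual work would happen here ...
    request_counter += 1
    duration_ms = (time.monotonic() - start) * 1000
    # trace span: a named operation with a duration, tied to the trace_id
    logging.info(json.dumps({"span": "handle_request", "trace_id": trace_id,
                             "duration_ms": duration_ms}))
    return payload

handle_request({"ok": True})
print("requests_total =", request_counter)
```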
https://en.wikipedia.org/wiki/Chirp%20compression
The chirp pulse compression process transforms a long duration frequency-coded pulse into a narrow pulse of greatly increased amplitude. It is a technique used in radar and sonar systems because it is a method whereby a narrow pulse with high peak power can be derived from a long duration pulse with low peak power. Furthermore, the process offers good range resolution because the half-power width of the compressed pulse is consistent with the system bandwidth. The basics of the method for radar applications were developed in the late 1940s and early 1950s, but it was not until 1960, following declassification of the subject matter, that a detailed article on the topic appeared in the open literature. Thereafter, the number of published articles grew quickly, as demonstrated by the comprehensive selection of papers to be found in a compilation by Barton. Briefly, the basic pulse compression properties can be related as follows. For a chirp waveform that sweeps over a frequency range F1 to F2 in a time period T, the nominal bandwidth of the pulse is B, where B = F2 – F1, and the pulse has a time-bandwidth product of T×B. Following pulse compression, a narrow pulse of duration τ is obtained, where τ ≈ 1/B, together with a peak voltage amplification of √(T×B). The chirp compression process – outline In order to compress a chirp pulse of duration T seconds, which sweeps linearly in frequency from F1 Hz to F2 Hz, a device with the characteristics of a dispersive delay line is required. This provides most delay for the frequency F1, the first to be generated, but with a delay which reduces linearly with frequency, to be T seconds less at the end frequency F2. Such a delay characteristic ensures that all frequency components of the chirp pass through the device, to arrive at the detector at the same time instant and so augment one another, to produce a narrow high amplitude pulse, as shown in the accompanying figure. An expression describing the required delay characteristic is D(f) = D0 − T×(f − F1)/(F2 − F1), where D0 is the delay applied at F1. This has
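An illustrative sketch (my own, with arbitrary example values for fs, T and B): compression with a matched filter, which is equivalent to the dispersive delay line described above, and a check that the compressed half-power width is roughly 1/B.

```python
import numpy as np

fs = 100e3                      # sample rate, Hz (example value)
T = 10e-3                       # pulse duration, s
B = 5e3                         # swept bandwidth F2 - F1, Hz
t = np.arange(0, T, 1 / fs)
tx = np.exp(1j * np.pi * (B / T) * t**2)      # complex linear chirp starting at 0 Hz

# Correlating with the time-reversed conjugate of the transmitted chirp
# plays the role of the dispersive delay line (a matched filter).
compressed = np.abs(np.convolve(tx, np.conj(tx[::-1])))

peak = compressed.max()
width = np.count_nonzero(compressed >= peak / np.sqrt(2)) / fs
print(f"T*B = {T * B:.0f}, compressed -3 dB width ~ {width * 1e6:.0f} us "
      f"(1/B = {1 / B * 1e6:.0f} us)")
```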
https://en.wikipedia.org/wiki/IEC%2061162
IEC 61162 is a collection of IEC standards for "Digital interfaces for navigational equipment within a ship". The 61162 standards are developed in Working Group 6 (WG6) of Technical Committee 80 (TC80) of the IEC. Sections of IEC 61162 Standard IEC 61162 is divided into the following parts: Part 1: Single talker and multiple listeners (Also known as NMEA 0183) Part 2: Single talker and multiple listeners, high-speed transmission Part 3: Serial data instrument network (Also known as NMEA 2000) Part 450: Multiple talkers and multiple listeners–Ethernet interconnection (Also known as Lightweight Ethernet) Part 460: Multiple talkers and multiple listeners - Ethernet interconnection - Safety and security The 61162 standards all concern the transport of NMEA sentences, but the IEC does not define the sentences themselves; that is left to the NMEA organization. IEC 61162-1 Single talker and multiple listeners. IEC 61162-2 Single talker and multiple listeners, high-speed transmission. IEC 61162-3 Serial data instrument network, multiple talker-multiple listener, prioritized data. IEC 61162-450 Multiple talkers and multiple listeners. This subgroup of TC80/WG6 has specified the use of Ethernet for shipboard navigational networks. The specification describes the transport of NMEA sentences as defined in 61162-1 over IPv4. Because of its low protocol complexity it has been nicknamed Lightweight Ethernet, or LWE for short. The historical background and justification for LWE were presented at the ISIS2011 symposium. An overview article on LWE was given in the December 2010 issue of "Digital Ship". The standard was published in its first edition in June 2011. The second edition is in progress (as of May 2016). IEC 61162-450/460 IEC 61162-460:2015(E) is an add-on to the IEC 61162-450 standard where higher safety and security standards are needed, e.g. due to higher exposure to external threats or to improve network integrity. This standard provides requirements an
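As a small hedged example (my own; the sample sentence below is a commonly circulated illustrative GLL sentence, not text from the standard): the NMEA 0183 / IEC 61162-1 sentence checksum is the XOR of every character between the leading '$' and the '*'.

```python
def nmea_checksum_ok(sentence: str) -> bool:
    """Verify the two-hex-digit checksum of an NMEA 0183 sentence."""
    body, _, given = sentence.strip().lstrip("$").partition("*")
    checksum = 0
    for ch in body:
        checksum ^= ord(ch)          # running XOR over the sentence body
    return f"{checksum:02X}" == given.upper()

print(nmea_checksum_ok("$GPGLL,4916.45,N,12311.12,W,225444,A,*1D"))  # True
```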
https://en.wikipedia.org/wiki/Temperature-sensitive%20mutant
Temperature-sensitive mutants are variants of genes that allow normal function of the organism at low temperatures, but altered function at higher temperatures. Cold-sensitive mutants are variants of genes that allow normal function of the organism at higher temperatures, but altered function at low temperatures. Mechanism Most temperature-sensitive mutations affect proteins, and cause loss of protein function at the non-permissive temperature. The permissive temperature is one at which the protein typically can fold properly, or remain properly folded. At higher temperatures, the protein is unstable and ceases to function properly. These mutations are usually recessive in diploid organisms. Temperature-sensitive mutants provide a reversible mechanism for depleting particular gene products at chosen stages of growth, which is accomplished simply by changing the growth temperature. Permissive temperature The permissive temperature is the temperature at which a temperature-sensitive mutant gene product takes on a normal, functional phenotype. When a temperature-sensitive mutant is grown in a permissive condition, the mutated gene product behaves normally (meaning that the phenotype is not observed), even if there is a mutant allele present. This results in the survival of the cell or organism, as if it were a wild type strain. In contrast, the nonpermissive temperature or restrictive temperature is the temperature at which the mutant phenotype is observed. Temperature-sensitive mutations are usually missense mutations that retain the function of the affected essential gene at the standard, permissive, low temperature, lack that function at a higher, non-permissive temperature, and display a hypomorphic (partial loss of gene function) phenotype at an intermediate, semi-permissive temperature. Use in research Temperature-sensitive mutants are useful in biological research. They allow the study of essential processes required for the surviv
https://en.wikipedia.org/wiki/Wigner%E2%80%93Weyl%20transform
In quantum mechanics, the Wigner–Weyl transform or Weyl–Wigner transform (after Hermann Weyl and Eugene Wigner) is the invertible mapping between functions in the quantum phase space formulation and Hilbert space operators in the Schrödinger picture. Often the mapping from functions on phase space to operators is called the Weyl transform or Weyl quantization, whereas the inverse mapping, from operators to functions on phase space, is called the Wigner transform. This mapping was originally devised by Hermann Weyl in 1927 in an attempt to map symmetrized classical phase space functions to operators, a procedure known as Weyl quantization. It is now understood that Weyl quantization does not satisfy all the properties one would require for consistent quantization and therefore sometimes yields unphysical answers. On the other hand, some of the nice properties described below suggest that if one seeks a single consistent procedure mapping functions on the classical phase space to operators, the Weyl quantization is the best option: a sort of normal coordinates of such maps. (Groenewold's theorem asserts that no such map can have all the ideal properties one would desire.) Regardless, the Weyl–Wigner transform is a well-defined integral transform between the phase-space and operator representations, and yields insight into the workings of quantum mechanics. Most importantly, the Wigner quasi-probability distribution is the Wigner transform of the quantum density matrix, and, conversely, the density matrix is the Weyl transform of the Wigner function. In contrast to Weyl's original intentions in seeking a consistent quantization scheme, this map merely amounts to a change of representation within quantum mechanics; it need not connect "classical" with "quantum" quantities. For example, the phase-space function may depend explicitly on Planck's constant ħ, as it does in some familiar cases involving angular momentum. This invertible representation change then all
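For reference, the pair of maps can be written down concretely; the following uses one common normalization convention (an assumption on my part, since conventions differ between texts): the Wigner transform of an operator Ĝ, and the Wigner function as the transform of the density matrix ρ̂.

```latex
% One common convention; normalizations and sign conventions vary by reference.
g(q,p) = \int_{-\infty}^{\infty}
         \left\langle q + \tfrac{y}{2} \right| \hat{G} \left| q - \tfrac{y}{2} \right\rangle
         e^{-i p y/\hbar}\,\mathrm{d}y ,
\qquad
W(q,p) = \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty}
         \left\langle q + \tfrac{y}{2} \right| \hat{\rho} \left| q - \tfrac{y}{2} \right\rangle
         e^{-i p y/\hbar}\,\mathrm{d}y .
```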
https://en.wikipedia.org/wiki/Migration%20%28virtualization%29
In the context of virtualization, where a guest simulation of an entire computer is actually merely a software virtual machine (VM) running on a host computer under a hypervisor, migration (also known as teleportation or live migration) is the process by which a running virtual machine is moved from one physical host to another, with little or no disruption in service. Subjective effects Ideally, the process is completely transparent, resulting in no disruption of service (or downtime). In practice, there is always some minor pause in availability, though it may be low enough that only hard real-time systems are affected. Virtualization is far more frequently used with network services and user applications, and these can generally tolerate the brief delays which may be involved. The perceived impact, if any, is similar to a longer-than-usual kernel delay. Objective effects The actual process is heavily dependent on the particular virtualization package in use, but in general, the process is as follows: Regular snapshots of the VM (its simulated hard disk storage, its memory, and its virtual peripherals) are taken in the background by the hypervisor, or by a set of administrative scripts. Each new snapshot adds a differential overlay file to the top of a stack that, as a whole, fully describes the machine. Only the topmost overlay can be written to. Since the older overlays are read-only, they are safe to copy to another machine—the backup host. This is done at regular intervals, and each overlay need only be copied once. When a migration operation is requested, the virtual machine is paused, and its current state is saved to disk. These new, final overlay files are transferred to the backup host. Since this new current state consists only of changes made since the last backup synchronization, for many applications there is very little to transfer, and this happens very quickly. The hypervisor on the new host resumes the guest virtual machine. Id
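A schematic sketch of the overlay-based sequence described above (my own illustration, not any hypervisor's actual API): the vm object and its pause, save_state and resume_on calls are hypothetical stand-ins for hypervisor-specific operations.

```python
import pathlib
import shutil

def migrate(vm, overlay_dir: pathlib.Path, backup_host_dir: pathlib.Path):
    # 1. Background phase: all but the topmost overlay are read-only and safe
    #    to copy while the guest keeps running; each is copied only once.
    for overlay in sorted(overlay_dir.glob("*.overlay"))[:-1]:
        dest = backup_host_dir / overlay.name
        if not dest.exists():
            shutil.copy2(overlay, dest)

    # 2. Final phase: pause the guest, flush its current state into a last
    #    overlay, and transfer only that (usually small) difference.
    vm.pause()                                   # hypothetical hypervisor call
    final = vm.save_state(overlay_dir)           # hypothetical hypervisor call
    shutil.copy2(final, backup_host_dir / final.name)

    # 3. The hypervisor on the destination host resumes the guest from the
    #    completed overlay stack.
    vm.resume_on(backup_host_dir)                # hypothetical hypervisor call
```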
https://en.wikipedia.org/wiki/Undervoltage-lockout
The undervoltage-lockout (UVLO) is an electronic circuit used to turn off the power of an electronic device if the supply voltage drops below the operational value, below which unpredictable system behavior could otherwise result. For instance, in battery-powered embedded devices, UVLOs can be used to monitor the battery voltage and turn off the embedded device's circuit if the battery voltage drops below a specific threshold, thus protecting the associated equipment. Some variants may also have distinct values for the power-up (positive-going) and power-down (negative-going) thresholds. Usages Typical usages include: Electrical ballast circuits, to switch them off in the event of voltage falling below the operational value. Switched-mode power supplies. When the system supply output impedance is higher than the input impedance of the regulator, a UVLO with larger hysteresis should be used to prevent oscillations before settling down to a steady state, and possible malfunctions of the regulator. See also No-volt release
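A behavioral sketch (my own, with example threshold values) of a UVLO comparator with hysteresis, i.e. distinct power-up and power-down thresholds as described above:

```python
def make_uvlo(v_rising=3.0, v_falling=2.7):
    """Return an update function mapping a supply-voltage sample to the on/off state."""
    state = {"enabled": False}

    def update(v_supply):
        if not state["enabled"] and v_supply >= v_rising:
            state["enabled"] = True        # supply recovered above the power-up threshold
        elif state["enabled"] and v_supply < v_falling:
            state["enabled"] = False       # supply sagged below the power-down threshold
        return state["enabled"]

    return update

uvlo = make_uvlo()
print([uvlo(v) for v in (2.5, 2.9, 3.1, 2.8, 2.6, 3.1)])
# -> [False, False, True, True, False, True]; the gap between the thresholds
#    prevents oscillation when the supply hovers near the trip point.
```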
https://en.wikipedia.org/wiki/Cytometry
Cytometry is the measurement of number and characteristics of cells. Variables that can be measured by cytometric methods include cell size, cell count, cell morphology (shape and structure), cell cycle phase, DNA content, and the existence or absence of specific proteins on the cell surface or in the cytoplasm. Cytometry is used to characterize and count blood cells in common blood tests such as the complete blood count. In a similar fashion, cytometry is also used in cell biology research and in medical diagnostics to characterize cells in a wide range of applications associated with diseases such as cancer and AIDS. Cytometric devices Image cytometers Image cytometry is the oldest form of cytometry. Image cytometers operate by statically imaging a large number of cells using optical microscopy. Prior to analysis, cells are commonly stained to enhance contrast or to detect specific molecules by labeling these with fluorochromes. Traditionally, cells are viewed within a hemocytometer to aid manual counting. Since the introduction of the digital camera, in the mid-1990s, the automation level of image cytometers has steadily increased. This has led to the commercial availability of automated image cytometers, ranging from simple cell counters to sophisticated high-content screening systems. Flow cytometers Due to the early difficulties of automating microscopy, the flow cytometer has since the mid-1950s been the dominating cytometric device. Flow cytometers operate by aligning single cells using flow techniques. The cells are characterized optically or by the use of an electrical impedance method called the Coulter principle. To detect specific molecules when optically characterized, cells are in most cases stained with the same type of fluorochromes that are used by image cytometers. Flow cytometers generally provide less data than image cytometers, but have a significantly higher throughput. Cell sorters Cell sorters are flow cytometers capable of sorting ce
https://en.wikipedia.org/wiki/Hardware%20security
Hardware security is a discipline that originated in cryptographic engineering and involves hardware design, access control, secure multi-party computation, secure key storage, ensuring code authenticity, and measures to ensure that the supply chain that built the product is secure, among other things. A hardware security module (HSM) is a physical computing device that safeguards and manages digital keys for strong authentication and provides cryptoprocessing. These modules traditionally come in the form of a plug-in card or an external device that attaches directly to a computer or network server. Some providers in this discipline consider that the key difference between hardware security and software security is that hardware security is implemented using "non-Turing-machine" logic (raw combinatorial logic or simple state machines). One approach, referred to as "hardsec", uses FPGAs to implement non-Turing-machine security controls as a way of combining the security of hardware with the flexibility of software. Hardware backdoors are backdoors in hardware. Conceptually related, a hardware Trojan (HT) is a malicious modification of an electronic system, particularly in the context of an integrated circuit. A physical unclonable function (PUF) is a physical entity that is embodied in a physical structure and is easy to evaluate but hard to predict. Further, an individual PUF device must be easy to make but practically impossible to duplicate, even given the exact manufacturing process that produced it. In this respect it is the hardware analog of a one-way function. The name "physical unclonable function" might be a little misleading as some PUFs are clonable, and most PUFs are noisy and therefore do not achieve the requirements for a function. Today, PUFs are usually implemented in integrated circuits and are typically used in applications with high security requirements. Many attacks on sensitive data and resources reported by organizations occur from within the org
https://en.wikipedia.org/wiki/Cope%27s%20rule
Cope's rule, named after American paleontologist Edward Drinker Cope, postulates that population lineages tend to increase in body size over evolutionary time. It was never actually stated by Cope, although he favoured the occurrence of linear evolutionary trends. It is sometimes also known as the Cope–Depéret rule, because Charles Depéret explicitly advocated the idea. Theodor Eimer had also done so earlier. The term "Cope's rule" was apparently coined by Bernhard Rensch, based on the fact that Depéret had "lionized Cope" in his book. While the rule has been demonstrated in many instances, it does not hold true at all taxonomic levels, or in all clades. Larger body size is associated with increased fitness for a number of reasons, although there are also some disadvantages both on an individual and on a clade level: clades comprising larger individuals are more prone to extinction, which may act to limit the maximum size of organisms. Function Effects of growth Directional selection appears to act on organisms' size, whereas it exhibits a far smaller effect on other morphological traits, though it is possible that this perception may be a result of sample bias. This selectional pressure can be explained by a number of advantages, both in terms of mating success and survival rate. For example, larger organisms find it easier to avoid or fight off predators and capture prey, to reproduce, to kill competitors, to survive temporary lean times, and to resist rapid climatic changes. They may also potentially benefit from better thermal efficiency, increased intelligence, and a longer lifespan. Offsetting these advantages, larger organisms require more food and water, and shift from r to K-selection. Their longer generation time means a longer period of reliance on the mother, and on a macroevolutionary scale restricts the clade's ability to evolve rapidly in response to changing environments. Capping growth Left unfettered, the trend of ever-larger size would produc
https://en.wikipedia.org/wiki/Quality%20intellectual%20property%20metric
The quality intellectual property metric (QIP) is an international standard, developed by the Virtual Socket Interface Alliance (VSIA), for measuring intellectual property (IP) or silicon intellectual property (SIP) quality and examining the practices used to design, integrate and support the SIP. SIP hardening is required to facilitate the reuse of IP in integrated circuit design. Background and importance Many computer processors use a system-on-a-chip (SoC) design, which is intended to include all of a device's functions on a single chip. As a result, these chips need to include numerous technical standards that the device will use. One solution to designing such a chip is the reuse of high quality IP. Reusing IP from others means that the chip designer does not need to redesign these elements. IP quality is the key to successful SoC designs, but it is one of the SoC's most challenging problems. The QIP metric allows both IP designers and IP integrators to measure the quality of an IP core against a checklist of critical issues. IP integrators incorporate IP cores into their own designs and deliver the final integrated circuit for an application; for example, the designer of an iPhone's main processor IC (an ARM architecture CPU) may integrate IP cores such as USB 2.0, a DSP and an MP4 decoder, so that those features can be easily embedded into the final IC. The QIP typically consists of interactive Microsoft Excel spreadsheets with sets of questions to be answered by the IP vendor. SIP quality measure framework Hong Kong Science and Technology Parks Corporation (HKSTP) and Hong Kong University of Science and Technology (HKUST) started to develop a SIP verification and quality measures framework in 2005, based on the QIP metric. The objective is to develop a technical framework for SIP quality measures and evaluation based on QIP. A third-party SIP evaluation service is provided by HKSTP, so that IP integrators can know the qua
https://en.wikipedia.org/wiki/Solovay%E2%80%93Kitaev%20theorem
In quantum information and computation, the Solovay–Kitaev theorem says, roughly, that if a set of single-qubit quantum gates generates a dense subset of SU(2), then that set can be used to approximate any desired quantum gate with a relatively short sequence of gates. This theorem is considered one of the most significant results in the field of quantum computation and was first announced by Robert M. Solovay in 1995 and independently proven by Alexei Kitaev in 1997. Michael Nielsen and Christopher M. Dawson have noted its importance in the field. A consequence of this theorem is that a quantum circuit of m constant-qubit gates can be approximated to ε error (in operator norm) by a quantum circuit of O(m log^c(m/ε)) gates from a desired finite universal gate set. By comparison, just knowing that a gate set is universal only implies that constant-qubit gates can be approximated by a finite circuit from the gate set, with no bound on its length. So, the Solovay–Kitaev theorem shows that this approximation can be made surprisingly efficient, thereby justifying that quantum computers need only implement a finite number of gates to gain the full power of quantum computation. Statement Let G be a finite set of elements in SU(2) containing its own inverses (so g ∈ G implies g⁻¹ ∈ G) and such that the group they generate is dense in SU(2). Consider some U ∈ SU(2). Then there is a constant c such that for any ε > 0, there is a sequence S of gates from G of length O(log^c(1/ε)) such that ‖S − U‖ ≤ ε. That is, S approximates U to operator norm error ε. Quantitative bounds The constant c can be made to be 3 + δ for any fixed δ > 0. However, there exist particular gate sets for which we can take c = 1, which makes the length of the gate sequence tight up to a constant factor. Proof idea The proof of the Solovay–Kitaev theorem proceeds by recursively constructing a gate sequence giving increasingly good approximations to U. Suppose we have an approximation Uₙ₋₁ such that ‖U − Uₙ₋₁‖ ≤ εₙ₋₁. Our goal is to find a sequence of gates approximating U to error εₙ, for εₙ < εₙ₋₁. By concatenating thi
https://en.wikipedia.org/wiki/List%20of%20convexity%20topics
This is a list of convexity topics, by Wikipedia page. Alpha blending - the process of combining a translucent foreground color with a background color, thereby producing a new blended color. This is a convex combination of two colors allowing for transparency effects in computer graphics. Barycentric coordinates - a coordinate system in which the location of a point of a simplex (a triangle, tetrahedron, etc.) is specified as the center of mass, or barycenter, of masses placed at its vertices. The coordinates are non-negative for points in the convex hull. Borsuk's conjecture - a conjecture about the number of pieces of smaller diameter required to cover a bounded body. Solved by Hadwiger for the case of smooth convex bodies. Bond convexity - a measure of the non-linear relationship between the price of a bond and changes in interest rates; the second derivative of the price of the bond with respect to interest rates (duration is the first derivative). A basic form of convexity in finance. Carathéodory's theorem (convex hull) - If a point x of R^d lies in the convex hull of a set P, there is a subset of P with d+1 or fewer points such that x lies in its convex hull. Choquet theory - an area of functional analysis and convex analysis concerned with measures with support on the extreme points of a convex set C. Roughly speaking, all vectors of C should appear as 'averages' of extreme points. Complex convexity - extends the notion of convexity to complex numbers. Convex analysis - the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization. Convex combination - a linear combination of points where all coefficients are non-negative and sum to 1. All convex combinations are within the convex hull of the given points. Convex and Concave - a print by Escher in which many of the structure's features can be seen as both convex shapes and concave impressions. Convex body - a compact convex set in a Euclide
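As a small illustration of the first entry (my own example, not from the list): alpha blending is a convex combination of two RGB colors, i.e. non-negative weights alpha and (1 - alpha) that sum to 1.

```python
def blend(foreground, background, alpha):
    """Convex combination alpha*fg + (1 - alpha)*bg, with 0 <= alpha <= 1."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(foreground, background))

print(blend((255, 0, 0), (0, 0, 255), 0.25))   # (63.75, 0.0, 191.25): mostly blue, some red
```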
https://en.wikipedia.org/wiki/Z-factor
The Z-factor is a measure of statistical effect size. It has been proposed for use in high-throughput screening (where it is also known as Z-prime), and commonly written as Z' to judge whether the response in a particular assay is large enough to warrant further attention. Background In high-throughput screens, experimenters often compare a large number (hundreds of thousands to tens of millions) of single measurements of unknown samples to positive and negative control samples. The particular choice of experimental conditions and measurements is called an assay. Large screens are expensive in time and resources. Therefore, prior to starting a large screen, smaller test (or pilot) screens are used to assess the quality of an assay, in an attempt to predict if it would be useful in a high-throughput setting. The Z-factor is an attempt to quantify the suitability of a particular assay for use in a full-scale, high-throughput screen. Definition The Z-factor is defined in terms of four parameters: the means (μp, μn) and standard deviations (σp, σn) of both the positive (p) and negative (n) controls. Given these values, the Z-factor is defined as: Z = 1 − 3(σp + σn) / |μp − μn|. In practice, the Z-factor is estimated from the sample means and sample standard deviations. Interpretation The following interpretations for the Z-factor are commonly used: a Z-factor of 1 is ideal; values between 0.5 and 1 indicate an excellent assay; values between 0 and 0.5 indicate a marginal assay; and values of 0 or below indicate too much overlap between the controls for the assay to be useful. Note that by the standards of many types of experiments, a zero Z-factor would suggest a large effect size, rather than a borderline useless result as suggested above. For example, if σp=σn=1, then μp=6 and μn=0 gives a zero Z-factor. But for normally-distributed data with these parameters, the probability that the positive control value would be less than the negative control value is less than 1 in 10^5. Extreme conservatism is used in high throughput screening due to the large number of tests performed. Limitations The constant factor 3 in the definition of the Z-factor is motivated by the normal distribution, for which more than 99% of values
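A minimal sketch (my own, with simulated control data) estimating the Z-factor from positive- and negative-control measurements using sample means and sample standard deviations:

```python
import numpy as np

def z_factor(positive, negative):
    """Z = 1 - 3*(sd_p + sd_n) / |mu_p - mu_n|, estimated from samples."""
    mu_p, mu_n = np.mean(positive), np.mean(negative)
    sd_p, sd_n = np.std(positive, ddof=1), np.std(negative, ddof=1)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)

rng = np.random.default_rng(1)
pos = rng.normal(100, 5, size=96)      # simulated positive-control wells
neg = rng.normal(20, 5, size=96)       # simulated negative-control wells
print(z_factor(pos, neg))              # ~0.6: an excellent assay by the usual rule of thumb
```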
https://en.wikipedia.org/wiki/Board%20support%20package
In embedded systems, a board support package (BSP) is the layer of software containing hardware-specific boot firmware, device drivers and other routines that allow a given embedded operating system, for example a real-time operating system (RTOS), to function in a given hardware environment (a motherboard), integrated with the embedded operating system. Software Third-party hardware developers who wish to support a given embedded operating system must create a BSP that allows that embedded operating system to run on their platform. In most cases, the embedded operating system image and software license, the BSP containing it, and the hardware are bundled together by the hardware vendor. BSPs are typically customizable, allowing the user to specify which drivers and routines should be included in the build based on their selection of hardware and software options. For instance, a particular single-board computer might be paired with several peripheral chips; in that case the BSP might include drivers for the supported peripheral chips, and when building the BSP image the user would specify which peripheral drivers to include based on their choice of hardware. Some suppliers also provide a root file system, a toolchain for building programs to run on the embedded system, and utilities to configure the device (while running) along with the BSP. Many embedded operating system providers provide template BSPs, developer assistance, and test suites to help BSP developers set up an embedded operating system on a new hardware platform. History The term BSP has been in use since 1981 when Hunter & Ready, the developers of the Versatile Real-Time Executive (VRTX), first coined the term to describe the hardware-dependent software needed to run VRTX on a specific hardware platform. Since the 1980s, it has been in wide use throughout the industry. Virtually all RTOS providers now use the term BSP. Example The Wind River Systems board support package for the ARM Integrator 92
https://en.wikipedia.org/wiki/Football%20Live
Football Live was the name given to the project and computer system created and utilised by PA Sport to collect real-time statistics from major English and Scottish football matches and distribute them to most leading media organisations. At the time of its operation, more than 99% of all football statistics displayed across print, internet, radio and TV media outlets would have been collected via Football Live. Background Prior to the implementation of Football Live, the collection process consisted of a news reporter or press officer at each club telephoning the Press Association, relaying information on teams, goals, and the half-time and full-time scores. The basis for Football Live was to have a representative of the Press Association (FBA - Football Analyst) at every ground. Throughout the whole match they would stay on an open line on a mobile phone to a Sports Information Processor (SIP), constantly relaying in real time statistical information for every event: shot, foul, free kick, goal, cross, goal kick and offside. This information would be entered in real time and passed on to media customers. The Football Live project was in use from the 2001/02 season until the service was taken over by Opta in 2013/14. Commercial Customers The most famous use for the Football Live data was for the Vidiprinter services on BBC and Sky Sports, allowing goals to be viewed on TV screens within 20 seconds of the event happening. League competitions From its inception in the 2001/02 season, the following leagues and competitions were fully covered by Football Live: English Premier League, Championship, League One, League Two, Conference, Scottish Premier League, English FA Cup, English Football League Cup, World Cup, European Championships, Champions League, Europa League. Football Analysts (FBA's) During the early development stages, the initial idea was to employ ex-referees to act as Football Analysts, but this was soon dismissed in favour of ex-professional footballers. The most famous of these were Brendon O
https://en.wikipedia.org/wiki/Versit%20Consortium
The versit Consortium was a multivendor initiative founded by Apple Computer, AT&T, IBM and Siemens in the early 1990s in order to create Personal Data Interchange (PDI) technology: open specifications for exchanging personal data over the Internet, wired and wireless connectivity, and Computer Telephony Integration (CTI). The Consortium started a number of projects to deliver open specifications aimed at creating industry standards. Computer Telephony Integration One of the most ambitious projects of the Consortium was the Versit CTI Encyclopedia (VCTIE), a 3,000-page, six-volume set of specifications defining how computer and telephony systems are to interact and become interoperable. The Encyclopedia was built on existing technologies and specifications such as ECMA's call control specifications, TSAPI, and the industry expertise of the core technical team. The volumes are: Volume 1, Concepts & Terminology; Volume 2, Configurations & Landscape; Volume 3, Telephony Feature Set; Volume 4, Call Flow Scenarios; Volume 5, CTI Protocols; Volume 6, Versit TSAPI. Appendices include the Versit TSAPI header file, the Protocol 1 ASN.1 description, the Protocol 2 ASN.1 description, the Versit Server Mapper Interface header file, and the Versit TSDI header file. The core Versit CTI Encyclopedia technical team was composed of David H. Anderson and Marcus W. Fath from IBM, Frédéric Artru and Michael Bayer from Apple Computer, James L. Knight and Steven Rummel from AT&T (then Lucent Technologies), Tom Miller from Siemens, and consultants Ellen Feaheny and Charles Hudson. Upon completion, the Versit CTI Encyclopedia was transferred to the ECTF and has been adopted in the form of ECTF C.001. This model represents the basis for the ECTF's call control efforts. Though the Versit CTI Encyclopedia ended up influencing many products, only one fully compliant implementation of the specifications was brought to market: Odisei, a French company founded by team member Frédéric Artru, developed the IntraSw
https://en.wikipedia.org/wiki/Comb%20filter
In signal processing, a comb filter is a filter implemented by adding a delayed version of a signal to itself, causing constructive and destructive interference. The frequency response of a comb filter consists of a series of regularly spaced notches in between regularly spaced peaks (sometimes called teeth) giving the appearance of a comb. Comb filters exist in two forms, feedforward and feedback, which refer to the direction in which signals are delayed before they are added to the input. Comb filters may be implemented in discrete time or continuous time forms which are very similar. Applications Comb filters are employed in a variety of signal processing applications, including: Cascaded integrator–comb (CIC) filters, commonly used for anti-aliasing during interpolation and decimation operations that change the sample rate of a discrete-time system. 2D and 3D comb filters implemented in hardware (and occasionally software) in PAL and NTSC analog television decoders, to reduce artifacts such as dot crawl. Audio signal processing, including delay, flanging, physical modelling synthesis and digital waveguide synthesis. If the delay is set to a few milliseconds, a comb filter can model the effect of acoustic standing waves in a cylindrical cavity or in a vibrating string. In astronomy, the astro-comb promises to increase the precision of existing spectrographs by nearly a hundredfold. In acoustics, comb filtering can arise as an unwanted artifact. For instance, two loudspeakers playing the same signal at different distances from the listener create a comb filtering effect on the audio. In any enclosed space, listeners hear a mixture of direct sound and reflected sound. The reflected sound takes a longer, delayed path compared to the direct sound, and a comb filter is created where the two mix at the listener. Similarly, comb filtering may result from mono mixing of multiple mics, hence the 3:1 rule of thumb that neighboring mics should be separated at least t
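A minimal feedforward comb filter sketch (my own, with example sample rate and delay values): y[n] = x[n] + a·x[n−K], whose magnitude response has the regularly spaced notches and peaks described above.

```python
import numpy as np

def feedforward_comb(x, delay_samples, alpha=0.9):
    """Add a delayed, scaled copy of the signal to itself."""
    y = x.astype(float).copy()
    y[delay_samples:] += alpha * x[:-delay_samples]
    return y

fs = 48_000                       # sample rate, Hz (example value)
delay_ms = 5.0                    # a few milliseconds, as in flanging/echo effects
K = int(fs * delay_ms / 1000)     # delay in samples
noise = np.random.default_rng(0).standard_normal(fs)
filtered = feedforward_comb(noise, K)
# The magnitude response |1 + alpha*exp(-j*2*pi*f*K/fs)| has notches at odd
# multiples of fs/(2*K) and peaks ("teeth") at integer multiples of fs/K.
```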
https://en.wikipedia.org/wiki/Electromagnetic%20field%20solver
Electromagnetic field solvers (or sometimes just field solvers) are specialized programs that solve (a subset of) Maxwell's equations directly. They form a part of the field of electronic design automation, or EDA, and are commonly used in the design of integrated circuits and printed circuit boards. They are used when a solution from first principles or the highest accuracy is required. Introduction The extraction of parasitic circuit models is essential for various aspects of physical verification such as timing, signal integrity, substrate coupling, and power grid analysis. As circuit speeds and densities have increased, the need has grown to account accurately for parasitic effects for more extensive and more complicated interconnect structures. In addition, the electromagnetic complexity has grown as well, from resistance and capacitance to inductance, and now even full electromagnetic wave propagation. This increase in complexity has also grown for the analysis of passive devices such as integrated inductors. Electromagnetic behavior is governed by Maxwell's equations, and all parasitic extraction requires solving some form of Maxwell's equations. That form may be a simple analytic parallel plate capacitance equation or may involve a full numerical solution for a complex 3D geometry with wave propagation. In layout extraction, analytic formulas for simple or simplified geometry can be used where accuracy is less important than speed. Still, when the geometric configuration is not simple, and accuracy demands do not allow simplification, a numerical solution of the appropriate form of Maxwell's equations must be employed. The appropriate form of Maxwell's equations is typically solved by one of two classes of methods. The first uses a differential form of the governing equations and requires the discretization (meshing) of the entire domain in which the electromagnetic fields reside. Two of the most common approaches in this first class are the finite diffe
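As a small example of the "simple analytic formula" end of the spectrum mentioned above (my own illustration, with assumed geometry values): parallel-plate capacitance C = ε0·εr·A/d, which ignores the fringing fields that a full field solver would capture.

```python
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Analytic parallel-plate capacitance, neglecting fringing fields."""
    return EPS0 * eps_r * area_m2 / gap_m

# 100 um x 100 um plates separated by 1 um of SiO2 (eps_r ~ 3.9; example values)
print(parallel_plate_capacitance(100e-6 * 100e-6, 1e-6, 3.9))  # ~3.5e-13 F
```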
https://en.wikipedia.org/wiki/Common%20normal%20%28robotics%29
In robotics the common normal of two non-intersecting joint axes is a line perpendicular to both axes. The common normal can be used to characterize robot arm links, by using the "common normal distance" and the angle between the link axes in a plane perpendicular to the common normal. When two consecutive joint axes are parallel, the common normal is not unique and an arbitrary common normal may be used, usually one that passes through the center of a coordinate system. The common normal is widely used in the representation of the frames of reference for robot joints and links, and the selection of minimal representations with the Denavit–Hartenberg parameters. See also Denavit–Hartenberg parameters Forward kinematics Robotic arm
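An illustrative computation (my own, not from the article): for two non-intersecting, non-parallel joint axes, each given by a point and a unit direction vector, the common-normal direction is the normalized cross product of the two axis directions, and the common normal distance is the projection of the inter-point vector onto it.

```python
import numpy as np

def common_normal(p1, d1, p2, d2):
    """Return the unit common-normal direction and the distance along it."""
    n = np.cross(d1, d2)                 # perpendicular to both axes
    n = n / np.linalg.norm(n)            # zero (undefined) if the axes are parallel
    distance = abs(np.dot(p2 - p1, n))   # common normal distance between the axes
    return n, distance

# Two skew axes: one along x through the origin, one along y offset by 2 in z.
n, a = common_normal(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                     np.array([0.0, 0.0, 2.0]), np.array([0.0, 1.0, 0.0]))
print(n, a)   # -> [0. 0. 1.] 2.0
```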
https://en.wikipedia.org/wiki/Computer%20network%20diagram
A computer network diagram is a schematic depicting the nodes and connections amongst nodes in a computer network or, more generally, any telecommunications network. Computer network diagrams form an important part of network documentation. Symbolization Readily identifiable icons are used to depict common network appliances, e.g. routers, and the style of lines between them indicates the type of connection. Clouds are used to represent networks external to the one pictured for the purposes of depicting connections between internal and external devices, without indicating the specifics of the outside network. For example, in the hypothetical local area network pictured to the right, three personal computers and a server are connected to a switch; the server is further connected to a printer and a gateway router, which is connected via a WAN link to the Internet. Depending on whether the diagram is intended for formal or informal use, certain details may be lacking and must be determined from context. For example, the sample diagram does not indicate the physical type of connection between the PCs and the switch, but since a modern LAN is depicted, Ethernet may be assumed. If the same style of line was used in a WAN (wide area network) diagram, however, it may indicate a different type of connection. At different scales diagrams may represent various levels of network granularity. At the LAN level, individual nodes may represent individual physical devices, such as hubs or file servers, while at the WAN level, individual nodes may represent entire cities. In addition, when the scope of a diagram crosses the common LAN/MAN/WAN boundaries, representative hypothetical devices may be depicted instead of showing all actually existing nodes. For example, if a network appliance is intended to be connected through the Internet to many end-user mobile devices, only a single such device may be depicted for the purposes of showing the general relationship between the ap
https://en.wikipedia.org/wiki/Network%20on%20a%20chip
A network on a chip or network-on-chip (NoC) is a network-based communications subsystem on an integrated circuit ("microchip"), most typically between modules in a system on a chip (SoC). The modules on the IC are typically semiconductor IP cores implementing various functions of the computer system, and are designed to be modular in the sense of network science. The network on chip is a router-based packet switching network between SoC modules. NoC technology applies the theory and methods of computer networking to on-chip communication and brings notable improvements over conventional bus and crossbar communication architectures. Networks-on-chip come in many network topologies, many of which are still experimental as of 2018. In the early 2000s, researchers started to propose packet-switched on-chip interconnection networks to address the scalability issues of bus-based designs. This line of research proposed routing data packets instead of routing dedicated wires, and the concept of a "network on chip" was proposed in 2002. NoCs improve the scalability of systems-on-chip and the power efficiency of complex SoCs compared to other communication subsystem designs. They are an emerging technology, with projections for large growth in the near future as multicore computer architectures become more common. Structure NoCs can span synchronous and asynchronous clock domains, known as clock domain crossing, or use unclocked asynchronous logic. NoCs support globally asynchronous, locally synchronous electronics architectures, allowing each processor core or functional unit on the System-on-Chip to have its own clock domain. Architectures NoC architectures typically model sparse small-world networks (SWNs) and scale-free networks (SFNs) to limit the number, length, area and power consumption of interconnection wires and point-to-point connections. Topology The topology is the first fundamental aspect of NoC design
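As a hedged illustration of one common NoC topology and routing policy (not any specific product), here is a minimal sketch of dimension-order "XY" routing on a 2D mesh; the mesh size and coordinates are arbitrary.

```python
# Sketch: dimension-order ("XY") routing on a 2D-mesh network-on-chip.
# A packet is routed along the X dimension first, then along Y.
def xy_route(src, dst):
    """Return the list of (x, y) routers a packet visits from src to dst."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:          # route in X first...
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:          # ...then in Y
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (3, 2)))
# [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
```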
https://en.wikipedia.org/wiki/Conway%20chained%20arrow%20notation
Conway chained arrow notation, created by mathematician John Horton Conway, is a means of expressing certain extremely large numbers. It is simply a finite sequence of positive integers separated by rightward arrows, e.g. 2 → 3 → 4 → 5 → 6. As with most combinatorial notations, the definition is recursive. In this case the notation eventually resolves to being the leftmost number raised to some (usually enormous) integer power. Definition and overview A "Conway chain" is defined as follows: Any positive integer is a chain of length 1. A chain of length n, followed by a right-arrow → and a positive integer, together form a chain of length n + 1. Any chain represents an integer, according to the six rules below. Two chains are said to be equivalent if they represent the same integer. Let p, q, r denote positive integers and let X denote the unchanged remainder of the chain. Then: An empty chain (or a chain of length 0) is equal to 1. The chain p represents the number p. The chain p → q represents the number p^q. The chain p → q → r represents the number p ↑^r q (see Knuth's up-arrow notation). The chain X → p → 1 represents the same number as the chain X → p. Else, the chain X → p → (q + 1) represents the same number as the chain X → (X → (p − 1) → (q + 1)) → q. Properties A chain evaluates to a perfect power of its first number. Therefore, 1 → Y is equal to 1. X → 1 → Y is equivalent to X. 2 → 2 → Y is equal to 4. X → 2 → 2 is equivalent to X → (X), where (X) is the value of the chain X (not to be confused with the chain X → X). Interpretation One must be careful to treat an arrow chain as a whole. Arrow chains do not describe the iterated application of a binary operator. Whereas chains of other infixed symbols (e.g. 3 + 4 + 5 + 6 + 7) can often be considered in fragments (e.g. (3 + 4) + 5 + (6 + 7)) without a change of meaning (see associativity), or at least can be evaluated step by step in a prescribed order, e.g. 3^4^5^6^7 from right to left, that is not so with Conway's arrow chains. For example: 2 → 3 → 2 = 2 ↑↑ 3 = 16, whereas (2 → 3) → 2 = 8^2 = 64 and 2 → (3 → 2) = 2^9 = 512 — three different values. The sixth definition rule is the core: A chain of 4 or more elements ending with 2 or higher becomes a chain of the same length with a (usually vastly) increased penult
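A direct, hedged transcription of these rules into Python makes the recursion concrete; it only terminates for the tiniest chains, which is exactly the point of the notation.

```python
# Sketch: evaluate a Conway chain by the rules above. The p == 1 branch applies
# the property X -> 1 -> q = X so the recursion never needs a zero element.
# Anything beyond chains like (2, 2, 2) or (3, 3, 2) is hopelessly large.
def conway(*chain):
    if len(chain) == 0:
        return 1                      # empty chain
    if len(chain) == 1:
        return chain[0]               # a single number represents itself
    if len(chain) == 2:
        return chain[0] ** chain[1]   # p -> q is p**q
    *rest, p, q = chain
    if q == 1:
        return conway(*rest, p)       # a trailing 1 can be dropped
    if p == 1:
        return conway(*rest)          # X -> 1 -> q collapses to X
    # general case: X -> p -> q  =  X -> (X -> (p - 1) -> q) -> (q - 1)
    return conway(*rest, conway(*rest, p - 1, q), q - 1)

print(conway(2, 2, 2))   # 4
print(conway(3, 3, 2))   # 7625597484987  (= 3^3^3)
```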
https://en.wikipedia.org/wiki/Legendre%27s%20constant
Legendre's constant is a mathematical constant occurring in a formula constructed by Adrien-Marie Legendre to approximate the behavior of the prime-counting function π(x). The value that corresponds precisely to its asymptotic behavior is now known to be 1. Examination of available numerical data for known values of π(x) led Legendre to an approximating formula. Legendre constructed in 1808 the formula π(x) ≈ x / (ln(x) − B(x)), where B(x) tends to approximately 1.08366, as giving an approximation of π(x) with a "very satisfying precision". Today, one defines the value of B such that π(x) ~ x / (ln(x) − B), which is solved by putting B = lim_{x→∞} (ln(x) − x/π(x)), provided that this limit exists. Not only is it now known that the limit exists, but also that its value is equal to 1, somewhat less than Legendre's 1.08366. Regardless of its exact value, the existence of the limit implies the prime number theorem. Pafnuty Chebyshev proved in 1849 that if the limit B exists, it must be equal to 1. An easier proof was given by Pintz in 1980. It is an immediate consequence of the prime number theorem, under the precise form π(x) = Li(x) + O(x·e^(−a√(ln x))) with an explicit estimate of the error term (for some positive constant a, where O(…) is the big O notation), as proved in 1899 by Charles de La Vallée Poussin, that B indeed is equal to 1. (The prime number theorem had been proved in 1896, independently by Jacques Hadamard and La Vallée Poussin, but without any estimate of the involved error term). Being evaluated to such a simple number has made the term Legendre's constant mostly only of historical value, with it often (technically incorrectly) being used to refer to Legendre's first guess 1.08366... instead.
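A small numerical check of the defining limit, using a simple sieve of my own choosing; the values of x are far too small for the quantity to approach 1, which illustrates how slowly the limit is attained.

```python
# Sketch: compute B(x) = ln(x) - x/pi(x) for a few x with a basic sieve.
# At these scales the value sits near Legendre's 1.08366; the drift to the
# true limit 1 is extremely slow.
import math

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

for x in (10**4, 10**5, 10**6):
    pi_x = len(primes_up_to(x))
    print(x, round(math.log(x) - x / pi_x, 5))
```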
https://en.wikipedia.org/wiki/List%20of%20misnamed%20theorems
This is a list of misnamed theorems in mathematics. It includes theorems (and lemmas, corollaries, conjectures, laws, and perhaps even the odd object) that are well known in mathematics, but which are not named for the originator. That is, these items on this list illustrate Stigler's law of eponymy (which is not, of course, due to Stephen Stigler, who credits Robert K Merton). == Applied mathematics == Benford's law. This was first stated in 1881 by Simon Newcomb, and rediscovered in 1938 by Frank Benford. The first rigorous formulation and proof seems to be due to Ted Hill in 1988.; see also the contribution by Persi Diaconis. Bertrand's ballot theorem. This result concerning the probability that the winner of an election was ahead at each step of ballot counting was first published by W. A. Whitworth in 1878, but named after Joseph Louis François Bertrand who rediscovered it in 1887. A common proof uses André's reflection method, though the proof by Désiré André did not use any reflections. Algebra Burnside's lemma. This was stated and proved without attribution in Burnside's 1897 textbook, but it had previously been discussed by Augustin Cauchy, in 1845, and by Georg Frobenius in 1887. Cayley–Hamilton theorem. The theorem was first proved in the easy special case of 2×2 matrices by Cayley, and later for the case of 4×4 matrices by Hamilton. But it was only proved in general by Frobenius in 1878. Hölder's inequality. This inequality was first established by Leonard James Rogers, and published in 1888. Otto Hölder discovered it independently, and published it in 1889. Marden's theorem. This theorem relating the location of the zeros of a complex cubic polynomial to the zeros of its derivative was named by Dan Kalman after Kalman read it in a 1966 book by Morris Marden, who had first written about it in 1945. But, as Marden had himself written, its original proof was by Jörg Siebeck in 1864. Pólya enumeration theorem. This was proven in 1927 in a difficult pape
https://en.wikipedia.org/wiki/Einstein%20notation
In mathematics, especially the usage of linear algebra in mathematical physics, Einstein notation (also known as the Einstein summation convention or Einstein summation notation) is a notational convention that implies summation over a set of indexed terms in a formula, thus achieving brevity. As part of mathematics it is a notational subset of Ricci calculus; however, it is often used in physics applications that do not distinguish between tangent and cotangent spaces. It was introduced to physics by Albert Einstein in 1916. Introduction Statement of convention According to this convention, when an index variable appears twice in a single term and is not otherwise defined (see Free and bound variables), it implies summation of that term over all the values of the index. So where the indices can range over the set {1, 2, 3}, the sum y = c_1 x^1 + c_2 x^2 + c_3 x^3 is simplified by the convention to: y = c_i x^i. The upper indices are not exponents but are indices of coordinates, coefficients or basis vectors. That is, in this context x^2 should be understood as the second component of x rather than the square of x (this can occasionally lead to ambiguity). The upper index position in x^i is because, typically, an index occurs once in an upper (superscript) and once in a lower (subscript) position in a term (see below). Typically, (x^1, x^2, x^3) would be equivalent to the traditional (x, y, z). In general relativity, a common convention is that the Greek alphabet is used for space and time components, where indices take on values 0, 1, 2, or 3 (frequently used letters are μ, ν, ...), and the Latin alphabet is used for spatial components only, where indices take on values 1, 2, or 3 (frequently used letters are i, j, ...). In general, indices can range over any indexing set, including an infinite set. This should not be confused with a typographically similar convention used to distinguish between tensor index notation and the closely related but distinct basis-independent abstract index notation. An index that is summed over is a summation index, in this case "i". I
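As a hedged aside, NumPy's einsum function takes the same convention literally, which gives a compact executable illustration; the matrix and vector values are arbitrary.

```python
# Sketch: y_i = A_ij x_j with the repeated index j summed implicitly.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
x = np.array([1.0, 0.0, -1.0])

y = np.einsum("ij,j->i", A, x)   # repeated index j is summed over
print(y)                         # [-2. -2. -2.]
print(A @ x)                     # same result via the ordinary matrix product
```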
https://en.wikipedia.org/wiki/LOBSTER
LOBSTER was a European network monitoring system, based on passive monitoring of traffic on the internet. Its functions were to gather traffic information as a basis for improving internet performance, and to detect security incidents. Objectives To build an advanced pilot European Internet traffic monitoring infrastructure based on passive network monitoring sensors. To develop novel performance and security monitoring applications, enabled by the availability of the passive network monitoring infrastructure, and to develop the appropriate data anonymisation tools for prohibiting unauthorised access or tampering of the original traffic data. History The project originated from SCAMPI, a European project active in 2004–5, aiming to develop a scalable monitoring platform for the Internet. LOBSTER was funded by the European Commission and ceased in 2007. It fed into "IST 2.3.5 Research Networking testbeds", which aimed to contribute to improving internet infrastructure in Europe. 36 LOBSTER sensors were deployed in nine countries across Europe by several organisations. At any one time the system could monitor traffic across 2.3 million IP addresses. It was claimed that more than 400,000 Internet attacks were detected by LOBSTER. Passive monitoring LOBSTER was based on passive network traffic monitoring. Instead of collecting flow-level traffic summaries or actively probing the network, passive network monitoring records all IP packets (both headers and payloads) that flow through the monitored link. This enables passive monitoring methods to record complete information about the actual traffic of the network, which allows for tackling monitoring problems more accurately compared to methods based on flow-level statistics or active monitoring. The passive monitoring applications running on the sensors were developed on top of MAPI (Monitoring Application Programming Interface), an expressive programming interface for building network monitoring applications, deve
https://en.wikipedia.org/wiki/Empirical%20software%20engineering
Empirical software engineering (ESE) is a subfield of software engineering (SE) research that uses empirical research methods to study and evaluate an SE phenomenon of interest. The phenomenon may refer to software development tools/technology, practices, processes, policies, or other human and organizational aspects. ESE has roots in experimental software engineering, but as the field has matured the need and acceptance for both quantitative and qualitative research has grown. Today, common research methods used in ESE for primary and secondary research are the following: Primary research (experimentation, case study research, survey research, simulations in particular software Process simulation) Secondary research methods (Systematic reviews, Systematic mapping studies, rapid reviews, tertiary review) Teaching empirical software engineering Some comprehensive books for students, professionals and researchers interested in ESE are available. Research community Journals, conferences, and communities devoted specifically to ESE: Empirical Software Engineering: An International Journal International Symposium on Empirical Software Engineering and Measurement International Software Engineering Research Network (ISERN)
https://en.wikipedia.org/wiki/Gating%20%28telecommunication%29
In telecommunication, the term gating has the following meanings: The process of selecting only those portions of a wave between specified time intervals or between specified amplitude limits. The controlling of signals by means of combinational logic elements. A process in which a predetermined set of conditions, when established, permits a second process to occur. Telecommunications engineering Signal processing
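A minimal, hedged sketch of the first meaning — selecting only the portions of a sampled waveform that lie between specified amplitude limits; the thresholds and the test signal are arbitrary.

```python
# Sketch: a simple amplitude gate that passes samples inside [low, high]
# (by magnitude) and zeroes the rest.
import numpy as np

def amplitude_gate(signal, low, high):
    """Zero out samples whose magnitude falls outside [low, high]."""
    signal = np.asarray(signal, dtype=float)
    keep = (np.abs(signal) >= low) & (np.abs(signal) <= high)
    return np.where(keep, signal, 0.0)

t = np.linspace(0.0, 1.0, 8)
x = np.sin(2 * np.pi * t)
print(amplitude_gate(x, low=0.3, high=1.0))
```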
https://en.wikipedia.org/wiki/Apodization
In signal processing, apodization (from Greek "removing the foot") is the modification of the shape of a mathematical function. The function may represent an electrical signal, an optical transmission, or a mechanical structure. In optics, it is primarily used to remove Airy disks caused by diffraction around an intensity peak, improving the focus. Apodization in electronics Apodization in signal processing The term apodization is used frequently in publications on Fourier-transform infrared (FTIR) signal processing. An example of apodization is the use of the Hann window in the fast Fourier transform analyzer to smooth the discontinuities at the beginning and end of the sampled time record. Apodization in digital audio An apodizing filter can be used in digital audio processing instead of the more common brick-wall filters, in order to reduce the pre- and post-ringing that the latter introduces. Apodization in mass spectrometry During oscillation within an Orbitrap, ion transient signal may not be stable until the ions settle into their oscillations. Toward the end, subtle ion collisions have added up to cause noticeable dephasing. This presents a problem for the Fourier transformation, as it averages the oscillatory signal across the length of the time-domain measurement. The software allows “apodization”, the removal of the front and back section of the transient signal from consideration in the FT calculation. Thus, apodization improves the resolution of the resulting mass spectrum. Another way to improve the quality of the transient is to wait to collect data until ions have settled into stable oscillatory motion within the trap. Apodization in nuclear magnetic resonance spectroscopy Apodization is applied to NMR signals before discrete Fourier Transformation. Typically, NMR signals are truncated due to time constraints (indirect dimension) or to obtain a higher signal-to-noise ratio. In order to reduce truncation artifacts, the signals are subjected
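A minimal, hedged example of the FFT use described above — multiplying a sampled record by a Hann window before transforming it; the sample rate and tone frequency are arbitrary choices.

```python
# Sketch: Hann-window apodization before an FFT to suppress spectral leakage
# from the discontinuity at the edges of the sampled record.
import numpy as np

fs = 1000.0                        # sample rate, Hz (assumed)
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 123.4 * t)  # a tone that does not fall on an FFT bin

window = np.hanning(len(x))        # Hann window as the apodization function
spectrum_raw = np.abs(np.fft.rfft(x))
spectrum_apod = np.abs(np.fft.rfft(x * window))

# The windowed spectrum has much lower sidelobes (less leakage) at the cost
# of a slightly wider main lobe.
print(spectrum_raw.argmax(), spectrum_apod.argmax())   # both near bin 126
```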
https://en.wikipedia.org/wiki/Phase%20response
In signal processing, phase response is the relationship between the phase of a sinusoidal input and the output signal passing through any device that accepts input and produces an output signal, such as an amplifier or a filter. Amplifiers, filters, and other devices are often categorized by their amplitude and/or phase response. The amplitude response is the ratio of output amplitude to input amplitude, usually expressed as a function of frequency. Similarly, the phase response is the phase of the output with the input taken as reference; the input is defined as having zero phase. A phase response is not limited to lying between 0° and 360°, as phase can accumulate to any amount over time. See also Group delay and phase delay
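A short, hedged sketch of how a phase response can be computed in practice, using SciPy's freqz on an illustrative two-tap FIR filter; the filter itself is only an example.

```python
# Sketch: amplitude and phase response of a simple 2-tap moving-average FIR.
import numpy as np
from scipy import signal

b = [0.5, 0.5]                       # low-pass FIR (illustrative)
w, h = signal.freqz(b, worN=8)       # complex frequency response

amplitude_db = 20 * np.log10(np.abs(h))
phase_deg = np.degrees(np.angle(h))  # phase of the output relative to the input

for wi, a, p in zip(w, amplitude_db, phase_deg):
    print(f"w = {wi:.2f} rad/sample  |H| = {a:6.2f} dB  phase = {p:7.2f} deg")
```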
https://en.wikipedia.org/wiki/Cypress%20PSoC
PSoC (programmable system on a chip) is a family of microcontroller integrated circuits by Cypress Semiconductor. These chips include a CPU core and mixed-signal arrays of configurable integrated analog and digital peripherals. History In 2002, Cypress began shipping commercial quantities of the PSoC 1. To promote the PSoC, Cypress sponsored a "PSoC Design Challenge" in Circuit Cellar magazine in 2002 and 2004. In April 2013, Cypress released the fourth generation, PSoC 4. The PSoC 4 features a 32-bit ARM Cortex-M0 CPU, with programmable analog blocks (operational amplifiers and comparators), programmable digital blocks (PLD-based UDBs), programmable routing and flexible GPIO (route any function to any pin), a serial communication block (for SPI, UART, I²C), a timer/counter/PWM block and more. PSoC is used in devices as simple as Sonicare toothbrushes and Adidas sneakers, and as complex as the TiVo set-top box. One PSoC implements capacitive sensing for the touch-sensitive scroll wheel on the Apple iPod click wheel. In 2014, Cypress extended the PSoC 4 family by integrating a Bluetooth Low Energy radio along with a PSoC 4 Cortex-M0-based SoC in a single, monolithic die. In 2016, Cypress released PSoC 4 S-Series, featuring ARM Cortex-M0+ CPU. Overview A PSoC integrated circuit is composed of a core, configurable analog and digital blocks, and programmable routing and interconnect. The configurable blocks in a PSoC are the biggest difference from other microcontrollers. PSoC has three separate memory spaces: paged SRAM for data, Flash memory for instructions and fixed data, and I/O registers for controlling and accessing the configurable logic blocks and functions. The device is created using SONOS technology. PSoC resembles an ASIC: blocks can be assigned a wide range of functions and interconnected on-chip. Unlike an ASIC, there is no special manufacturing process required to create the custom configuration — only startup code that is created by Cypress'
https://en.wikipedia.org/wiki/ION%20LMD
The ION LMD system is a laser microdissection system and the name of a device based on the gravity-assisted microdissection (GAM) method. This non-contact laser microdissection system makes it possible to isolate cells for further genetic analysis. It was the first laser microdissection system developed in Asia. History A prototype of the ION LMD system was developed in 2004. The first generation of ION LMD followed in 2005, the second generation (the so-called G2) in 2008, and the third generation (the so-called ION LMD Pro) in 2012. Manufacturer JungWoo F&B was founded in 1994, and offers various factory automation products for clients in the semiconductor, consumer electronics, LCD, automotive manufacturing and shipbuilding industries. In 2003, the company entered the bio-mechanics business for the medical laboratory market and developed the ION LMD system, which is used in cancer research. Awards The ION LMD system has received several awards. 2005 Excellent Machine by Ministry of Commerce, Industry and Energy, Republic of Korea 2005 Best Medical Device by Korean Medical Association 2006 New Excellent Product by Ministry of Commerce, Industry and Energy, Republic of Korea
https://en.wikipedia.org/wiki/Systema%20Naturae
Systema Naturae (originally in Latin written with the ligature æ) is one of the major works of the Swedish botanist, zoologist and physician Carl Linnaeus (1707–1778) and introduced the Linnaean taxonomy. Although the system, now known as binomial nomenclature, was partially developed by the Bauhin brothers, Gaspard and Johann, Linnaeus was first to use it consistently throughout his book. The first edition was published in 1735. The full title of the 10th edition (1758), which was the most important one, was Systema naturae per regna tria naturae, secundum classes, ordines, genera, species, cum characteribus, differentiis, synonymis, locis, or translated: "System of nature through the three kingdoms of nature, according to classes, orders, genera and species, with characters, differences, synonyms, places". The tenth edition of this book (1758) is considered the starting point of zoological nomenclature. In 1766–1768 Linnaeus published the much enhanced 12th edition, the last under his authorship. A further enhanced work in the same style, also titled "Systema Naturae", was published by Johann Friedrich Gmelin between 1788 and 1793. Since at least the early 20th century, zoologists have commonly recognized this as the last edition belonging to this series. Overview Linnaeus (later known as "Carl von Linné", after his ennoblement in 1761) published the first edition of Systema Naturae in the year 1735, during his stay in the Netherlands. As was customary for the scientific literature of its day, the book was published in Latin. In it, he outlined his ideas for the hierarchical classification of the natural world, dividing it into the animal kingdom (Regnum animale), the plant kingdom (Regnum vegetabile), and the "mineral kingdom" (Regnum lapideum). Linnaeus's Systema Naturae lists only about 10,000 species of organisms, of which about 6,000 are plants and 4,236 are animals. According to the historian of botany William T. Stearn, "Even in 1753 he believed that the number of species of plants in the whole world would hardly reach 10,000; in his whole career he named about 7,700 species of flowering plants." Linnaeus developed his classification of the plant kingdom in an attempt to
https://en.wikipedia.org/wiki/Taste
The gustatory system or sense of taste is the sensory system that is partially responsible for the perception of taste (flavor). Taste is the perception stimulated when a substance in the mouth reacts chemically with taste receptor cells located on taste buds in the oral cavity, mostly on the tongue. Taste, along with the sense of smell and trigeminal nerve stimulation (registering texture, pain, and temperature), determines flavors of food and other substances. Humans have taste receptors on taste buds and other areas, including the upper surface of the tongue and the epiglottis. The gustatory cortex is responsible for the perception of taste. The tongue is covered with thousands of small bumps called papillae, which are visible to the naked eye. Within each papilla are hundreds of taste buds. The exception to this is the filiform papillae that do not contain taste buds. There are between 2000 and 5000 taste buds that are located on the back and front of the tongue. Others are located on the roof, sides and back of the mouth, and in the throat. Each taste bud contains 50 to 100 taste receptor cells. Taste receptors in the mouth sense the five basic tastes: sweetness, sourness, saltiness, bitterness, and savoriness (also known as savory or umami). Scientific experiments have demonstrated that these five tastes exist and are distinct from one another. Taste buds are able to tell different tastes apart when they interact with different molecules or ions. Sweetness, savoriness, and bitter tastes are triggered by the binding of molecules to G protein-coupled receptors on the cell membranes of taste buds. Saltiness and sourness are perceived when alkali metals or hydrogen ions meet taste buds, respectively. The basic tastes contribute only partially to the sensation and flavor of food in the mouth—other factors include smell, detected by the olfactory epithelium of the nose; texture, detected through a variety of mechanoreceptors, muscle nerves, etc.; temperature, det
https://en.wikipedia.org/wiki/Knuth%27s%20up-arrow%20notation
In mathematics, Knuth's up-arrow notation is a method of notation for very large integers, introduced by Donald Knuth in 1976. In his 1947 paper, R. L. Goodstein introduced the specific sequence of operations that are now called hyperoperations. Goodstein also suggested the Greek names tetration, pentation, etc., for the extended operations beyond exponentiation. The sequence starts with a unary operation (the successor function with n = 0), and continues with the binary operations of addition (n = 1), multiplication (n = 2), exponentiation (n = 3), tetration (n = 4), pentation (n = 5), etc. Various notations have been used to represent hyperoperations. One such notation is H_n(a, b). Knuth's up-arrow notation ↑ is another. For example: the single arrow ↑ represents exponentiation (iterated multiplication) the double arrow ↑↑ represents tetration (iterated exponentiation) the triple arrow ↑↑↑ represents pentation (iterated tetration) The general definition of the up-arrow notation is as follows (for n ≥ 1 and b ≥ 0): a ↑^n b = a^b if n = 1; 1 if b = 0; and a ↑^(n−1) (a ↑^n (b − 1)) otherwise. Here, ↑^n stands for n arrows, so for example 2 ↑↑↑↑ 3 = 2 ↑^4 3. The square bracket notation a[n]b is another notation for hyperoperations. Introduction The hyperoperations naturally extend the arithmetical operations of addition and multiplication as follows. Addition by a natural number is defined as iterated incrementation: a + b = a + 1 + 1 + ⋯ + 1 (with b copies of 1). Multiplication by a natural number is defined as iterated addition: a × b = a + a + ⋯ + a (with b copies of a). For example, 4 × 3 = 4 + 4 + 4 = 12. Exponentiation for a natural power is defined as iterated multiplication, which Knuth denoted by a single up-arrow: a ↑ b = a^b = a × a × ⋯ × a (with b copies of a). For example, 4 ↑ 3 = 4^3 = 4 × 4 × 4 = 64. Tetration is defined as iterated exponentiation, which Knuth denoted by a “double arrow”: a ↑↑ b = a ↑ (a ↑ (⋯ ↑ a)) (with b copies of a). For example, 4 ↑↑ 3 = 4 ↑ (4 ↑ 4) = 4^(4^4) = 4^256. Expressions are evaluated from right to left, as the operators are defined to be right-associative. According to this definition, 3 ↑↑ 2 = 3^3 = 27, 3 ↑↑ 3 = 3^(3^3) = 3^27 = 7,625,597,484,987, etc. This already leads to some fairly large numbers, but the hyperoperator sequence does not stop here. Pentation, defined as iterated tetration, is represented by the “triple arrow”: a ↑↑↑ b. Hexation, defined as iterated pentation, is
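The recursive definition translates directly into a short program; the following hedged Python sketch only terminates for very small arguments.

```python
# Sketch: evaluate a ↑^n b by the recursive definition above (n >= 1, b >= 0).
def up_arrow(a: int, n: int, b: int) -> int:
    if n == 1:
        return a ** b                        # single arrow is exponentiation
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 3^3 = 27
print(up_arrow(3, 2, 2))   # 3↑↑2 = 3^3 = 27
print(up_arrow(2, 2, 4))   # 2↑↑4 = 2^2^2^2 = 65536
```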
https://en.wikipedia.org/wiki/List%20of%20finite-dimensional%20Nichols%20algebras
In mathematics, a Nichols algebra is a Hopf algebra in a braided category assigned to an object V in this category (e.g. a braided vector space). The Nichols algebra is a quotient of the tensor algebra of V enjoying a certain universal property and is typically infinite-dimensional. Nichols algebras appear naturally in any pointed Hopf algebra and enabled their classification in important cases. The most well known examples for Nichols algebras are the Borel parts of the infinite-dimensional quantum groups when q is no root of unity, and the first examples of finite-dimensional Nichols algebras are the Borel parts of the Frobenius–Lusztig kernel (small quantum group) when q is a root of unity. The following article lists all known finite-dimensional Nichols algebras where is a Yetter–Drinfel'd module over a finite group , where the group is generated by the support of . For more details on Nichols algebras see Nichols algebra. There are two major cases: abelian, which implies is diagonally braided . nonabelian. The rank is the number of irreducible summands in the semisimple Yetter–Drinfel'd module . The irreducible summands are each associated to a conjugacy class and an irreducible representation of the centralizer . To any Nichols algebra there is by attached a generalized root system and a Weyl groupoid. These are classified in. In particular several Dynkin diagrams (for inequivalent types of Weyl chambers). Each Dynkin diagram has one vertex per irreducible and edges depending on their braided commutators in the Nichols algebra. The Hilbert series of the graded algebra is given. An observation is that it factorizes in each case into polynomials . We only give the Hilbert series and dimension of the Nichols algebra in characteristic . Note that a Nichols algebra only depends on the braided vector space and can therefore be realized over many different groups. Sometimes there are two or three Nichols algebras with different and non
https://en.wikipedia.org/wiki/Steinhaus%20longimeter
The Steinhaus longimeter, patented by the professor Hugo Steinhaus, is an instrument used to measure the lengths of curves on maps. Description It is a transparent sheet of three grids, turned against each other by 30 degrees, each consisting of parallel lines spaced at equal distances 3.82 mm. The measurement is done by counting crossings of the curve with grid lines. The number of crossings is the approximate length of the curve in millimetres. The design of the Steinhaus longimeter can be seen as an application of the Crofton formula, according to which the length of a curve equals the expected number of times it is crossed by a random line. See also Opisometer, a mechanical device for measuring curve length by rolling a small wheel along the curve Dot planimeter, a similar transparency-based device for estimating area, based on Pick's theorem
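A hedged sketch of the principle (via the Cauchy–Crofton relation) rather than of the exact instrument: crossings of a digitized curve with several families of parallel lines are counted and converted to a length estimate. The grid spacing, the number of line families and the test curve are my own choices, and the real longimeter's conversion constant depends on how its three grids are built.

```python
# Sketch: estimate curve length from grid-crossing counts.
# For k families of parallel lines with spacing d at angles pi/k apart,
# length ~ (total crossings) * pi * d / (2 * k).
import numpy as np

def crossing_count(points, d, angle):
    """Crossings of the polyline with the lines {x cos(a) + y sin(a) = m*d}."""
    proj = points @ np.array([np.cos(angle), np.sin(angle)])
    cells = np.floor(proj / d)
    return int(np.abs(np.diff(cells)).sum())

def estimate_length(points, d=1.0, k=6):
    total = sum(crossing_count(points, d, i * np.pi / k) for i in range(k))
    return total * np.pi * d / (2 * k)

# Test on a circle of radius 10 (true length 2*pi*10 ~ 62.8)
t = np.linspace(0, 2 * np.pi, 2000)
circle = np.column_stack([10 * np.cos(t), 10 * np.sin(t)])
print(round(estimate_length(circle, d=1.0, k=6), 1))   # close to 62.8, within a few percent
```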
https://en.wikipedia.org/wiki/List%20of%20shapes%20with%20known%20packing%20constant
The packing constant of a geometric body is the largest average density achieved by packing arrangements of congruent copies of the body. For most bodies the value of the packing constant is unknown. The following is a list of bodies in Euclidean spaces whose packing constant is known. Fejes Tóth proved that in the plane, a point symmetric body has a packing constant that is equal to its translative packing constant and its lattice packing constant. Therefore, any such body for which the lattice packing constant was previously known, such as any ellipse, consequently has a known packing constant. In addition to these bodies, the packing constants of hyperspheres in 8 and 24 dimensions are almost exactly known.
https://en.wikipedia.org/wiki/Reverse%20engineering
Reverse engineering (also known as backwards engineering or back engineering) is a process or method through which one attempts to understand through deductive reasoning how a previously made device, process, system, or piece of software accomplishes a task with very little (if any) insight into exactly how it does so. Depending on the system under consideration and the technologies employed, the knowledge gained during reverse engineering can help with repurposing obsolete objects, doing security analysis, or learning how something works. Although the process is specific to the object on which it is being performed, all reverse engineering processes consist of three basic steps: information extraction, modeling, and review. Information extraction is the practice of gathering all relevant information for performing the operation. Modeling is the practice of combining the gathered information into an abstract model, which can be used as a guide for designing the new object or system. Review is the testing of the model to ensure the validity of the chosen abstract. Reverse engineering is applicable in the fields of computer engineering, mechanical engineering, design, electronic engineering, software engineering, chemical engineering, and systems biology. Overview There are many reasons for performing reverse engineering in various fields. Reverse engineering has its origins in the analysis of hardware for commercial or military advantage. However, the reverse engineering process may not always be concerned with creating a copy or changing the artifact in some way. It may be used as part of an analysis to deduce design features from products with little or no additional knowledge about the procedures involved in their original production. In some cases, the goal of the reverse engineering process can simply be a redocumentation of legacy systems. Even when the reverse-engineered product is that of a competitor, the goal may not be to copy it but to perform competit
https://en.wikipedia.org/wiki/Lab-on-a-chip
A lab-on-a-chip (LOC) is a device that integrates one or several laboratory functions on a single integrated circuit (commonly called a "chip") of only millimeters to a few square centimeters to achieve automation and high-throughput screening. LOCs can handle extremely small fluid volumes down to less than pico-liters. Lab-on-a-chip devices are a subset of microelectromechanical systems (MEMS) devices and sometimes called "micro total analysis systems" (µTAS). LOCs may use microfluidics, the physics, manipulation and study of minute amounts of fluids. However, strictly regarded "lab-on-a-chip" indicates generally the scaling of single or multiple lab processes down to chip-format, whereas "µTAS" is dedicated to the integration of the total sequence of lab processes to perform chemical analysis. History After the invention of microtechnology (~1954) for realizing integrated semiconductor structures for microelectronic chips, these lithography-based technologies were soon applied in pressure sensor manufacturing (1966) as well. Due to further development of these usually CMOS-compatibility limited processes, a tool box became available to create micrometre or sub-micrometre sized mechanical structures in silicon wafers as well: the microelectromechanical systems (MEMS) era had started. Next to pressure sensors, airbag sensors and other mechanically movable structures, fluid handling devices were developed. Examples are: channels (capillary connections), mixers, valves, pumps and dosing devices. The first LOC analysis system was a gas chromatograph, developed in 1979 by S.C. Terry at Stanford University. However, only at the end of the 1980s and beginning of the 1990s did the LOC research start to seriously grow as a few research groups in Europe developed micropumps, flowsensors and the concepts for integrated fluid treatments for analysis systems. These µTAS concepts demonstrated that integration of pre-treatment steps, usually done at lab-scale, could extend t
https://en.wikipedia.org/wiki/Anthropology%20of%20food
Anthropology of food is a sub-discipline of anthropology that connects an ethnographic and historical perspective with contemporary social issues in food production and consumption systems. Although early anthropological accounts often dealt with cooking and eating as part of ritual or daily life, food was rarely regarded as the central point of academic focus. This changed in the later half of the 20th century, when foundational work by Mary Douglas, Marvin Harris, Arjun Appadurai, Jack Goody, and Sidney Mintz cemented the study of food as a key insight into modern social life. Mintz is known as the "Father of food anthropology" for his 1985 work Sweetness and Power, which linked British demand for sugar with the creation of empire and exploitative industrial labor conditions. Research has traced the material and symbolic importance of food, as well as how they intersect. Examples of ongoing themes are food as a form of differentiation, commensality, and food's role in industrialization and globalizing labor and commodity chains. Several related and interdisciplinary academic programs exist in the US and UK (listed under Food studies institutions). "Anthropology of food" is also the name of a scientific journal dedicated to a social analysis of food practices and representations. Created in 1999 (first issue published in 2001), it is multilingual (English, French, Spanish, Portuguese). It is OpenAccess, and accessible through the portal OpenEdition Journals. It complies with academic standards for scientific journals (double-blind peer-review). It publishes a majority of papers in social anthropology, but is also open to contributions from historians, geographers, philosophers, economists. The first issues published include: 16 | 2022 Feeding genders 15 | 2021 Aesthetics, gestures and tastes in South and East Asia: crossed approaches on culinary arts 14 | 2019 Gastro-politics: Culture, Identity and Culinary Politics in Peru 13 | 2018 Tourism and Gastronomy
https://en.wikipedia.org/wiki/Nominal%20level
Nominal level is the operating level at which an electronic signal processing device is designed to operate. The electronic circuits that make up such equipment are limited in the maximum signal they can handle and the low-level internally generated electronic noise they add to the signal. The difference between the internal noise and the maximum level is the device's dynamic range. The nominal level is the level that these devices were designed to operate at, for best dynamic range and adequate headroom. When a signal is chained with improper gain staging through many devices, clipping may occur or the system may operate with reduced dynamic range. In audio, a related measurement, signal-to-noise ratio, is usually defined as the difference between the nominal level and the noise floor, leaving the headroom as the difference between nominal and maximum output. The measured level is a time average, meaning that the peaks of audio signals regularly exceed the measured average level. The headroom measurement defines how far the peak levels can stray from the nominal measured level before clipping. The difference between the peaks and the average for a given signal is the crest factor. Standards VU meters are designed to represent the perceived loudness of a passage of music, or other audio content, measuring in volume units. Devices are designed so that the best signal quality is obtained when the meter rarely goes above nominal. The markings are often in dB instead of "VU", and the reference level should be defined in the device's manual. In most professional recording and sound reinforcement equipment, the nominal level is +4 dBu. In semi-professional and domestic equipment, the nominal level is usually −10 dBV. This difference is due to the cost required to create larger power supplies and output higher levels. In broadcasting equipment, this is termed the Maximum Permitted Level, which is defined by European Broadcasting Union stand
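A small, hedged sketch of the quantities discussed above — RMS (average) level, peak level, crest factor and headroom — for a digital signal referenced to full scale; the −18 dBFS "nominal" figure is an illustrative assumption, not a standard quoted in the text.

```python
# Sketch: level, crest factor and headroom of a digital signal, with full
# scale (clipping) taken as 1.0.
import numpy as np

def level_report(x, nominal_dbfs=-18.0):
    x = np.asarray(x, dtype=float)
    rms_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)))
    peak_db = 20 * np.log10(np.max(np.abs(x)))
    return {
        "rms_dBFS": rms_db,
        "peak_dBFS": peak_db,
        "crest_factor_dB": peak_db - rms_db,     # how far peaks exceed the average
        "headroom_dB": 0.0 - peak_db,            # distance from peaks to clipping
        "margin_to_nominal_dB": rms_db - nominal_dbfs,
    }

t = np.linspace(0, 1, 48000, endpoint=False)
tone = 0.125 * np.sin(2 * np.pi * 1000 * t)      # sine with a -18 dBFS peak level
print(level_report(tone))
```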
https://en.wikipedia.org/wiki/Magnesium%20in%20biology
Magnesium is an essential element in biological systems. Magnesium occurs typically as the Mg2+ ion. It is an essential mineral nutrient (i.e., element) for life and is present in every cell type in every organism. For example, adenosine triphosphate (ATP), the main source of energy in cells, must bind to a magnesium ion in order to be biologically active. What is called ATP is often actually Mg-ATP. As such, magnesium plays a role in the stability of all polyphosphate compounds in the cells, including those associated with the synthesis of DNA and RNA. Over 300 enzymes require the presence of magnesium ions for their catalytic action, including all enzymes utilizing or synthesizing ATP, or those that use other nucleotides to synthesize DNA and RNA. In plants, magnesium is necessary for synthesis of chlorophyll and photosynthesis. Function A balance of magnesium is vital to the well-being of all organisms. Magnesium is a relatively abundant ion in Earth's crust and mantle and is highly bioavailable in the hydrosphere. This availability, in combination with a useful and very unusual chemistry, may have led to its utilization in evolution as an ion for signaling, enzyme activation, and catalysis. However, the unusual nature of ionic magnesium has also led to a major challenge in the use of the ion in biological systems. Biological membranes are impermeable to magnesium (and other ions), so transport proteins must facilitate the flow of magnesium, both into and out of cells and intracellular compartments. Human health Inadequate magnesium intake frequently causes muscle spasms, and has been associated with cardiovascular disease, diabetes, high blood pressure, anxiety disorders, migraines, osteoporosis, and cerebral infarction. Acute deficiency (see hypomagnesemia) is rare, and is more common as a drug side-effect (such as chronic alcohol or diuretic use) than from low food intake per se, but it can occur in people fed intravenously for extended periods of time.
https://en.wikipedia.org/wiki/Tsunami%20UDP%20Protocol
The Tsunami UDP Protocol is a UDP-based protocol that was developed for high-speed file transfer over network paths that have a high bandwidth-delay product. Such protocols are needed because standard TCP does not perform well over paths with high bandwidth-delay products. Tsunami was developed at the Advanced Network Management Laboratory of Indiana University. Tsunami effects a file transfer by chunking the file into numbered blocks of 32 kilobytes. Communication between the client and server applications flows over a low-bandwidth TCP connection, and the bulk data is transferred over UDP.
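A hedged sketch of the chunking step only; the real protocol's block header format, retransmission requests and TCP control channel are not modeled here.

```python
# Sketch: split a file into numbered 32-kilobyte blocks of the kind Tsunami
# sends over UDP.
BLOCK_SIZE = 32 * 1024  # 32 kilobytes

def numbered_blocks(path):
    """Yield (block_number, bytes) pairs for the file at `path`."""
    with open(path, "rb") as f:
        number = 0
        while True:
            data = f.read(BLOCK_SIZE)
            if not data:
                break
            yield number, data
            number += 1

# Hypothetical usage (file name, socket and peer address are assumptions):
# for n, block in numbered_blocks("payload.bin"):
#     udp_socket.sendto(n.to_bytes(4, "big") + block, (host, port))
```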
https://en.wikipedia.org/wiki/Recurrence%20plot
In descriptive statistics and chaos theory, a recurrence plot (RP) is a plot showing, for each moment i in time, the times j at which the state of a dynamical system returns to the previous state at i, i.e., when the phase space trajectory visits roughly the same area in the phase space as at time i. In other words, it is a plot of the binary matrix R(i, j) (equal to 1 when the states at times i and j are close, and 0 otherwise), showing i on a horizontal axis and j on a vertical axis, where x(i) is the state of the system (or its phase space trajectory). Background Natural processes can have a distinct recurrent behaviour, e.g. periodicities (as seasonal or Milankovich cycles), but also irregular cyclicities (as El Niño Southern Oscillation, heart beat intervals). Moreover, the recurrence of states, in the meaning that states are again arbitrarily close after some time of divergence, is a fundamental property of deterministic dynamical systems and is typical for nonlinear or chaotic systems (cf. Poincaré recurrence theorem). The recurrence of states in nature has been known for a long time and has also been discussed in early work (e.g. Henri Poincaré 1890). Detailed description One way to visualize the recurring nature of states by their trajectory through a phase space is the recurrence plot, introduced by Eckmann et al. (1987). Often, the phase space does not have a low enough dimension (two or three) to be pictured, since higher-dimensional phase spaces can only be visualized by projection into the two or three-dimensional sub-spaces. However, making a recurrence plot enables us to investigate certain aspects of the m-dimensional phase space trajectory through a two-dimensional representation. At a recurrence the trajectory returns to a location in phase space it has visited before, up to a small error (i.e., the system returns to a state that it has had before). The recurrence plot represents the collection of pairs of times of such recurrences, i.e., the set of pairs (i, j) with x(i) ≈ x(j), with i and j discrete points of time and x(i) the state of the system at time i (location of the trajectory
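A minimal, hedged sketch of how such a plot is computed for a scalar time series: mark R[i, j] = 1 whenever the states at times i and j are closer than a threshold. For a real system the states would normally be delay-embedded vectors rather than plain samples.

```python
# Sketch: recurrence matrix of a scalar time series with threshold eps.
import numpy as np

def recurrence_matrix(x, eps):
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances |x_i - x_j|
    return (dist <= eps).astype(int)         # 1 where the trajectory recurs

t = np.linspace(0, 4 * np.pi, 200)
x = np.sin(t)                                # periodic test signal
R = recurrence_matrix(x, eps=0.1)
print(R.shape, R.sum())                      # diagonal lines in R encode the period
```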
https://en.wikipedia.org/wiki/Slashdot%20effect
The Slashdot effect, also known as slashdotting, occurs when a popular website links to a smaller website, causing a massive increase in traffic. This overloads the smaller site, causing it to slow down or even temporarily become unavailable. Typically, less robust sites are unable to cope with the huge increase in traffic and become unavailable – common causes are lack of sufficient data bandwidth, servers that fail to cope with the high number of requests, and traffic quotas. Sites that are maintained on shared hosting services often fail when confronted with the Slashdot effect. This has the same effect as a denial-of-service attack, albeit accidentally. The name stems from the huge influx of web traffic which would result from the technology news site Slashdot linking to websites. The term flash crowd is a more generic term. The original circumstances have changed, as flash crowds from Slashdot were reported in 2005 to be diminishing due to competition from similar sites, and the general adoption of elastically scalable cloud hosting platforms. Terminology The term "Slashdot effect" refers to the phenomenon of a website becoming virtually unreachable because too many people are hitting it after the site was mentioned in an interesting article on the popular Slashdot news service. It was later extended to describe any similar effect from being listed on a popular site, similar to the more generic term, flash crowd, which is a more appropriate term. The term "flash crowd" was coined in 1973 by Larry Niven in his science fiction short story, Flash Crowd. It predicted that a consequence of inexpensive teleportation would be huge crowds materializing almost instantly at the sites of interesting news stories. Twenty years later, the term became commonly used on the Internet to describe exponential spikes in website or server usage when it passes a certain threshold of popular interest. This effect was anticipated years earlier in 1956 in Alfred Bester's novel The
https://en.wikipedia.org/wiki/List%20of%20computer%20algebra%20systems
The following tables provide a comparison of computer algebra systems (CAS). A CAS is a package comprising a set of algorithms for performing symbolic manipulations on algebraic objects, a language to implement them, and an environment in which to use the language. A CAS may include a user interface and graphics capability; and to be effective may require a large library of algorithms, efficient data structures and a fast kernel. General These computer algebra systems are sometimes combined with "front end" programs that provide a better user interface, such as the general-purpose GNU TeXmacs. Functionality Below is a summary of significantly developed symbolic functionality in each of the systems. Those which do not "edit equations" may have a GUI, plotting, ASCII graphic formulae and math font printing. The ability to generate plaintext files is also a sought-after feature because it allows a work to be understood by people who do not have a computer algebra system installed. Operating system support The software listed can run under the respective operating systems natively without emulation. Some systems must be compiled first using an appropriate compiler for the source language and target platform. For some platforms, only older releases of the software may be available. Graphing calculators Some graphing calculators have CAS features. See also :Category:Computer algebra systems Comparison of numerical-analysis software Comparison of statistical packages List of information graphics software List of numerical-analysis software List of numerical libraries List of statistical software Mathematical software Web-based simulation
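As a hedged illustration of the kind of symbolic functionality such comparisons cover, here is a short example using the open-source SymPy library (one CAS among many that could have been chosen).

```python
# Sketch: simplification, symbolic integration and equation solving in a CAS.
import sympy as sp

x = sp.symbols("x")
expr = (x**2 - 1) / (x - 1)

print(sp.simplify(expr))               # x + 1
print(sp.integrate(sp.sin(x) * x, x))  # -x*cos(x) + sin(x)
print(sp.solve(x**2 - 2, x))           # [-sqrt(2), sqrt(2)]
```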
https://en.wikipedia.org/wiki/Novel%20food
A novel food is a type of food that does not have a significant history of consumption or is produced by a method that has not previously been used for food. Designer food Designer food is a type of novel food that has not existed on any regional or global consumer market before. Instead it has been "designed" using biotechnological / bioengineering methods (e.g. genetically modified food) or "enhanced" using engineered additives. Examples like designer egg, designer milk, designer grains, probiotics, and enrichment with micro- and macronutrients and designer proteins have been cited. The enhancement process is called food fortification or nutrification. Designer novel food often comes with sometimes unproven health claims ("superfoods"). Designer food is distinguished from food design, the aesthetic arrangement of food items for marketing purposes. European Union Novel foods or novel food ingredients have no history of "significant" consumption in the European Union prior to 15 May 1997. Any food or food ingredient that falls within this definition must be authorised according to the Novel Food legislation, Regulation (EC) No 258/97 of the European Parliament and of the Council. Applicants can consult the guidance document compiled by the European Commission, which highlights the scientific information and the safety assessment report required in each case. The Novel Food regulation stipulates that foods and food ingredients falling within the scope of this regulation must not: present a danger for the consumer; mislead the consumer; or differ from foods or food ingredients which they are intended to replace to such an extent that their normal consumption would be nutritionally disadvantageous for the consumer. There are two possible routes for authorization under the Novel Food legislation: a full application and a simplified application. The simplified application route is only applicable where the EU member national competent authority, e.g. Food Standard
https://en.wikipedia.org/wiki/Planimeter
A planimeter, also known as a platometer, is a measuring instrument used to determine the area of an arbitrary two-dimensional shape. Construction There are several kinds of planimeters, but all operate in a similar way. The precise way in which they are constructed varies, with the main types of mechanical planimeter being polar, linear, and Prytz or "hatchet" planimeters. The Swiss mathematician Jakob Amsler-Laffon built the first modern planimeter in 1854, the concept having been pioneered by Johann Martin Hermann in 1814. Many developments followed Amsler's famous planimeter, including electronic versions. The Amsler (polar) type consists of a two-bar linkage. At the end of one link is a pointer, used to trace around the boundary of the shape to be measured. The other end of the linkage pivots freely on a weight that keeps it from moving. Near the junction of the two links is a measuring wheel of calibrated diameter, with a scale to show fine rotation, and worm gearing for an auxiliary turns counter scale. As the area outline is traced, this wheel rolls on the surface of the drawing. The operator sets the wheel, turns the counter to zero, and then traces the pointer around the perimeter of the shape. When the tracing is complete, the scales at the measuring wheel show the shape's area. When the planimeter's measuring wheel moves perpendicular to its axis, it rolls, and this movement is recorded. When the measuring wheel moves parallel to its axis, the wheel skids without rolling, so this movement is ignored. That means the planimeter measures the distance that its measuring wheel travels, projected perpendicularly to the measuring wheel's axis of rotation. The area of the shape is proportional to the number of turns through which the measuring wheel rotates. The polar planimeter is restricted by design to measuring areas within limits determined by its size and geometry. However, the linear type has no restriction in one dimension, because it can roll. Its
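A hedged sketch of the mathematics behind the instrument rather than of the instrument itself: the planimeter effectively evaluates the Green's-theorem line integral A = ½∮(x dy − y dx) around the traced boundary, which for a sampled outline reduces to the shoelace formula.

```python
# Sketch: area enclosed by a traced (sampled) boundary via the shoelace formula.
import numpy as np

def traced_area(xs, ys):
    """Signed area enclosed by the closed polygon (xs[i], ys[i])."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    return 0.5 * np.sum(xs * np.roll(ys, -1) - ys * np.roll(xs, -1))

# Tracing a 3 x 2 rectangle counter-clockwise gives area 6
print(traced_area([0, 3, 3, 0], [0, 0, 2, 2]))  # 6.0
```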
https://en.wikipedia.org/wiki/Diebold%2010xx
The Diebold 10xx (or Modular Delivery System, MDS) series is a third and fourth generation family of automated teller machines manufactured by Diebold. History Introduced in 1985 as a successor to the TABS 9000 series, the 10xx family of ATMs was re-styled to the "i Series" variant in 1991, the "ix Series" variant in 1994, and finally replaced by the Diebold Opteva series of ATMs in 2003. The 10xx series of ATMs were also marketed under the InterBold brand; a joint venture between IBM and Diebold. IBM machines were marketed under the IBM 478x series. Not all of the 10xx series of ATMs were offered by IBM. Diebold stopped producing the 1000-series ATM's around 2008. Listing of 10xx Series Models Members of the 10xx Series included: MDS Series - Used a De La Rue cash dispensing mechanism 1060 - Mono-function, indoor counter-top unit with single cash cartridge cash dispenser 1062 - Multi-function, indoor lobby unit 1072 - Multi-function, exterior "through-the-wall" unit i Series - Used an ExpressBus Multi Media Dispenser (MMD) cash dispensing mechanism 1060i - Mono-function, indoor counter-top unit with single cash cartridge cash dispenser 1061i - Mono-function, indoor counter-top unit with single cash cartridge cash dispenser 1062i - Multi-function, indoor lobby unit 1064i - Mono-function, indoor cash dispenser 1070i - Multi-function, exterior "through-the-wall" unit with a longer "top-hat throat" 1072i - Multi-function, exterior "through-the-wall" unit 1073i - Multi-function, exterior "through-the-wall" unit, modified for use while sitting in a car 1074i - Multi-function, exterior unit, designed as a stand-alone unit for use in a drive-up lane. ix Series - Used an ExpressBus Multi Media Dispenser (MMD) cash dispensing mechanism 1062ix - Multi-function, indoor lobby unit 1063ix - Mono-function, indoor cash dispenser with a smaller screen than the 1064ix 1064ix - Mono-function, indoor cash dispenser 1070ix - Multi-function, exterior "through-the-wall" unit 1071
https://en.wikipedia.org/wiki/Responsiveness
Responsiveness as a concept of computer science refers to the specific ability of a system or functional unit to complete assigned tasks within a given time. For example, it would refer to the ability of an artificial intelligence system to understand and carry out its tasks in a timely fashion. In the Reactive principle, responsiveness is one of the fundamental criteria along with resilience, elasticity and being message-driven. It is also one of the criteria under the principle of robustness (a usability principle); the other three are observability, recoverability, and task conformance. Vs performance Software which lacks decent process management can have poor responsiveness even on a fast machine. On the other hand, even slow hardware can run responsive software. It is much more important that a system actually spend the available resources in the best way possible. For instance, it makes sense to let the mouse driver run at a very high priority to provide fluid mouse interactions. For long-term operations, such as copying, downloading or transforming big files, the most important factor is to provide good user feedback rather than raw performance, since such an operation can quite well run in the background, using only spare processor time. Delays Long delays can be a major cause of user frustration, or can lead the user to believe the system is not functioning, or that a command or input gesture has been ignored. Responsiveness is therefore considered an essential usability issue for human-computer interaction (HCI). The rationale behind the responsiveness principle is that the system should deliver results of an operation to users in a timely and organized manner. The frustration threshold can be quite different, depending on the situation and on whether the user interface depends on local or remote systems to show a visible response. There are at least three user tolerance thresholds: 0.1 seconds – under 0.1 seconds the response is perceived as instantaneous
https://en.wikipedia.org/wiki/Intersex%20%28biology%29
Intersex is a general term for an organism that has sex characteristics that are between male and female. It typically applies to a minority of members of gonochoric animal species such as mammals (as opposed to hermaphroditic species in which the majority of members can have both male and female sex characteristics). Such organisms are usually sterile. Intersexuality can occur due to both genetic and environmental factors and has been reported in mammals, fishes, nematodes, and crustaceans. Mammals Intersex can also occur in non-human mammals such as pigs, with it being estimated that 0.1% to 1.4% of pigs are intersex. In Vanuatu, Narave pigs are sacred intersex pigs that are found on Malo Island. An analysis of Navare pig mitochondrial DNA by Lum et al. (2006) found that they are descended from Southeast Asian pigs. At least six different mole species have an intersex adaption where by the female mole has an ovotestis, "a hybrid organ made up of both ovarian and testicular tissue. This effectively makes them intersex, giving them an extra dose of testosterone to make them just as muscular and aggressive as male moles". The ovarian part of the ovotestis is reproductively functional. Intersexuality in humans is relatively rare. Depending on the definition, the prevalence of intersex among humans have been reported to range from 0.018% to up to 1.7% of humans. Nematodes Intersex is known to occur in all main groups of nematodes. Most of them are functionally female. Male intersexes with female characteristics have been reported but are less common. Fishes Gonadal intersex also occurs in fishes, where the individual has both ovarian and testicular tissue. Although it is a rare anomaly among gonochoric fishes, it is a transitional state in fishes that are protandric or protogynous. Intersexuality has been reported in 23 fish families. Crustaceans The oldest evidence for intersexuality in crustaceans comes from fossils dating back 70 million years ago. Inte
https://en.wikipedia.org/wiki/European%20Union%20food%20quality%20scandal
The European Union food quality scandal is a controversy claiming that certain food brands and items targeted at the markets of Central and Eastern European Union countries are of lower quality than their exact equivalents produced for Western European Union markets. European Commission President Jean-Claude Juncker acknowledged the issue in his State of the Union address, pledging funding to help national food authorities test the inferior products and start to tackle the food inequality. In April 2018, EU Justice and Consumers Commissioner Věra Jourová stated that "We will step up the fight against dual food quality. We have amended the Unfair Commercial Practice Directive to make it black and white that dual food quality is forbidden."
https://en.wikipedia.org/wiki/Energy%20%28signal%20processing%29
In signal processing, the energy of a continuous-time signal x(t) is defined as the area under the squared magnitude of the considered signal, i.e., mathematically E_s = ∫ |x(t)|² dt (integrated over all time). The unit of E_s will be (unit of the signal)² · second. And the energy of a discrete-time signal x(n) is defined mathematically as E_s = Σ |x(n)|² (summed over all n). Relationship to energy in physics Energy in this context is not, strictly speaking, the same as the conventional notion of energy in physics and the other sciences. The two concepts are, however, closely related, and it is possible to convert from one to the other: E = E_s / Z = (1/Z) ∫ |x(t)|² dt, where Z represents the magnitude, in appropriate units of measure, of the load driven by the signal. For example, if x(t) represents the potential (in volts) of an electrical signal propagating across a transmission line, then Z would represent the characteristic impedance (in ohms) of the transmission line. The units of measure for the signal energy E_s would appear as volt²·seconds, which is not dimensionally correct for energy in the sense of the physical sciences. After dividing by Z, however, the dimensions of E would become volt²·seconds per ohm, which is equivalent to joules, the SI unit for energy as defined in the physical sciences. Spectral energy density Similarly, the spectral energy density of signal x(t) is E_s(f) = |X(f)|², where X(f) is the Fourier transform of x(t). For example, if x(t) represents the magnitude of the electric field component (in volts per meter) of an optical signal propagating through free space, then the dimensions of X(f) would become volt·seconds per meter and E_s(f) would represent the signal's spectral energy density (in volt²·second² per meter²) as a function of frequency f (in hertz). Again, these units of measure are not dimensionally correct in the true sense of energy density as defined in physics. Dividing by Z₀, the characteristic impedance of free space (in ohms), the dimensions become joule·seconds per meter² or, equivalently, joules per meter² per hertz, which is dimensionally correct in SI
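As a concrete illustration of the definitions above, here is a minimal, hypothetical Python/NumPy sketch (not part of the article): it approximates the continuous-time energy of a sampled signal, converts it to physical energy for an assumed load impedance Z, and estimates the spectral energy density via a scaled FFT. The sampling rate, test signal, and value of Z are made-up example choices.

```python
# Illustrative sketch: signal energy and spectral energy density with NumPy.
import numpy as np

fs = 1000.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)   # one second of samples
x = np.sin(2 * np.pi * 50 * t)    # example signal: 50 Hz sine, amplitude 1 V

# Discrete approximation of E_s = integral |x(t)|^2 dt  (units: V^2 * s)
E_s = np.sum(np.abs(x) ** 2) / fs

# Conversion to physical energy for a load of Z ohms: E = E_s / Z (joules)
Z = 50.0
E = E_s / Z

# Spectral energy density |X(f)|^2, using a scaled FFT as the Fourier transform
X = np.fft.rfft(x) / fs
esd = np.abs(X) ** 2
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

print(f"signal energy E_s = {E_s:.4f} V^2*s, physical energy E = {E * 1e3:.3f} mJ")
print("frequency of peak spectral energy density:", freqs[np.argmax(esd)], "Hz")
```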
https://en.wikipedia.org/wiki/Secure%20transmission
In computer science, secure transmission refers to the transfer of data such as confidential or proprietary information over a secure channel. Many secure transmission methods require a type of encryption. The most common form of email encryption relies on PKI (public key infrastructure); in order to open an encrypted file, an exchange of keys is performed. Many infrastructures such as banks rely on secure transmission protocols to prevent a catastrophic breach of security. Secure transmissions are put in place to prevent attacks such as ARP spoofing and general data loss. Software and hardware implementations which attempt to detect and prevent the unauthorized transmission of information from an organization's computer systems to outsiders may be referred to as Information Leak Detection and Prevention (ILDP), Information Leak Prevention (ILP), Content Monitoring and Filtering (CMF) or Extrusion Prevention systems and are used in connection with other methods to ensure secure transmission of data. Secure transmission over wireless infrastructure WEP is a deprecated algorithm to secure IEEE 802.11 wireless networks. Wireless networks broadcast messages using radio, so they are more susceptible to eavesdropping than wired networks. When introduced in 1999, WEP was intended to provide confidentiality comparable to that of a traditional wired network. A later system, called Wi-Fi Protected Access (WPA), has since been developed to provide stronger security. Web-based secure transmission Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications on the Internet for such things as web browsing, e-mail, Internet faxing, instant messaging and other data transfers. There are slight differences between SSL and TLS, but they are substantially the same.
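To make the TLS discussion concrete, here is a small sketch using Python's standard ssl module to open a TLS-protected connection. It is an illustration only, not taken from the article; the host name "example.com" is a placeholder and the request sent is a bare-bones HTTP GET.

```python
# Minimal sketch: wrapping a TCP socket with TLS using Python's ssl module.
import socket
import ssl

hostname = "example.com"                 # hypothetical server name
context = ssl.create_default_context()   # loads the system's trusted CA certificates

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls_sock:
        # The TLS handshake has completed; data sent from here on is encrypted.
        print("Negotiated protocol:", tls_sock.version())  # e.g. 'TLSv1.3'
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(1024).decode(errors="replace"))
```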
https://en.wikipedia.org/wiki/Stretchable%20electronics
Stretchable electronics, also known as elastic electronics or elastic circuits, is a group of technologies for building electronic circuits by depositing or embedding electronic devices and circuits onto stretchable substrates such as silicones or polyurethanes, to make a completed circuit that can experience large strains without failure. In the simplest case, stretchable electronics can be made by using the same components used for rigid printed circuit boards, with the rigid substrate cut (typically in a serpentine pattern) to enable in-plane stretchability. However, many researchers have also sought intrinsically stretchable conductors, such as liquid metals. One of the major challenges in this domain is designing the substrate and the interconnections to be stretchable, rather than flexible (see Flexible electronics) or rigid (Printed Circuit Boards). Typically, polymers are chosen as substrates or material to embed. When bending the substrate, the outermost radius of the bend will stretch (see Strain in an Euler–Bernoulli beam), subjecting the interconnects to high mechanical strain. Stretchable electronics often attempts biomimicry of human skin and flesh, in being stretchable, whilst retaining full functionality. The design space for products is opened up with stretchable electronics, including sensitive electronic skin for robotic devices and in vivo implantable sponge-like electronics. Stretchable skin electronics Mechanical properties of skin Skin is composed of collagen, keratin, and elastin fibers, which provide robust mechanical strength, low modulus, tear resistance, and softness. The skin can be considered as a bilayer of epidermis and dermis. The epidermal layer has a modulus of about 140–600 kPa and a thickness of 0.05–1.5 mm. The dermis has a modulus of 2–80 kPa and a thickness of 0.3–3 mm. This bilayer skin exhibits an elastic linear response for strains less than 15% and a non-linear response at larger strains. To achieve conformability, it is p
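The remark about the outer radius of a bend stretching can be put into numbers with a back-of-the-envelope sketch. Assuming the neutral axis lies mid-thickness (the usual Euler–Bernoulli simplification), the peak tensile strain at the outer surface of a sheet of thickness t bent to radius R is roughly t / (2R). The thickness and bend radius below are made-up example values, not figures from the article.

```python
# Back-of-the-envelope sketch: outer-surface bending strain of a thin substrate,
# eps ~ t / (2R), assuming the neutral axis sits in the middle of the thickness.
def outer_bending_strain(thickness_m: float, bend_radius_m: float) -> float:
    """Return the tensile strain at the outermost surface of a bent sheet."""
    return thickness_m / (2.0 * bend_radius_m)

# Example: a 100-micrometre polymer substrate bent to a 5 mm radius
strain = outer_bending_strain(100e-6, 5e-3)
print(f"outer-surface strain: {strain:.1%}")   # -> 1.0%
```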
https://en.wikipedia.org/wiki/Systematic%20Census%20of%20Australian%20Plants
The Systematic census of Australian plants, with chronologic, literary and geographic annotations, more commonly known as the Systematic Census of Australian Plants, also known by its standard botanic abbreviation Syst. Census Austral. Pl., is a survey of the vascular flora of Australia prepared by the Government Botanist for the state of Victoria, Ferdinand von Mueller, and published in 1882. Von Mueller describes the development of the census in the preface of the volume as an extension of the seven volumes of the Flora Australiensis written by George Bentham. A new flora was necessary because, as more areas of Australia were explored and settled, the flora of the island-continent became better collected and described. The first census increased the number of described species from the 8125 in Flora Australiensis to 8646. The book records all the known species indigenous to Australia and Norfolk Island, with records of species distributions. Von Mueller noted that by 1882 it had become difficult to distinguish some introduced species from native ones: The lines of demarkation between truly indigenous and more recently immigrated plants can no longer in all cases be drawn with precision; but whereas Alchemilla vulgaris and Veronica serpyllifolia were found along with several European Carices in untrodden parts of the Australian Alps during the author's earliest explorations, Alchemilla arvensis and Veronica peregrina were at first only noticed near settlements. The occurrence of Arabis glabra, Geum urbanum, Agrimonia eupatoria, Eupatorium cannabinum, Carpesium cernuum and some others may therefore readily be disputed as indigenous, and some questions concerning the nativity of various of our plants will probably remain for ever involved in doubts. In 1889 an updated edition of the census was published; the Second Systematic Census increased the number of described species to 8839. Von Mueller dedicated both works to Joseph Dalton Hooker and Augustin Pyramus de Candolle.
https://en.wikipedia.org/wiki/Sensitivity%20index
The sensitivity index or discriminability index or detectability index is a dimensionless statistic used in signal detection theory. A higher index indicates that the signal can be more readily detected. Definition The discriminability index is the separation between the means of two distributions (typically the signal and the noise distributions), in units of the standard deviation. Equal variances/covariances For two univariate distributions a and b with the same standard deviation, it is denoted by d′ ('dee-prime'): d′ = |μ_a − μ_b| / σ. In higher dimensions, i.e. with two multivariate distributions with the same variance-covariance matrix Σ (whose symmetric square-root, the standard deviation matrix, is S), this generalizes to the Mahalanobis distance between the two distributions: d′ = √((μ_a − μ_b)ᵀ Σ⁻¹ (μ_a − μ_b)) = ‖S⁻¹(μ_a − μ_b)‖ = ‖μ_a − μ_b‖ / σ_μ, where σ_μ is the 1d slice of the sd along the unit vector μ through the means, i.e. the d′ equals the d′ along the 1d slice through the means. For two bivariate distributions with equal variance-covariance, this is given by: d′² = (d′_x² + d′_y² − 2ρ d′_x d′_y) / (1 − ρ²), where ρ is the correlation coefficient, and here d′_x = (μ_bx − μ_ax)/σ_x and d′_y = (μ_by − μ_ay)/σ_y, i.e. including the signs of the mean differences instead of the absolute. d′ is also estimated as Z(hit rate) − Z(false-alarm rate). Unequal variances/covariances When the two distributions have different standard deviations (or in general dimensions, different covariance matrices), there exist several contending indices, all of which reduce to d′ for equal variance/covariance. Bayes discriminability index This is the maximum (Bayes-optimal) discriminability index for two distributions, based on the amount of their overlap, i.e. the optimal (Bayes) error of classification by an ideal observer e_b, or its complement, the optimal accuracy a_b: d′_b = −2Z(e_b) = 2Z(a_b), where Z is the inverse cumulative distribution function of the standard normal. The Bayes discriminability between univariate or multivariate normal distributions can be numerically computed (Matlab code), and may also be used as an approximation when the distributions are close to normal. d′_b is a positive-definite statistical d
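A short, hypothetical Python sketch (not from the article) of the equal-variance case: it estimates d′ from hit and false-alarm rates using the inverse standard-normal CDF, and computes d′ directly from the means and common standard deviation of two assumed normal distributions. The rates and distribution parameters are made-up example numbers.

```python
# Minimal sketch: two ways to obtain the sensitivity index d' in the
# equal-variance univariate case.
from scipy.stats import norm

# From an experiment: d' = Z(hit rate) - Z(false-alarm rate),
# where Z is the inverse CDF of the standard normal.
hit_rate, fa_rate = 0.85, 0.20
d_prime_est = norm.ppf(hit_rate) - norm.ppf(fa_rate)

# From the distributions themselves: d' = (mu_signal - mu_noise) / sigma
mu_signal, mu_noise, sigma = 1.5, 0.0, 1.0
d_prime_true = (mu_signal - mu_noise) / sigma

print(f"estimated d' = {d_prime_est:.3f}, distribution d' = {d_prime_true:.3f}")
```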
https://en.wikipedia.org/wiki/Retort%20pouch
A retort pouch or retortable pouch is a type of food packaging made from a laminate of flexible plastic and metal foils. It allows the sterile packaging of a wide variety of food and drink handled by aseptic processing, and is used as an alternative to traditional industrial canning methods. Retort pouches are used in field rations, space food, fish products, camping food, instant noodles, and brands such as Capri-Sun and Tasty Bite. Some varieties have a bottom gusset and are known as stand-up pouches. Origin The retort pouch was invented by the United States Army Natick Soldier Research, Development and Engineering Center, Reynolds Metals Company, and Continental Flexible Packaging, who jointly received the Food Technology Industrial Achievement Award for its invention in 1978. Construction A retort pouch is constructed from a flexible metal-plastic laminate that is able to withstand the thermal processing used for sterilization. The food is first prepared, either raw or cooked, and then sealed into the retort pouch. The pouch is then heated to 240–250 °F (116–121 °C) for several minutes under high pressure inside a retort or autoclave machine. The food inside is cooked in a similar way to pressure cooking. This process reliably kills all commonly occurring microorganisms (particularly Clostridium botulinum), preventing it from spoiling. The packaging process is very similar to canning, except that the package itself is flexible. The lamination structure does not allow permeation of gases from outside into the pouch. The retort pouch construction varies from one application to another, as a liquid product needs different barrier properties than a dry product, and similarly an acidic product needs different chemical resistance than a basic product. Some different layers used in retort pouches include: polyester (PET) – provides a gloss and rigid layer, may be printed inside; nylon (bi-oriented polyamide) – provides puncture resistance; aluminum (Al) – provides
https://en.wikipedia.org/wiki/Sznajd%20model
The Sznajd model or United we stand, divided we fall (USDF) model is a sociophysics model introduced in 2000 to gain fundamental understanding about opinion dynamics. The Sznajd model implements a phenomenon called social validation and thus extends the Ising spin model. In simple words, the model states: Social validation: If two people share the same opinion, their neighbors will start to agree with them. Discord destroys: If a block of adjacent persons disagree, their neighbors start to argue with them. Mathematical formulation For simplicity, one assumes that each individual i has an opinion S_i which might be Boolean (S_i = −1 for no, S_i = +1 for yes) in its simplest formulation, which means that each individual either agrees or disagrees with a given question. In the original 1D formulation, each individual has exactly two neighbors, just like beads on a bracelet. At each time step a pair of individuals S_i and S_{i+1} is chosen at random to change their nearest neighbors' opinions (or: Ising spins) S_{i−1} and S_{i+2} according to two dynamical rules: If S_i = S_{i+1}, then S_{i−1} = S_i and S_{i+2} = S_i. This models social validation: if two people share the same opinion, their neighbors will change their opinion. If S_i ≠ S_{i+1}, then S_{i−1} = S_{i+1} and S_{i+2} = S_i. Intuitively: if the given pair of people disagrees, both adopt the opinion of their other neighbor. Findings for the original formulations In a closed (1 dimensional) community, two steady states are always reached, namely complete consensus (which is called ferromagnetic state in physics) or stalemate (the antiferromagnetic state). Furthermore, Monte Carlo simulations showed that these simple rules lead to complicated dynamics, in particular to a power law in the decision time distribution with an exponent of -1.5. Modifications The final (antiferromagnetic) state of alternating all-on and all-off is unrealistic to represent the behavior of a community. It would mean that the complete population uniformly changes their opinion from one time step to the next. For this reason an alternative dynamical ru
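A toy simulation of the two update rules described above may help make them concrete. This is a hypothetical Python sketch, not code from the article; the lattice size and number of sweeps are arbitrary example values, and the second rule follows the original 1D formulation reconstructed above.

```python
# Toy simulation of the original 1D Sznajd (USDF) model on a ring.
import random

N = 100                                    # number of agents on the ring
spins = [random.choice([-1, 1]) for _ in range(N)]

def step(spins):
    """One elementary update: pick a random neighbouring pair (i, i+1)."""
    n = len(spins)
    i = random.randrange(n)
    j = (i + 1) % n
    left, right = (i - 1) % n, (i + 2) % n
    if spins[i] == spins[j]:
        # Social validation: the agreeing pair convinces both outer neighbours.
        spins[left] = spins[i]
        spins[right] = spins[i]
    else:
        # Disagreeing pair (original rule): S_{i-1} = S_{i+1}, S_{i+2} = S_i,
        # which drives the system toward the alternating (antiferromagnetic) state.
        spins[left] = spins[j]
        spins[right] = spins[i]

for _ in range(200 * N):                   # run long enough for a steady state to be likely
    step(spins)

print("magnetisation:", sum(spins) / N)    # +/-1 indicates full consensus
```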
https://en.wikipedia.org/wiki/Starch%20gelatinization
Starch gelatinization is a process of breaking down the intermolecular bonds of starch molecules in the presence of water and heat, allowing the hydrogen bonding sites (the hydroxyl hydrogen and oxygen) to engage more water. This irreversibly dissolves the starch granule in water. Water acts as a plasticizer. Gelatinization Process Three main processes happen to the starch granule: granule swelling, crystallite and double-helical melting, and amylose leaching. Granule swelling: During heating, water is first absorbed in the amorphous space of starch, which leads to a swelling phenomenon. Melting of double helical structures: Water then enters via amorphous regions into the tightly bound areas of double helical structures of amylopectin. At ambient temperatures these crystalline regions do not allow water to enter. Heat causes such regions to become diffuse: the amylose chains begin to dissolve and separate into an amorphous form, and the number and size of crystalline regions decrease. Under the microscope in polarized light, starch loses its birefringence and its extinction cross. Amylose leaching: Penetration of water thus increases the randomness in the starch granule structure, and causes swelling; eventually amylose molecules leach into the surrounding water and the granule structure disintegrates. The gelatinization temperature of starch depends upon plant type and the amount of water present, pH, types and concentration of salt, sugar, fat and protein in the recipe, as well as the starch derivatisation technology used. Some types of unmodified native starches start swelling at 55 °C, other types at 85 °C. The gelatinization temperature of modified starch depends on, for example, the degree of cross-linking, acid treatment, or acetylation. Gel temperature can also be modified by genetic manipulation of starch synthase genes. Gelatinization temperature also depends on the amount of damaged starch granules; these will swell faster. Damaged starch can be