https://en.wikipedia.org/wiki/Output%20compare
|
Output compare is the ability to trigger an output based on a timestamp in memory, without interrupting the execution of code by a processor or microcontroller. This is a functionality provided by many embedded systems.
The corresponding ability to record a timestamp in memory when an input occurs is called input capture.
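As a rough illustration of the idea (a toy software model only, not drawn from the Microchip documentation cited below and not the register map of any real microcontroller), the following Python sketch compares a value held in a compare register against a free-running counter and toggles the output pin at the match, with no application code running at that moment:

# Toy model of an output-compare channel: a free-running timer counter is
# compared against a value stored in a compare register, and the output pin
# toggles when they match. Illustrative only; names are invented for the sketch.
class OutputCompareChannel:
    def __init__(self, compare_value):
        self.compare_value = compare_value  # "timestamp" held in memory
        self.pin_state = 0                  # simulated output pin

    def on_timer_tick(self, counter):
        # Hardware comparator: toggle the pin when the counter matches.
        if counter == self.compare_value:
            self.pin_state ^= 1

channel = OutputCompareChannel(compare_value=5)
for count in range(10):                     # free-running counter values 0..9
    channel.on_timer_tick(count)
    print(count, channel.pin_state)         # pin toggles at count 5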
Embedded systems
Microchip Documentation on Output Compare: DS39706A-page 16-1 - Section 16. Output Compare http://ww1.microchip.com/downloads/en/DeviceDoc/39706a.pdf
|
https://en.wikipedia.org/wiki/Argument%20%28complex%20analysis%29
|
In mathematics (particularly in complex analysis), the argument of a complex number z, denoted arg(z), is the angle between the positive real axis and the line joining the origin and z, represented as a point in the complex plane.
It is a multivalued function operating on the nonzero complex numbers.
To define a single-valued function, the principal value of the argument (sometimes denoted Arg z) is used. It is often chosen to be the unique value of the argument that lies within the interval (−π, π].
Definition
An argument of the complex number z = x + iy, denoted arg(z), is defined in two equivalent ways:
Geometrically, in the complex plane, as the 2D polar angle φ from the positive real axis to the vector representing z. The numeric value is given by the angle in radians, and is positive if measured counterclockwise.
Algebraically, as any real quantity φ such that z = r(cos φ + i sin φ) for some positive real r (see Euler's formula). The quantity r is the modulus (or absolute value) of z, denoted |z|: r = |z| = √(x² + y²).
The names magnitude, for the modulus, and phase, for the argument, are sometimes used equivalently.
Under both definitions, it can be seen that the argument of any non-zero complex number has many possible values: firstly, as a geometrical angle, it is clear that whole circle rotations do not change the point, so angles differing by an integer multiple of 2π radians (a complete circle) are the same. Similarly, from the periodicity of sine and cosine, the second definition also has this property. The argument of zero is usually left undefined.
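In standard notation, with z = x + iy and the principal value taken in (−π, π], the principal argument and the full multivalued argument can be written as follows (a standard restatement, not quoted from the article):

\operatorname{Arg}(x + iy) =
\begin{cases}
\arctan(y/x) & x > 0,\\
\arctan(y/x) + \pi & x < 0,\ y \ge 0,\\
\arctan(y/x) - \pi & x < 0,\ y < 0,\\
\pi/2 & x = 0,\ y > 0,\\
-\pi/2 & x = 0,\ y < 0,
\end{cases}
\qquad
\arg(z) = \{\, \operatorname{Arg}(z) + 2\pi k : k \in \mathbb{Z} \,\}, \quad z \neq 0.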
Alternative definition
The complex argument can also be defined algebraically in terms of complex roots as:
This definition removes reliance on other difficult-to-compute functions such as arctangent, as well as eliminating the need for the piecewise definition. Because it is defined in terms of roots, it also inherits the principal branch of square root as its own principal branch. The normalization of z by dividing by |z| is
|
https://en.wikipedia.org/wiki/Deconvolution
|
In mathematics, deconvolution is the operation inverse to convolution. Both operations are used in signal processing and image processing. For example, it may be possible to recover, with a certain degree of accuracy, the original signal after it has passed through a filter (convolution) by using a deconvolution method. Due to the measurement error of the recorded signal or image, the worse the signal-to-noise ratio (SNR), the worse the reversal of the filter will be; hence, simply inverting a filter is not always a good solution, as the error is amplified. Deconvolution offers a solution to this problem.
The foundations for deconvolution and time-series analysis were largely laid by Norbert Wiener of the Massachusetts Institute of Technology in his book Extrapolation, Interpolation, and Smoothing of Stationary Time Series (1949). The book was based on work Wiener had done during World War II but that had been classified at the time. Some of the early attempts to apply these theories were in the fields of weather forecasting and economics.
Description
In general, the objective of deconvolution is to find the solution f of a convolution equation of the form: f ∗ g = h
Usually, h is some recorded signal, and f is some signal that we wish to recover, but which has been convolved with a filter or distortion function g before we recorded it. Usually, h is a distorted version of f, and the shape of f cannot be easily recognized by the eye or by simpler time-domain operations. The function g represents the impulse response of an instrument or a driving force that was applied to a physical system. If we know g, or at least know the form of g, then we can perform deterministic deconvolution. However, if we do not know g in advance, then we need to estimate it. This can be done using methods of statistical estimation or by modelling the physical principles of the underlying system, such as the electrical circuit equations or diffusion equations.
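As a generic illustration of deterministic deconvolution when g is known (a textbook frequency-domain approach, not a specific method described in the article), the Python sketch below convolves an assumed signal with a known impulse response and then inverts the filter with a small regularization term so that measurement noise is not amplified without bound; all signal values are invented for the example:

import numpy as np

# Known impulse response g and original signal f (assumed for illustration).
f = np.zeros(128)
f[30], f[60], f[61] = 1.0, 0.5, -0.7          # sparse "true" signal
g = np.exp(-np.arange(128) / 5.0)             # smearing filter (impulse response)

h = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))   # h = f * g (circular convolution)
h += 0.001 * np.random.randn(128)                          # measurement noise

# Naive frequency-domain deconvolution with a small regularization term
# (a crude stand-in for Wiener filtering) to avoid dividing by near-zero values.
G = np.fft.fft(g)
eps = 1e-3
f_est = np.real(np.fft.ifft(np.fft.fft(h) * np.conj(G) / (np.abs(G) ** 2 + eps)))

print("peak of recovered signal:", int(np.argmax(f_est)))   # near index 30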
There are several deconvolution techniques, depend
|
https://en.wikipedia.org/wiki/Design%20closure
|
Design Closure is a part of the digital electronic design automation workflow by which an integrated circuit (i.e. VLSI) design is modified from its initial description to meet a growing list of design constraints and objectives.
Every step in the IC design (such as static timing analysis, placement, routing, and so on) is already complex and often forms its own field of study. This article, however, looks at the overall design closure process, which takes a chip from its initial design state to the final form in which all of its design constraints are met.
Introduction
Every chip starts off as someone’s idea of a good thing: "If we can make a part that performs function X, we will all be rich!" Once the concept is established, someone from marketing says "To make this chip profitably, it must cost $C and run at frequency F." Someone from manufacturing says "To meet this chip’s targets, it must have a yield of Y%." Someone from packaging says “It must fit in the P package and dissipate no more than W watts.” Eventually, the team generates an extensive list of all the constraints and objectives they must meet to manufacture a product that can be sold profitably. The management then forms a design team, which consists of chip architects, logic designers, functional verification engineers, physical designers, and timing engineers, and assigns them to create a chip to the specifications.
Constraints vs Objectives
The distinction between constraints and objectives is straightforward: a constraint is a design target that must be met for the design to be successful. For example, a chip may be required to run at a specific frequency so it can interface with other components in a system. In contrast, an objective is a design target where more
(or less) is better. For example, yield is generally an objective, which is maximized to lower manufacturing cost. For the purposes of design closure, the distinction between constraints and objectives is not important; this artic
|
https://en.wikipedia.org/wiki/Holozoic%20nutrition
|
Holozoic nutrition (Greek: holo- whole; zoikos- of animals) is a type of heterotrophic nutrition that is characterized by the internalization (ingestion) and internal processing of liquid or solid food particles. Protozoa, such as amoebas, and most free-living animals, such as humans, exhibit this type of nutrition, in which food is taken into the body as a liquid or solid and then broken down further. Most animals exhibit this kind of nutrition.
In Holozoic nutrition, the energy and organic building blocks are obtained by ingesting and then digesting other organisms or pieces of other organisms, including blood and decaying organic matter. This contrasts with holophytic nutrition, in which energy and organic building blocks are obtained through photosynthesis or chemosynthesis, and with saprozoic nutrition, in which digestive enzymes are released externally and the resulting monomers (small organic molecules) are absorbed directly from the environment.
There are several stages of holozoic nutrition, which often occur in separate compartments within an organism (such as the stomach and intestines):
1. Ingestion: In animals, this is merely taking food in through the mouth. In protozoa, this most commonly occurs through phagocytosis.
2. Digestion: The physical breakdown of large, complex food particles and the enzymatic breakdown of complex organic compounds into small, simple molecules.
3. Absorption: The active and passive transport of the chemical products of digestion out of the food-containing compartment and into the body.
4. Assimilation: The chemical products of digestion are used for various metabolic processes.
|
https://en.wikipedia.org/wiki/IAR%20Systems
|
IAR Systems is a Swedish computer software company that offers development tools for embedded systems. IAR Systems was founded in 1983, and is listed on Nasdaq Nordic in Stockholm. IAR is an abbreviation of Ingenjörsfirma Anders Rundgren, which means Anders Rundgren Engineering Company.
IAR Systems develops C and C++ language compilers, debuggers, and other tools for developing and debugging firmware for 8-, 16-, and 32-bit processors. The firm began in the 8-bit market but moved into the expanding 32-bit market, particularly for 32-bit microcontrollers.
IAR Systems is headquartered in Uppsala, Sweden, and has more than 200 employees globally. The company operates subsidiaries in Germany, France, Japan, South Korea, China, United States, and United Kingdom and reaches the rest of the world through distributors. IAR Systems is a subsidiary of IAR Systems Group.
Products
IAR Embedded Workbench – a development environment that includes a C/C++ compiler, code analysis tools C-STAT and C-RUN, security tools C-Trust and Embedded Trust, and debugging and trace probes
Functional Safety Certification option
Visual State – a design tool for developing event-driven programming systems based on the event-driven finite-state machine paradigm. IAR Visual State presents the developer with the finite-state machine subset of Unified Modeling Language (UML) for C/C++ code generation. By restricting the design abilities to state machines, it is possible to employ formal model checking to find and flag unwanted properties like state dead-ends and unreachable parts of the design. It is not a full UML editor.
IAR KickStart Kit – a series of software and hardware evaluation environments based on various microcontrollers.
IAR Embedded Workbench
The toolchain IAR Embedded Workbench, which supports more than 30 different processor families, is a complete integrated development environment (IDE) with compiler, analysis tools, debugger, functional safety, and security. The development too
|
https://en.wikipedia.org/wiki/List%20of%20Banach%20spaces
|
In the mathematical field of functional analysis, Banach spaces are among the most important objects of study. In other areas of mathematical analysis, most spaces which arise in practice turn out to be Banach spaces as well.
Classical Banach spaces
According to , the classical Banach spaces are those defined by , which is the source for the following table.
Banach spaces in other areas of analysis
The Asplund spaces
The Hardy spaces
The space of functions of bounded mean oscillation
The space of functions of bounded variation
Sobolev spaces
The Birnbaum–Orlicz spaces
Hölder spaces
Lorentz space
Banach spaces serving as counterexamples
James' space, a Banach space that has a Schauder basis but no unconditional Schauder basis. Also, James' space is isometrically isomorphic to its double dual, but fails to be reflexive.
Tsirelson space, a reflexive Banach space in which neither ℓp nor c0 can be embedded.
W. T. Gowers' construction of a space X that is isomorphic to X ⊕ X ⊕ X but not to X ⊕ X serves as a counterexample for weakening the premises of the Schroeder–Bernstein theorem.
See also
Notes
|
https://en.wikipedia.org/wiki/Paradox%20of%20the%20plankton
|
In aquatic biology, the paradox of the plankton describes the situation in which a limited range of resources supports an unexpectedly wide range of plankton species, apparently flouting the competitive exclusion principle which holds that when two species compete for the same resource, one will be driven to extinction.
Ecological paradox
The paradox of the plankton results from the clash between the observed diversity of plankton and the competitive exclusion principle, also known as Gause's law, which states that, when two species compete for the same resource, ultimately only one will persist and the other will be driven to extinction. Coexistence between two such species is impossible because the dominant one will inevitably deplete the shared resources, thus decimating the inferior population. Phytoplankton life is diverse at all phylogenetic levels despite the limited range of resources (e.g. light, nitrate, phosphate, silicic acid, iron) for which they compete amongst themselves. The paradox of the plankton was originally described in 1961 by G. Evelyn Hutchinson, who proposed that the paradox could be resolved by factors such as vertical gradients of light or turbulence, symbiosis or commensalism, differential predation, or constantly changing environmental conditions.
Later studies found that the paradox can be resolved by factors such as: zooplankton grazing pressure; chaotic fluid motion; size-selective grazing; spatio-temporal heterogeneity; bacterial mediation; or environmental fluctuations. In general, researchers suggest that ecological and environmental factors continually interact such that the planktonic habitat never reaches an equilibrium for which a single species is favoured.
While it was long assumed that turbulence disrupts plankton patches at spatial scales less than a few metres, researchers using small-scale analysis of plankton distribution found that these exhibited patches of aggregation — on the order of 10 cm — that had suffic
|
https://en.wikipedia.org/wiki/Nocturnality
|
Nocturnality is a behavior in some non-human animals characterized by being active during the night and sleeping during the day. The common adjective is "nocturnal", versus diurnal meaning the opposite.
Nocturnal creatures generally have highly developed senses of hearing and smell, and specially adapted eyesight. Some animals, such as cats and ferrets, have eyes that can adapt to both low-level and bright daytime levels of illumination (see metaturnal). Others, such as bushbabies and (some) bats, can function only at night. Many nocturnal creatures, including tarsiers and some owls, have large eyes in comparison with their body size to compensate for the lower light levels at night. More specifically, they have been found to have a larger cornea relative to their eye size than diurnal creatures, which increases their visual sensitivity in low-light conditions. Nocturnality helps wasps, such as Apoica flavissima, avoid hunting in intense sunlight.
Diurnal animals, including humans (except for night owls), squirrels and songbirds, are active during the daytime. Crepuscular species, such as rabbits, skunks, tigers and hyenas, are often erroneously referred to as nocturnal. Cathemeral species, such as fossas and lions, are active both in the day and at night.
Origins
While it is difficult to say which came first, nocturnality or diurnality, a hypothesis in evolutionary biology, the nocturnal bottleneck theory, postulates that in the Mesozoic, many ancestors of modern-day mammals evolved nocturnal characteristics in order to avoid contact with the numerous diurnal predators. A recent study attempts to answer the question as to why so many modern day mammals retain these nocturnal characteristics even though they are not active at night. The leading answer is that the high visual acuity that comes with diurnal characteristics is not needed anymore due to the evolution of compensatory sensory systems, such as a heightened sense of smell and more astute auditory systems. In a recent study, rece
|
https://en.wikipedia.org/wiki/Camera%20trap
|
A camera trap is a camera that is automatically triggered by a change in some activity in its vicinity, like the presence of an animal or a human being. It is typically equipped with a motion sensor – usually a passive infrared (PIR) sensor or an active infrared (AIR) sensor using an infrared light beam.
Camera trapping is a method for capturing wild animals on film when researchers are not present, and has been used in ecological research for decades. In addition to applications in hunting and wildlife viewing, research applications include studies of nest ecology, detection of rare species, estimation of population size and species richness, and research on habitat use and occupation of human-built structures.
Camera traps, also known as trail cameras, are used to capture images of wildlife with as little human interference as possible. Since the introduction of commercial infrared-triggered cameras in the early 1990s, their use has increased. With advancements in the quality of camera equipment, this method of field observation has become more popular among researchers. Hunting has played an important role in development of camera traps, since hunters use them to scout for game. These hunters have opened a commercial market for the devices, leading to many improvements over time.
Application
The great advantage of camera traps is that they can record very accurate data without disturbing the photographed animal. These data are superior to human observations because they can be reviewed by other researchers.
They minimally disturb wildlife and can replace the use of more invasive survey and monitoring techniques such as live trap and release. They operate continually and silently, provide proof of species present in an area, can reveal what prints and scats belong to which species, provide evidence for management and policy decisions, and are a cost-effective monitoring tool. Infrared flash cameras have low disturbance and visibility. Besides olfactory and aco
|
https://en.wikipedia.org/wiki/Palmitate%20mediated%20localization
|
Palmitate mediated localization is a biological process that traffics a palmitoylated protein to ordered lipid domains.
Biological function
One function is thought to be clustering proteins to increase the efficiency of protein-protein interactions and facilitate biological processes. In the opposite scenario, palmitate mediated localization sequesters proteins away from a non-localized molecule. In theory, disruption of palmitate mediated localization then allows a transient interaction of two molecules through lipid mixing. In the case of an enzyme, palmitate can sequester an enzyme away from its substrate. Disruption of palmitate mediated localization then activates the enzyme by substrate presentation.
Mechanism of sequestration
Palmitate mediated localization utilizes lipid partitioning and the formation of lipid rafts. Sequestration of palmitoylated proteins is regulated by cholesterol. Depletion of cholesterol with methyl-beta cyclodextrin disrupts palmitate mediated localization.
|
https://en.wikipedia.org/wiki/P2PTV
|
P2PTV refers to peer-to-peer (P2P) software applications designed to redistribute video streams in real time on a P2P network; the distributed video streams are typically TV channels from all over the world but may also come from other sources. The appeal of these applications is significant because they have the potential to make any TV channel globally available: any individual can feed a stream into the network, and each peer joining to watch the video acts as a relay to other peer viewers, allowing a scalable distribution among a large audience with no incremental cost for the source.
Technology and use
In a P2PTV system, each user, while downloading a video stream, is simultaneously also uploading that stream to other users, thus contributing to the overall available bandwidth. The arriving streams are typically a few minutes time-delayed compared to the original sources. The video quality of the channels usually depends on how many users are watching; the video quality is better if there are more users.
The architecture of many P2PTV networks can be thought of as real-time versions of BitTorrent: if a user wishes to view a certain channel, the P2PTV software contacts a "tracker server" for that channel in order to obtain addresses of peers who distribute that channel; it then contacts these peers to receive the feed. The tracker records the user's address, so that it can be given to other users who wish to view the same channel. In effect, this creates an overlay network on top of the regular internet for the distribution of real-time video content.
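A purely illustrative, in-memory model of the tracker bookkeeping described above is sketched below in Python; the class and method names are invented for the example and do not correspond to any real P2PTV application or protocol:

# Toy model of tracker bookkeeping: a peer asks the tracker for other peers on a
# channel, and the tracker records the requester so its address can be handed to
# later arrivals. All names here are invented for illustration only.
class Tracker:
    def __init__(self):
        self.peers_by_channel = {}          # channel -> set of peer addresses

    def join(self, channel, peer_addr):
        known = self.peers_by_channel.setdefault(channel, set())
        others = sorted(known)              # peers the newcomer can pull the feed from
        known.add(peer_addr)                # remember the newcomer for future requests
        return others

tracker = Tracker()
print(tracker.join("news-channel", "10.0.0.1:6000"))   # [] -- first viewer
print(tracker.join("news-channel", "10.0.0.2:6000"))   # ['10.0.0.1:6000']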
The need for a tracker can also be eliminated by the use of distributed hash table technology.
Some applications allow users to broadcast their own streams, whether self-produced, obtained from a video file, or through a TV tuner card or video capture card. Many of the commercial P2PTV applications were developed in China (TVUPlayer, PPLive, QQLive, PPStream). The majority of available applications broadcast mainly
|
https://en.wikipedia.org/wiki/Byte%20addressing
|
Byte addressing in hardware architectures supports accessing individual bytes. Computers with byte addressing are sometimes called byte machines, in contrast to word-addressable architectures (word machines), which access data by word.
Background
The basic unit of digital storage is a bit, storing a single 0 or 1. Many common instruction set architectures can address more than 8 bits of data at a time. For example, 32-bit x86 processors have 32-bit general-purpose registers and can handle 32-bit (4-byte) data in single instructions. However, data in memory may be of various lengths. Instruction sets that support byte addressing support accessing data in units that are narrower than the word length. An eight-bit processor like the Intel 8008 addresses eight bits, but as this is the full width of the accumulator and other registers, it could be considered either byte-addressable or word-addressable. 32-bit x86 processors, which address memory in 8-bit units but have 32-bit general-purpose registers and can operate on 32-bit items with a single instruction, are byte-addressable.
The advantage of word addressing is that more memory can be addressed in the same number of bits. The IBM 7094 has 15-bit addresses, so it could address 32,768 words of 36 bits. The machines were often built with a full complement of addressable memory. Addressing 32,768 bytes of 6 bits would have been much less useful for scientific and engineering users. Or consider 32-bit x86 processors. Their 32-bit linear addresses can address 4 billion different items. Using word addressing, a 32-bit processor could address 4 gigawords, or 16 gigabytes using the modern 8-bit byte. If the 386 and its successors had used word addressing, scientists, engineers, and gamers could all have run programs that were 4x larger on 32-bit machines. However, word processing, rendering HTML, and all other text applications would have run more slowly.
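The address-space arithmetic in the 32-bit example above can be checked directly; the short Python sketch below evaluates byte addressing versus word addressing with 4-byte words:

# Address-space arithmetic for the 32-bit example above.
address_bits = 32
word_bytes = 4                      # a 32-bit word is 4 modern 8-bit bytes

byte_addressable = 2 ** address_bits                  # addressable items are bytes
word_addressable = 2 ** address_bits * word_bytes     # addressable items are 4-byte words

print(f"byte addressing: {byte_addressable / 2**30:.0f} GiB")   # 4 GiB
print(f"word addressing: {word_addressable / 2**30:.0f} GiB")   # 16 GiB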
When computers were so costly that they were only or mainly used
|
https://en.wikipedia.org/wiki/Computer-aided%20maintenance
|
Computer-aided maintenance (not to be confused with CAM which usually stands for Computer Aided Manufacturing) refers to systems that utilize software to organize planning, scheduling, and support of maintenance and repair. A common application of such systems is the maintenance of computers, either hardware or software, themselves. It can also apply to the maintenance of other complex systems that require periodic maintenance, such as reminding operators that preventive maintenance is due or even predicting when such maintenance should be performed based on recorded past experience.
Computer aided configuration
The first computer-aided maintenance software came from DEC in the 1980s to configure VAX computers. The software was built using the techniques of artificial intelligence expert systems, because the problem of configuring a VAX required expert knowledge. During the research, the software was called R1 and was renamed XCON when placed in service. Fundamentally, XCON was a rule-based configuration database written as an expert system using forward chaining rules. As one of the first expert systems to be pressed into commercial service it created high expectations, which did not materialize, as DEC lost commercial pre-eminence.
Help Desk software
Help desks frequently use help desk software that captures symptoms of a bug and relates them to fixes, in a fix database. One of the problems with this approach is that the understanding of the problem is embodied in a non-human way, so that solutions are not unified.
Strategies for finding fixes
The bubble-up strategy simply records pairs of symptoms and fixes. The most frequent set of pairs is then presented as a tentative solution, which is then attempted. If the fix works, that fact is further recorded, along with the configuration of the presenting system, into a solutions database.
Oddly enough, shutting down and booting up again manages to 'fix,' or at least 'mask,' a bug in many computer-based systems;
|
https://en.wikipedia.org/wiki/Ayanna%20Williams
|
Ayanna Williams is an American who holds the world record for the longest fingernails on a single hand for a woman, with a combined length of 576.4 centimeters (181.09 inches). She is also ranked second in the world for the longest fingernails overall, considering both genders, behind India's Shridhar Chillal, whose nails had a combined length of 1000.6 centimeters (358.1 inches). Ayanna was awarded the Guinness World Record in 2018 for being the woman with the longest fingernails in the world.
Biography
Ayanna pursued her interest in growing nails and engaged in nail art from a young age. She spent more than two decades growing her nails without cutting them. Although proud of her record-breaking nails, Ayanna faced increasing difficulties due to their weight. She found it difficult to carry out day-to-day activities such as washing dishes and putting sheets on a bed.
In 2021, she decided to cut her nails. On 9 April 2021, she had her fingernails cut by Allison Readinger of Trinity Vista Dermatology using an electronic rotary power tool at the Ripley's Believe It or Not! museum in New York City, where the nails were put on display for the public.
The nails were measured one last time in 2021, with a final reading of 733.55 centimeters (240.7 inches), before they were cut.
See also
Lee Redmond, who held the record for the longest fingernails on both hands.
|
https://en.wikipedia.org/wiki/Competitive%20exclusion%20principle
|
In ecology, the competitive exclusion principle, sometimes referred to as Gause's law, is a proposition that two species which compete for the same limited resource cannot coexist at constant population values. When one species has even the slightest advantage over another, the one with the advantage will dominate in the long term. This leads either to the extinction of the weaker competitor or to an evolutionary or behavioral shift toward a different ecological niche. The principle has been paraphrased in the maxim "complete competitors can not coexist".
History
The competitive exclusion principle is classically attributed to Georgy Gause, although he actually never formulated it. The principle is already present in Darwin's theory of natural selection.
Throughout its history, the status of the principle has oscillated between a priori ('two species coexisting must have different niches') and experimental truth ('we find that species coexisting do have different niches').
Experimental basis
Based on field observations, Joseph Grinnell formulated the principle of competitive exclusion in 1904: "Two species of approximately the same food habits are not likely to remain long evenly balanced in numbers in the same region. One will crowd out the other". Georgy Gause formulated the law of competitive exclusion based on laboratory competition experiments using two species of Paramecium, P. aurelia and P. caudatum. The conditions were to add fresh water every day and input a constant flow of food. Although P. caudatum initially dominated, P. aurelia recovered and subsequently drove P. caudatum extinct via exploitative resource competition. However, Gause was able to let P. caudatum survive by varying the environmental parameters (food, water). Thus, Gause's law is valid only if the ecological factors are constant.
Gause also studied competition between two species of yeast, finding that Saccharomyces cerevisiae consistently outcompeted Schizosaccharomyces kefir
|
https://en.wikipedia.org/wiki/Windows%20Vista%20networking%20technologies
|
In computing, Microsoft's Windows Vista and Windows Server 2008 introduced, in 2007 and 2008 respectively, a new networking stack named the Next Generation TCP/IP stack, intended to improve on the previous stack in several ways.
The stack includes a native implementation of IPv6, as well as a complete overhaul of IPv4. The new TCP/IP stack uses a new method to store configuration settings that enables more dynamic control and does not require a computer restart after a change in settings. The new stack, implemented as a dual-stack model, depends on a strong host model and features an infrastructure to enable more modular components that can be dynamically inserted and removed.
Architecture
The Next Generation TCP/IP stack connects to NICs via a Network Driver Interface Specification (NDIS) driver. The network stack, implemented in tcpip.sys, implements the Transport, Network and Data link layers of the TCP/IP model. The Transport layer includes implementations for TCP, UDP and unformatted RAW protocols. At the Network layer, IPv4 and IPv6 protocols are implemented in a dual-stack architecture. And the Data link layer (also called the Framing layer) implements 802.3, 802.1, PPP, Loopback and tunnelling protocols. Each layer can accommodate Windows Filtering Platform (WFP) shims, which allow packets at that layer to be introspected and also host the WFP Callout API. The networking API is exposed via three components:
Winsock – A user mode API for abstracting network communication using sockets and ports. Datagram sockets are used for UDP, whereas stream sockets are for TCP. While Winsock is a user mode library, it uses a kernel mode driver, called the Ancillary Function Driver (AFD), to implement certain functionality.
Winsock Kernel (WSK) – A kernel-mode API providing the same socket-and-port abstraction as Winsock, while exposing other features such as asynchronous I/O using I/O request packets.
Transport Driver Interface (TDI) – A kernel-mode API which can be used for legacy protocols like NetBIOS. It includes a
|
https://en.wikipedia.org/wiki/Monolithic%20microwave%20integrated%20circuit
|
Monolithic microwave integrated circuit, or MMIC (sometimes pronounced "mimic"), is a type of integrated circuit (IC) device that operates at microwave frequencies (300 MHz to 300 GHz). These devices typically perform functions such as microwave mixing, power amplification, low-noise amplification, and high-frequency switching. Inputs and outputs on MMIC devices are frequently matched to a characteristic impedance of 50 ohms. This makes them easier to use, as cascading of MMICs does not then require an external matching network. Additionally, most microwave test equipment is designed to operate in a 50-ohm environment.
MMICs are dimensionally small (from around 1 mm² to 10 mm²) and can be mass-produced, which has allowed the proliferation of high-frequency devices such as cellular phones. MMICs were originally fabricated using gallium arsenide (GaAs), a III-V compound semiconductor. It has two fundamental advantages over silicon (Si), the traditional material for IC realisation: device (transistor) speed and a semi-insulating substrate. Both factors help with the design of high-frequency circuit functions. However, the speed of Si-based technologies has gradually increased as transistor feature sizes have reduced, and MMICs can now also be fabricated in Si technology. The primary advantage of Si technology is its lower fabrication cost compared with GaAs. Silicon wafer diameters are larger (typically 8" to 12" compared with 4" to 8" for GaAs) and the wafer costs are lower, contributing to a less expensive IC.
Originally, MMICs used metal-semiconductor field-effect transistors (MESFETs) as the active device. More recently high-electron-mobility transistor (HEMTs), pseudomorphic HEMTs and heterojunction bipolar transistors have become common.
Other III-V technologies, such as indium phosphide (InP), have been shown to offer superior performance to GaAs in terms of gain, higher cutoff frequency, and low noise. However, they also tend to be more expensive due to smal
|
https://en.wikipedia.org/wiki/Adam%27s%20apple
|
The Adam's apple or laryngeal prominence is the protrusion in the human neck formed by the angle of the thyroid cartilage surrounding the larynx, typically visible in men, less frequently in women. The prominence of the Adam's apple increases as a secondary male sex characteristic in puberty.
Structure
The topographic structure which is externally visible and colloquially called the "Adam's apple" is caused by an anatomical structure of the thyroid cartilage called the laryngeal prominence or laryngeal protuberance protruding and forming a "bump" under the skin at the front of the throat. All human beings with a normal anatomy have a laryngeal protuberance of the thyroid cartilage. This prominence is typically larger and more externally noticeable in adult males. There are two reasons for this phenomenon. Firstly, the structural size of the thyroid cartilage in males tends to increase during puberty, and the laryngeal protuberance becomes more anteriorly focused. Secondly, the larynx, which the thyroid cartilage partially envelops, increases in size in male subjects during adolescence, moving the thyroid cartilage and its laryngeal protuberance towards the front of the neck. The adolescent development of both the larynx and the thyroid cartilage in males occurs as a result of hormonal changes, especially the normal increase in testosterone production in adolescent males. In females, the laryngeal protuberance sits on the upper edge of the thyroid cartilage, and the larynx tends to be smaller in size, so the "bump" caused by protrusion of the laryngeal protuberance is much less visible or not discernible. Even so, many women display an externally visible protrusion of the thyroid cartilage, an "Adam's apple", to varying degrees which are usually minor, and this should not normally be viewed as a medical disorder.
Function
The Adam's apple, in relation with the thyroid cartilage which forms it, helps protect the walls and the frontal part of the larynx, includin
|
https://en.wikipedia.org/wiki/Language%20of%20mathematics
|
The language of mathematics or mathematical language is an extension of natural language (for example English) that is used in mathematics and in science for expressing results (scientific laws, theorems, proofs, logical deductions, etc.) with concision, precision and unambiguity.
Features
The main features of the mathematical language are the following.
Use of common words with a derived meaning, generally more specific and more precise. For example, "or" means "one, the other or both", while, in common language, "both" is sometimes included and sometimes not. Also, a "line" is straight and has zero width.
Use of common words with a meaning that is completely different from their common meaning. For example, a mathematical ring is not related to any other meaning of "ring". Real numbers and imaginary numbers are two sorts of numbers, none being more real or more imaginary than the others.
Use of neologisms. For example, polynomial and homomorphism.
Use of symbols as words or phrases. For example, "x = y" and "x < y" are respectively read as "x equals y" and "x is less than y".
Use of formulas as part of sentences. For example: "E = mc² represents quantitatively the mass–energy equivalence." A formula that is not included in a sentence is generally meaningless, since the meaning of the symbols may depend on the context: in E = mc², it is the context that specifies that E is the energy of a physical body, m is its mass, and c is the speed of light.
Use of mathematical jargon that consists of phrases that are used for informal explanations or shorthands. For example, "killing" is often used in place of "replacing with zero", and this led to the use of assassinator and annihilator as technical words.
Understanding mathematical text
The consequence of these features is that a mathematical text is generally not understandable without some prerequisite knowledge. For example, the sentence "a free module is a module that has a basis" is perfectly correct, although it appears only as grammatically correct nonsense,
|
https://en.wikipedia.org/wiki/Organography
|
Organography (from Greek organo, "organ", and -graphy) is the scientific description of the structure and function of the organs of living things.
History
Organography as a scientific study starts with Aristotle, who considered the parts of plants as "organs" and began to consider the relationship between different organs and different functions. In the 17th century, Joachim Jung clearly articulated that plants are composed of different organ types such as root, stem and leaf, and he went on to define these organ types on the basis of form and position.
In the following century Caspar Friedrich Wolff was able to follow the development of organs from the "growing points" or apical meristems. He noted the commonality of development between foliage leaves and floral leaves (e.g. petals) and wrote: "In the whole plant, whose parts we wonder at as being, at the first glance, so extraordinarily diverse, I finally perceive and recognize nothing beyond leaves and stem (for the root may be regarded as a stem). Consequently all parts of the plant, except the stem, are modified leaves."
Similar views were propounded at about the same time by Goethe in his well-known treatise. He wrote: "The underlying relationship between the various external parts of the plant, such as the leaves, the calyx, the corolla, the stamens, which develop one after the other and, as it were, out of one another, has long been generally recognized by investigators, and has in fact been specially studied; and the operation by which one and the same organ presents itself to us in various forms has been termed Metamorphosis of Plants."
See also
morphology (biology)
|
https://en.wikipedia.org/wiki/Single-core
|
A single-core processor is a microprocessor with a single core on its die. It performs the fetch-decode-execute cycle once per clock-cycle, as it only runs on one thread. A computer using a single core CPU is generally slower than a multi-core system.
Single core processors used to be widespread in desktop computers, but as applications demanded more processing power, the slower speed of single core systems became a detriment to performance. Windows supported single-core processors up until the release of Windows 11, where a dual-core processor is required.
Single core processors are still in use in some niche circumstances. Some older legacy systems, like those running antiquated operating systems (e.g. Windows 98), cannot gain any benefit from multi-core processors. Single core processors are also used in hobbyist computers like the Raspberry Pi and in single-board microcontrollers. The production of single-core desktop processors ended in 2013 with the Celeron G470.
Development
The first single core processor was the Intel 4004, which was commercially released on November 15, 1971 by Intel. Since then many improvements have been made to single core processors, going from the 740 kHz of the Intel 4004 to the 2 GHz Celeron G470.
Advantages
Single core processors draw less power than larger, multi-core processors.
Single core processors can be made a lot more cheaply than multi core systems, meaning they can be used in embedded systems.
Disadvantages
Single core processors are generally outperformed by multi-core processors.
Single core processors are more likely to bottleneck with faster peripheral components, as these components have to wait for the CPU to finish its cycle.
Single core processors lack hardware parallelism, meaning only one thread can execute at a time. This reduces performance, as other processes have to wait for processor time, which can lead to process starvation.
Increasing parallel trend
Single-core means one processor on a die. Since about 2012, e
|
https://en.wikipedia.org/wiki/Multi-project%20wafer%20service
|
Multi-project chip (MPC), and multi-project wafer (MPW) semiconductor manufacturing arrangements allow customers to share mask and microelectronics wafer fabrication cost between several designs or projects.
With the MPC arrangement, one chip is a combination of several designs, and this combined chip is then repeated all over the wafer during manufacturing. The MPC arrangement typically yields a roughly equal number of chips of each design per wafer.
With the MPW arrangement, different chip designs are aggregated on a wafer, with perhaps a different number of designs/projects per wafer. This is made possible by novel mask making and exposure systems in photolithography during IC manufacturing. MPW builds upon the older MPC procedures and enables more effective support for the different phases and manufacturing-volume needs of different designs/projects. The MPW arrangement supports education, research into new circuit architectures and structures, prototyping, and even small-volume production.
Worldwide, several MPW services are available from companies, semiconductor foundries and government-supported institutions. Originally both MPC and MPW arrangements were introduced for integrated circuit (IC) education and research; some MPC/MPW services/gateways are aimed at non-commercial use only. Currently MPC/MPW services are used effectively for system-on-a-chip integration. Selecting the right service platform at the prototyping phase enables production to be scaled up gradually via MPW services, taking into account the rules of the selected service.
MPC/MPW arrangements have also been applied to microelectromechanical systems (MEMS), integrated photonics like silicon photonics fabrication and microfluidics.
A refinement of MPW is the multi-layer mask (MLM) arrangement, where a limited number of masks (e.g. 4) are changed during manufacturing at the exposure phase. The rest of the masks are the same from chip to chip across the whole wafer. The MLM approach is well suited for several specifi
|
https://en.wikipedia.org/wiki/Jim%20Williams%20%28analog%20designer%29
|
James M. Williams (April 14, 1948 – June 12, 2011) was an analog circuit designer and technical author who worked for the Massachusetts Institute of Technology (1968–1979), Philbrick, National Semiconductor (1979–1982) and Linear Technology Corporation (LTC) (1982–2011). He wrote over 350 publications relating to analog circuit design, including five books, 21 application notes for National Semiconductor, 62 application notes for Linear Technology, and over 125 articles for EDN Magazine.
Williams suffered a stroke on June 10 and died on June 12, 2011.
Bibliography (partial)
For a complete bibliography, see.
See also
Paul Brokaw
Barrie Gilbert
Howard Johnson (electrical engineer)
Bob Pease — analog electronics engineer, technical author, and colleague. Pease died in an automobile accident after leaving Williams' memorial.
Bob Widlar — pioneering analog integrated circuit designer, technical author, early consultant to Linear Technology Corporation
Building 20 — legendary MIT building where Jim Williams had a design lab early in his career
|
https://en.wikipedia.org/wiki/Signal%20compression
|
Signal compression is the use of various techniques to increase the quality or quantity of signal parameters transmitted through a given telecommunications channel.
Types of signal compression include:
Bandwidth compression
Data compression
Dynamic range compression
Gain compression
Image compression
Lossy compression
One-way compression function
|
https://en.wikipedia.org/wiki/Food%20choice
|
Research into food choice investigates how people select the food they eat. An interdisciplinary topic, food choice comprises psychological and sociological aspects (including food politics and phenomena such as vegetarianism or religious dietary laws), economic issues (for instance, how food prices or marketing campaigns influence choice) and sensory aspects (such as the study of the organoleptic qualities of food).
Factors that guide food choice include taste preference, sensory attributes, cost, availability, convenience, cognitive restraint, and cultural familiarity. In addition, environmental cues and increased portion sizes play a role in the choice and amount of foods consumed.
Food choice is the subject of research in nutrition, food science, food psychology, anthropology, sociology, and other branches of the natural and social sciences. It is of practical interest to the food industry and especially its marketing endeavors. Social scientists have developed different conceptual frameworks of food choice behavior. Theoretical models of behavior incorporate both individual and environmental factors affecting the formation or modification of behaviors. Social cognitive theory examines the interaction of environmental, personal, and behavioral factors.
Taste preference
Researchers have found that consumers cite taste as the primary determinant of food choice. Genetic differences in the ability to perceive bitter taste are believed to play a role in the willingness to eat bitter-tasting vegetables and in the preferences for sweet taste and fat content of foods. Approximately 25 percent of the US population are supertasters and 50 percent are tasters. Epidemiological studies suggest that nontasters are more likely to eat a wider variety of foods and to have a higher body mass index (BMI), a measure of weight in kilograms divided by height in meters squared.
Environmental influences
Many environmental cues influence food choice and intake, although consumers m
|
https://en.wikipedia.org/wiki/List%20of%20conjectures%20by%20Paul%20Erd%C5%91s
|
The prolific mathematician Paul Erdős and his various collaborators made many famous mathematical conjectures, over a wide field of subjects, and in many cases Erdős offered monetary rewards for solving them.
Unsolved
The Erdős–Gyárfás conjecture on cycles with lengths equal to a power of two in graphs with minimum degree 3.
The Erdős–Hajnal conjecture that in a family of graphs defined by an excluded induced subgraph, every graph has either a large clique or a large independent set.
The Erdős–Mollin–Walsh conjecture on consecutive triples of powerful numbers.
The Erdős–Selfridge conjecture that a covering system with distinct moduli contains at least one even modulus.
The Erdős–Straus conjecture on the Diophantine equation 4/n = 1/x + 1/y + 1/z.
The Erdős conjecture on arithmetic progressions in sequences with divergent sums of reciprocals.
The Erdős–Szekeres conjecture on the number of points needed to ensure that a point set contains a large convex polygon.
The Erdős–Turán conjecture on additive bases of natural numbers.
A conjecture on quickly growing integer sequences with rational reciprocal series.
A conjecture with Norman Oler on circle packing in an equilateral triangle with a number of circles one less than a triangular number.
The minimum overlap problem to estimate the limit of M(n).
A conjecture that the ternary expansion of 2^n contains at least one digit 2 for every n > 8 (a finite spot-check of this statement is sketched after this list).
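A finite spot-check of the last conjecture above can be written in a few lines of Python; it is an illustration only and of course proves nothing about the general case:

# Spot-check: the base-3 expansion of 2**n contains the digit 2 for every n > 8
# (checked here only up to n = 1000; the conjecture itself remains unproven).
def base3_digits(m):
    digits = []
    while m:
        m, r = divmod(m, 3)
        digits.append(r)
    return digits

assert all(2 in base3_digits(2 ** n) for n in range(9, 1001))
print("2**n has a ternary digit 2 for all 9 <= n <= 1000")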
Solved
The Erdős–Faber–Lovász conjecture on coloring unions of cliques, proved (for all large n) by Dong Yeap Kang, Tom Kelly, Daniela Kühn, Abhishek Methuku, and Deryk Osthus.
The Erdős sumset conjecture on sets, proven by Joel Moreira, Florian Karl Richter and Donald Robertson in 2018; the proof appeared in Annals of Mathematics in March 2019.
The Burr–Erdős conjecture on Ramsey numbers of graphs, proved by Choongbum Lee in 2015.
A conjecture on equitable colorings proven in 1970 by András Hajnal and Endre Szemerédi and now known as the Hajnal–Szemerédi theorem.
A co
|
https://en.wikipedia.org/wiki/Mechanism%20%28biology%29
|
In the science of biology, a mechanism is a system of causally interacting parts and processes that produce one or more effects. Scientists explain phenomena by describing mechanisms that could produce the phenomena. For example, natural selection is a mechanism of biological evolution; other mechanisms of evolution include genetic drift, mutation, and gene flow. In ecology, mechanisms such as predation and host-parasite interactions produce change in ecological systems. In practice, no description of a mechanism is ever complete because not all details of the parts and processes of a mechanism are fully known. For example, natural selection is a mechanism of evolution that includes countless, inter-individual interactions with other individuals, components, and processes of the environment in which natural selection operates.
Characterizations/definitions
Many characterizations/definitions of mechanisms in the philosophy of science/biology have been provided in the past decades. For example, one influential characterization of neuro- and molecular biological mechanisms by Peter K. Machamer, Lindley Darden and Carl Craver is as follows: mechanisms are entities and activities organized such that they are productive of regular changes from start to termination conditions. Other characterizations have been proposed by Stuart Glennan (1996, 2002), who articulates an interactionist account of mechanisms, and William Bechtel (1993, 2006), who emphasizes parts and operations.
The characterization by Machamer et al. is as follows: mechanisms are entities and activities organized such that they are productive of changes from start conditions to termination conditions. There are three distinguishable aspects of this characterization:
Ontic aspect
The ontic constituency of biological mechanisms includes entities and activities. Thus, this conception postulates a dualistic ontology of mechanisms, where entities are substantial components, and activities are reified compon
|
https://en.wikipedia.org/wiki/Shell%20theorem
|
In classical mechanics, the shell theorem gives gravitational simplifications that can be applied to objects inside or outside a spherically symmetrical body. This theorem has particular application to astronomy.
Isaac Newton proved the shell theorem and stated that:
A spherically symmetric body affects external objects gravitationally as though all of its mass were concentrated at a point at its center.
If the body is a spherically symmetric shell (i.e., a hollow ball), no net gravitational force is exerted by the shell on any object inside, regardless of the object's location within the shell.
A corollary is that inside a solid sphere of constant density, the gravitational force within the object varies linearly with distance from the center, becoming zero by symmetry at the center of mass. This can be seen as follows: take a point within such a sphere, at a distance r from the center of the sphere. Then you can ignore all of the shells with greater radius, according to the shell theorem (statement 2). But the point can be considered to be external to the remaining sphere of radius r, and according to statement (1) all of the mass of this sphere can be considered to be concentrated at its centre. The remaining mass is proportional to r³ (because it is based on volume). The gravitational force exerted on a body at radius r will be proportional to 1/r² (the inverse square law), so the overall gravitational effect is proportional to r³/r², so it is linear in r.
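Written out explicitly for a uniform density ρ (standard notation, not reproduced from the article), the argument above is:

M(r) = \tfrac{4}{3}\pi \rho\, r^{3},
\qquad
F(r) = \frac{G\, m\, M(r)}{r^{2}} = \tfrac{4}{3}\pi G \rho\, m\, r \;\propto\; r .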
These results were important to Newton's analysis of planetary motion; they are not immediately obvious, but they can be proven with calculus. (Gauss's law for gravity offers an alternative way to state the theorem.)
In addition to gravity, the shell theorem can also be used to describe the electric field generated by a static spherically symmetric charge density, or similarly for any other phenomenon that follows an inverse square law. The derivations below focus on gravity, but the results can easily be generalized to the electrostatic forc
|
https://en.wikipedia.org/wiki/Mathematical%20methods%20in%20electronics
|
Mathematical methods are integral to the study of electronics.
Mathematics in electronics
Electronics engineering careers usually include courses in calculus (single and multivariable), complex analysis, differential equations (both ordinary and partial), linear algebra and probability. Fourier analysis and Z-transforms are also subjects which are usually included in electrical engineering programs. The Laplace transform can simplify computing RLC circuit behaviour.
Basic applications
A number of electrical laws apply to all electrical networks. These include:
Faraday's law of induction: Any change in the magnetic environment of a coil of wire will cause a voltage (emf) to be "induced" in the coil.
Gauss's Law: The total of the electric flux out of a closed surface is equal to the charge enclosed divided by the permittivity.
Kirchhoff's current law: the sum of all currents entering a node is equal to the sum of all currents leaving the node; equivalently, the signed sum of the currents at a junction is zero.
Kirchhoff's voltage law: the directed sum of the electrical potential differences around a circuit must be zero.
Ohm's law: the voltage across a resistor is the product of its resistance and the current flowing through it, at constant temperature.
Norton's theorem: any two-terminal collection of voltage sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor.
Thévenin's theorem: any two-terminal combination of voltage sources and resistors is electrically equivalent to a single voltage source in series with a single resistor.
Millman's theorem: the voltage on the ends of branches in parallel is equal to the sum of the currents flowing in every branch divided by the total equivalent conductance.
See also Analysis of resistive circuits.
Circuit analysis is the study of methods to solve linear systems for an unknown variable.
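As a small example of circuit analysis as a linear system, the Python sketch below applies Kirchhoff's current law and Ohm's law to a chain of three resistors driven by a voltage source; the component values are invented for the illustration:

import numpy as np

# Nodal analysis of a resistor chain: 10 V source - R1 - node 1 - R2 - node 2 - R3 - ground.
# Kirchhoff's current law at each node plus Ohm's law gives a linear system G @ v = i.
Vs = 10.0
R1 = R2 = R3 = 1e3   # ohms (assumed values)

G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,        1/R2 + 1/R3]])
i = np.array([Vs / R1, 0.0])     # source current injected at node 1

v = np.linalg.solve(G, i)
print(v)   # approximately [6.667, 3.333] volts, matching the series-divider result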
Circuit analysis
Components
There are many electronic components currently used and they all have thei
|
https://en.wikipedia.org/wiki/Equalization%20%28communications%29
|
In telecommunication, equalization is the reversal of distortion incurred by a signal transmitted through a channel. Equalizers are used to render the frequency response—for instance of a telephone line—flat from end-to-end. When a channel has been equalized the frequency domain attributes of the signal at the input are faithfully reproduced at the output. Telephones, DSL lines and television cables use equalizers to prepare data signals for transmission.
Equalizers are critical to the successful operation of electronic systems such as analog broadcast television. In this application the actual waveform of the transmitted signal must be preserved, not just its frequency content. Equalizing filters must cancel out any group delay and phase delay between different frequency components.
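The flattening of the end-to-end response can be illustrated numerically; in the Python sketch below, a toy channel response (invented purely for the example) is inverted by a zero-forcing equalizer so that the combined magnitude and phase response is flat:

import numpy as np

# Frequency-domain view of equalization: if the channel response H(f) is known,
# a zero-forcing equalizer applies 1/H(f) so that channel x equalizer is flat.
f = np.linspace(0.3, 3.4, 8)                 # kHz, telephone band
H_channel = 1.0 / (1.0 + 0.5j * f)           # toy low-pass channel response
H_equalizer = 1.0 / H_channel                # ideal (zero-forcing) equalizer

end_to_end = H_channel * H_equalizer
print(np.allclose(np.abs(end_to_end), 1.0))  # True: flat magnitude response
print(np.allclose(np.angle(end_to_end), 0))  # True: no residual phase distortion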
Analog telecommunications
Audio lines
Early telephone systems used equalization to correct for the reduced level of high frequencies in long cables, typically using Zobel networks. These kinds of equalizers can also be used to produce a circuit with a wider bandwidth than the standard telephone band of 300 Hz to 3.4 kHz. This was particularly useful for broadcasters who needed "music" quality, not "telephone" quality on landlines carrying program material. It is necessary to remove or cancel any loading coils in the line before equalization can be successful. Equalization was also applied to correct the response of the transducers, for example, a particular microphone might be more sensitive to low frequency sounds than to high frequency sounds, so an equalizer would be used to increase the volume of the higher frequencies (boost), and reduce the volume of the low frequency sounds (cut).
Television lines
A similar approach to audio was taken with television landlines with two important additional complications. The first of these is that the television signal is a wide bandwidth covering many more octaves than an audio signal. A television equalizer consequently typically req
|
https://en.wikipedia.org/wiki/Memory%20management
|
Memory management is a form of resource management applied to computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free it for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time.
Several methods have been devised that increase the effectiveness of memory management. Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the size of the virtual address space beyond the available amount of RAM using paging or swapping to secondary storage. The quality of the virtual memory manager can have an extensive effect on overall system performance.
In some operating systems, e.g. OS/360 and successors, memory is managed by the operating system. In other operating systems, e.g. Unix-like operating systems, memory is managed at the application level.
Memory management within an address space is generally categorized as either manual memory management or automatic memory management.
Manual memory management
The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap or free store. At any given time, some parts of the heap are in use, while some are "free" (unused) and thus available for future allocations.
In the C language, the function which allocates memory from the heap is called malloc and the function which takes previously allocated memory and marks it as "free" (to be used by future allocations) is called free.
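As a rough illustration of the idea only (not of how any real C runtime implements malloc and free), here is a toy first-fit allocator over a simulated heap; all names and sizes are invented:

```python
# Toy first-fit allocator: satisfy requests by scanning a list of free
# (offset, size) blocks for the first one that is large enough.
free_blocks = [(0, 1024)]   # the whole simulated heap starts out free

def allocate(size):
    """Return the offset of a block of at least `size` bytes, or None."""
    for i, (off, blk) in enumerate(free_blocks):
        if blk >= size:
            # Shrink (or remove) the free block the allocation was carved from.
            if blk == size:
                free_blocks.pop(i)
            else:
                free_blocks[i] = (off + size, blk - size)
            return off
    return None  # no sufficiently large free block: the request fails

def free(offset, size):
    """Return a previously allocated block to the pool (no coalescing here)."""
    free_blocks.append((offset, size))
    free_blocks.sort()       # keep the free list in address order

a = allocate(100)
b = allocate(200)
free(a, 100)
c = allocate(50)             # first fit reuses the gap left by `a`
print(a, b, c, free_blocks)
```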
Several issues complicate the implementation, such as external fragmentation, which arises when there are many small gaps between allocated memory blocks, which invalidates their use for an allocation request. The allocator's metadat
|
https://en.wikipedia.org/wiki/Synchronous%20detector
|
In electronics, a synchronous detector is a device that recovers information from a modulated signal by mixing the signal with a replica of the unmodulated carrier. This can be locally generated at the receiver using a phase-locked loop or other techniques. Synchronous detection preserves any phase information originally present in the modulating signal. With the exception of SECAM receivers, synchronous detection is a necessary component of any analog color television receiver, where it allows recovery of the phase information that conveys hue. Synchronous detectors are also found in some shortwave radio receivers used for audio signals, where they provide better performance on signals that may be affected by fading.
See also
Lock-in amplifier
|
https://en.wikipedia.org/wiki/Limiting%20case%20%28mathematics%29
|
In mathematics, a limiting case of a mathematical object is a special case that arises when one or more components of the object take on their most extreme possible values. For example:
In statistics, the limiting case of the binomial distribution is the Poisson distribution: as the number of trials tends to infinity while the expected number of successes is held fixed, the binomial distribution converges to the Poisson distribution.
A circle is a limiting case of various other figures, including the Cartesian oval, the ellipse, the superellipse, and the Cassini oval. Each type of figure is a circle for certain values of the defining parameters, and the generic figure appears more like a circle as the limiting values are approached.
Archimedes calculated an approximate value of π by treating the circle as the limiting case of a regular polygon with 3 × 2n sides, as n gets large.
In electricity and magnetism, the long wavelength limit is the limiting case when the wavelength is much larger than the system size.
In economics, two limiting cases of a demand curve or supply curve are those in which the elasticity is zero (the totally inelastic case) or infinity (the infinitely elastic case).
In finance, continuous compounding is the limiting case of compound interest in which the compounding period becomes infinitesimally small, achieved by taking the limit as the number of compounding periods per year goes to infinity.
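Written out explicitly, this limiting case is the familiar limit defining the exponential (principal P, annual rate r, time t, and n compounding periods per year):

```latex
% Continuous compounding as a limiting case of periodic compounding.
A = \lim_{n \to \infty} P\left(1 + \frac{r}{n}\right)^{nt} = P\,e^{rt}
```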
A limiting case is sometimes a degenerate case in which some qualitative properties differ from the corresponding properties of the generic case. For example:
A point is a degenerate circle, namely one with radius 0.
A parabola can degenerate into two distinct or coinciding parallel lines.
An ellipse can degenerate into a single point or a line segment.
A hyperbola can degenerate into two intersecting lines.
See also
Degeneracy (mathematics)
Limit (mathematics)
|
https://en.wikipedia.org/wiki/Bendix%20Electrojector
|
The Bendix Electrojector is an electronically controlled manifold injection (EFI) system developed and made by Bendix Corporation. In 1957, American Motors (AMC) offered the Electrojector as an option in some of their cars; Chrysler followed in 1958. However, it proved to be an unreliable system that was soon replaced by conventional carburetors. The Electrojector patents were then sold to German car component supplier Bosch, who developed the Electrojector into a functioning system, the Bosch D-Jetronic, introduced in 1967.
Description
The Electrojector is an electronically controlled multi-point injection system that has an analogue engine control unit, the so-called "modulator" that uses the intake manifold vacuum and the engine speed for metering the right amount of fuel. The fuel is injected intermittently, and with a constant pressure of . The injectors are spring-loaded active injectors, actuated by a modulator-controlled electromagnet. Pulse-width modulation is used to change the amount of injected fuel: since the injection pressure is constant, the fuel amount can only be changed by increasing or decreasing the injection pulse duration. The modulator receives the injection pulse from an injection pulse generator that rotates in sync with the ignition distributor. The modulator converts the injection pulse into a correct injection signal for each fuel injector primarily by using the intake manifold and crankshaft speed sensor signals. It uses analogue transistor technology (i. e. no microprocessor) to do so. The system also supports setting the correct idle speed, mixture enrichment, and coolant temperature using additional resistors in the modulator.
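A rough sketch of this pulse-width metering principle (the flow rate below is an invented placeholder, not a Bendix specification): with the injection pressure held constant, the injector's flow rate is roughly fixed, so the delivered fuel mass is proportional to the pulse duration.

```python
# Illustrative only: fuel delivered per injection under constant pressure.
FLOW_RATE_G_PER_MS = 0.004   # assumed injector flow at the fixed pressure

def fuel_per_pulse(pulse_ms):
    """Grams of fuel delivered for a given injection pulse duration."""
    return FLOW_RATE_G_PER_MS * pulse_ms

for pulse_ms in (2.0, 4.0, 8.0):   # a longer pulse delivers more fuel
    print(pulse_ms, "ms ->", fuel_per_pulse(pulse_ms), "g")
```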
History
The Electrojector was first offered by American Motors Corporation (AMC) in 1957. The Rambler Rebel was used to promote AMC's new engine. The Electrojector-injected engine was an option and rated at . It produced peak torque 500 rpm lower than the equivalent carburetor engine. The cost of the EFI
|
https://en.wikipedia.org/wiki/Security%20log
|
A security log is used to track security-related information on a computer system. Examples include:
Windows Security Log
Internet Connection Firewall security log
According to Stefan Axelsson, "Most UNIX installations do not run any form of security logging software, mainly because the security logging facilities are expensive in terms of disk storage, processing time, and the cost associated with analyzing the audit trail, either manually or by special software."
See also
Audit trail
Server log
Log management and intelligence
Web log analysis software
Web counter
Data logging
Common Log Format
Syslog
|
https://en.wikipedia.org/wiki/Mathemalchemy
|
Mathemalchemy is a traveling art installation dedicated to a celebration of the intersection of art and mathematics. It is a collaborative work led by Duke University mathematician Ingrid Daubechies and fiber artist Dominique Ehrmann. The cross-disciplinary team of 24 people, who collectively built the installation during the calendar years 2020 and 2021, includes artists, mathematicians, and craftspeople who employed a wide variety of materials to illustrate, amuse, and educate the public on the wonders, mystery, and beauty of mathematics. Including the core team of 24, about 70 people contributed in some way to the realization of Mathemalchemy.
Description
The art installation occupies a footprint approximately , which extends up to in height (in addition, small custom-fabricated tables are arranged around the periphery to protect the more fragile elements). A map shows the 14 or so different zones or regions within the exhibit, which is filled with hundreds of detailed mathematical artifacts, some smaller than ; the entire exhibit comprises more than 1,000 parts which must be packed for shipment. Versions of some of the complex mathematical objects can be purchased through an associated "Mathemalchemy Boutique" website.
The art installation contains puns (such as "Pi" in a bakery) and Easter eggs, such as a miniature model of the Antikythera mechanism hidden on the bottom of "Knotilus Bay". Mathematically sophisticated visitors may enjoy puzzling out and decoding the many mathematical allusions symbolized in the exhibit, while viewers of all levels are invited to enjoy the self-guided tours, detailed explanations, and videos available on the accompanying official website.
A downloadable comic book was created to explore some of the themes of the exhibition, using an independent narrative set in the world of Mathemalchemy.
Themes
The installation features or illustrates mathematical concepts at many different levels. All of the participants regard "recre
|
https://en.wikipedia.org/wiki/Action%20at%20a%20distance
|
In physics, action at a distance is the concept that an object's motion can be affected by another object without the two being in physical contact (as in mechanical contact). That is, it is the non-local interaction of objects that are separated in space. Coulomb's law and Newton's law of universal gravitation are based on action at a distance.
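For illustration, Newton's law of universal gravitation computes the force between two bodies directly from their masses and instantaneous separation, with no intervening medium appearing in the formula; the sketch below uses rough textbook values for the Earth and Moon:

```python
# Action-at-a-distance form of Newtonian gravity: the force depends only on
# the masses and the separation of the two bodies.
G = 6.674e-11            # gravitational constant, N m^2 / kg^2

def gravitational_force(m1, m2, r):
    """Magnitude of the Newtonian attraction between two point masses."""
    return G * m1 * m2 / r**2

# Earth and Moon, using approximate textbook values.
print(gravitational_force(5.97e24, 7.35e22, 3.84e8))  # roughly 2e20 N
```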
Historically, action at a distance was the earliest scientific model for gravity and electricity and it continues to be useful in many practical cases. In the 19th and 20th centuries, field models arose to explain these phenomena with more precision. The discovery of electrons and of special relativity led to new action-at-a-distance models providing alternatives to field theories.
Categories of action
In the study of mechanics, action at a distance is one of three fundamental actions on matter that cause motion. The other two are direct impact (elastic or inelastic collisions) and actions in a continuous medium as in fluid mechanics or solid mechanics.
Historically, physical explanations for particular phenomena have moved between these three categories over time as new models were developed.
Action at a distance and actions in a continuous medium may be easily distinguished when the medium dynamics are visible, like waves in water or in an elastic solid. In the case of electricity or gravity, there is no medium required. In the nineteenth century, criteria like the effect of actions on intervening matter, the observation of a time delay, the apparent storage of energy, or even the possibility of a plausible mechanical model for action transmission were all accepted as evidence against action at a distance. Aether theories were alternative proposals to replace apparent action-at-a-distance in gravity and electromagnetism, in terms of continuous action inside an (invisible) medium called "aether".
Roles
The concept of action at a distance acts in multiple roles in physics and it can co-exist with other mode
|
https://en.wikipedia.org/wiki/4000-series%20integrated%20circuits
|
The 4000 series is a CMOS logic family of integrated circuits (ICs) first introduced in 1968 by RCA. It was slowly migrated into the 4000B buffered series after about 1975. It had a much wider supply voltage range than any contemporary logic family (3V to 18V recommended range for "B" series). Almost all IC manufacturers active during this initial era fabricated models for this series. Its naming convention is still in use today.
History
The 4000 series was introduced as the CD4000 COS/MOS series in 1968 by RCA as a lower power and more versatile alternative to the 7400 series of transistor-transistor logic (TTL) chips. The logic functions were implemented with the newly introduced Complementary Metal–Oxide–Semiconductor (CMOS) technology. While initially marketed with "COS/MOS" labeling by RCA (which stood for Complementary Symmetry Metal-Oxide Semiconductor), the shorter CMOS terminology emerged as the industry preference to refer to the technology. The first chips in the series were designed by a group led by Albert Medwin.
Wide adoption was initially hindered by the comparatively lower speeds of the designs compared to TTL based designs. Speed limitations were eventually overcome with newer fabrication methods (such as self aligned gates of polysilicon instead of metal). These CMOS variants performed on par with contemporary TTL. The series was extended in the late 1970s and 1980s with new models that were given 45xx and 45xxx designations, but are usually still regarded by engineers as part of the 4000 series. In the 1990s, some manufacturers (e.g. Texas Instruments) ported the 4000 series to newer HCMOS based designs to provide greater speeds.
Design considerations
The 4000 series facilitates simpler circuit design through relatively low power consumption, a wide range of supply voltages, and vastly increased load-driving capability (fanout) compared to TTL. This makes the series ideal for use in prototyping LSI designs. While TTL ICs are similarly modular
|
https://en.wikipedia.org/wiki/Design%20rule%20checking
|
In electronic design automation, a design rule is a geometric constraint imposed on circuit board, semiconductor device, and integrated circuit (IC) designers to ensure their designs function properly, reliably, and can be produced with acceptable yield. Design rules for production are developed by process engineers based on the capability of their processes to realize design intent. Electronic design automation is used extensively to ensure that designers do not violate design rules; a process called design rule checking (DRC). DRC is a major step during physical verification signoff on the design, which also involves LVS (layout versus schematic) checks, XOR checks, ERC (electrical rule check), and antenna checks. The importance of design rules and DRC is greatest for ICs, which have micro- or nano-scale geometries; for advanced processes, some fabs also insist upon the use of more restricted rules to improve yield.
Design rules
Design rules are a series of parameters provided by semiconductor manufacturers that enable the designer to verify the correctness of a mask set. Design rules are specific to a particular semiconductor manufacturing process. A design rule set specifies certain geometric and connectivity restrictions to ensure sufficient margins to account for variability in semiconductor manufacturing processes, so as to ensure that most of the parts work correctly.
The most basic design rules are shown in the diagram on the right. The first are single layer rules. A width rule specifies the minimum width of any shape in the design. A spacing rule specifies the minimum distance between two adjacent objects. These rules will exist for each layer of semiconductor manufacturing process, with the lowest layers having the smallest rules (typically 100 nm as of 2007) and the highest metal layers having larger rules (perhaps 400 nm as of 2007).
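A toy illustration of single-layer checks in Python (the rule values and shapes below are invented, not taken from any real process) might look like this:

```python
# Minimal width and spacing checks on axis-aligned rectangles
# (x1, y1, x2, y2), in nanometres.
MIN_WIDTH = 100    # nm, minimum feature width (illustrative value)
MIN_SPACING = 100  # nm, minimum edge-to-edge distance (illustrative value)

def width_ok(rect):
    x1, y1, x2, y2 = rect
    return min(x2 - x1, y2 - y1) >= MIN_WIDTH

def spacing(a, b):
    """Edge-to-edge distance between two rectangles (0 if they overlap/touch)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    dx = max(bx1 - ax2, ax1 - bx2, 0)
    dy = max(by1 - ay2, ay1 - by2, 0)
    return (dx * dx + dy * dy) ** 0.5

shapes = [(0, 0, 150, 400), (230, 0, 380, 400), (500, 0, 560, 400)]
for s in shapes:
    if not width_ok(s):
        print("width violation:", s)
for i, a in enumerate(shapes):
    for b in shapes[i + 1:]:
        if spacing(a, b) < MIN_SPACING:
            print("spacing violation:", a, b)
```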
A two layer rule specifies a relationship that must exist between two layers. For example, an enclosure rule might s
|
https://en.wikipedia.org/wiki/Temporal%20resolution
|
Temporal resolution (TR) refers to the discrete resolution of a measurement with respect to time.
Physics
Often there is a trade-off between the temporal resolution of a measurement and its spatial resolution, due to Heisenberg's uncertainty principle. In some contexts, such as particle physics, this trade-off can be attributed to the finite speed of light and the fact that it takes a certain period of time for the photons carrying information to reach the observer. In this time, the system might have undergone changes itself. Thus, the longer the light has to travel, the lower the temporal resolution.
Technology
Computing
In another context, there is often a tradeoff between temporal resolution and computer storage. A transducer may be able to record data every millisecond, but available storage may not allow this, and in the case of 4D PET imaging the resolution may be limited to several minutes.
Electronic displays
In some applications, temporal resolution may instead be equated to the sampling period, or its inverse, the refresh rate, or update frequency in Hertz, of a TV, for example.
The temporal resolution is distinct from temporal uncertainty; conflating the two would be analogous to conflating image resolution with optical resolution. One is discrete, the other continuous.
The temporal resolution is, loosely speaking, the 'time' dual of the 'space' resolution of an image. In a similar way, the sample rate is analogous to the pixel pitch on a display screen, whereas the optical resolution of a display screen is analogous to temporal uncertainty.
Note that both of these forms of resolution, in image space and in time, are orthogonal to measurement resolution, even though space and time are also orthogonal to each other. Both an image and an oscilloscope capture can have a signal-to-noise ratio, since both also have measurement resolution.
Oscilloscopy
An oscilloscope is the temporal equivalent of a microscope, and it is limited by temporal uncertainty the same way a m
|
https://en.wikipedia.org/wiki/Crypsis
|
In ecology, crypsis is the ability of an animal or a plant to avoid observation or detection by other animals. It may be a predation strategy or an antipredator adaptation. Methods include camouflage, nocturnality, subterranean lifestyle and mimicry. Crypsis can involve visual, olfactory (with pheromones) or auditory concealment. When it is visual, the term cryptic coloration, effectively a synonym for animal camouflage, is sometimes used, but many different methods of camouflage are employed by animals or plants.
Overview
There is a strong evolutionary pressure for animals to blend into their environment or conceal their shape, for prey animals to avoid predators and for predators to be able to avoid detection by prey. Exceptions include large herbivores without natural enemies, brilliantly colored birds that rely on flight to escape predators, and venomous or otherwise powerfully armed animals with warning coloration. Cryptic animals include the tawny frogmouth (feather patterning resembles bark), the tuatara (hides in burrows all day; nocturnal), some jellyfish (transparent), the leafy sea dragon, and the flounder (covers itself in sediment).
Methods
Methods of crypsis include (visual) camouflage, nocturnality, and subterranean lifestyle. Camouflage can be achieved by a wide variety of methods, from disruptive coloration to transparency and some forms of mimicry, even in habitats like the open sea where there is no background.
As a strategy, crypsis is used by predators against prey and by prey against predators.
Crypsis also applies to eggs and pheromone production. Crypsis can in principle involve visual, olfactory, or auditory camouflage.
Visual
Many animals have evolved so that they visually resemble their surroundings by using any of the many methods of natural camouflage that may match the color and texture of the surroundings (cryptic coloration) and/or break up the visual outline of the animal itself (disruptive coloration). Such animals, like the
|
https://en.wikipedia.org/wiki/Food%20pairing
|
Food pairing (or flavor pairing or food combination) is a method of identifying which foods go well together from a flavor standpoint, often based on individual tastes, popularity, availability of ingredients, and traditional cultural practices.
From a food science perspective, foods may be said to combine well with one another when they share key flavor components. One such process was trademarked as "Foodpairing" by the company of the same name.
Examples
The two pairings that are most commonly used globally (possibly because they are hyperpalatable), cited as a response in "your favorite food" or "food that you can eat every day" surveys and seen in recipe videos, websites and books, are:
Meat, bread, cheese, tomatoes, onions, and at least one type of green vegetable (including in burgers, sandwiches, shawarmas, tacos and pizzas)
Chicken and rice (or more generally: meat and rice or pasta, together with some combination of tomatoes, onions, and at least one type of green vegetable)
Other commonly encountered food pairings include:
Bacon and cabbage
Duck à l'orange
Ham and eggs
Hawaiian pizza
Liver and onions
Peanut butter and jelly
Pork chops and applesauce
Food science
Experimenting with salty ingredients and chocolate around the year 2000, Heston Blumenthal, the chef of The Fat Duck, concluded that caviar and white chocolate were a perfect match. To find out why, he contacted a flavor scientist at Firmenich, the flavor manufacturer. By comparing the flavor analysis of both foods, they found that caviar and white chocolate had major flavor components in common. At that time, they formed the hypothesis that different foods would combine well together when they shared major flavor components, and the trademarked concept of "Foodpairing" was created.
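A minimal sketch of the shared-flavor-compound idea behind this hypothesis, scoring a pair of ingredients by how many key aroma compounds they have in common; the compound lists here are illustrative placeholders, not measured data:

```python
# Placeholder flavor-compound sets for a few ingredients (invented data).
flavor_compounds = {
    "caviar":          {"trimethylamine", "hexanal", "2-acetylpyrroline"},
    "white chocolate": {"vanillin", "hexanal", "2-acetylpyrroline"},
    "strawberry":      {"furaneol", "hexanal", "ethyl butanoate"},
}

def shared_compounds(a, b):
    """Compounds two ingredients have in common (the pairing 'score')."""
    return flavor_compounds[a] & flavor_compounds[b]

for pair in [("caviar", "white chocolate"), ("caviar", "strawberry")]:
    common = shared_compounds(*pair)
    print(pair, "->", len(common), "shared:", sorted(common))
```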
This Foodpairing method is asserted to aid recipe design, and it has provided new ideas for food combinations which are asserted to be theoretically sound on the basis of their flavor. It provides possib
|
https://en.wikipedia.org/wiki/Natural%20competence
|
In microbiology, genetics, cell biology, and molecular biology, competence is the ability of a cell to alter its genetics by taking up extracellular ("naked") DNA from its environment in the process called transformation. Competence may be differentiated between natural competence, a genetically specified ability of bacteria which is thought to occur under natural conditions as well as in the laboratory, and induced or artificial competence, which arises when cells in laboratory cultures are treated to make them transiently permeable to DNA. Competence allows for rapid adaptation and DNA repair of the cell. This article primarily deals with natural competence in bacteria, although information about artificial competence is also provided.
History
Natural competence was discovered by Frederick Griffith in 1928, when he showed that a preparation of killed cells of a pathogenic bacterium contained something that could transform related non-pathogenic cells into the pathogenic type. In 1944 Oswald Avery, Colin MacLeod, and Maclyn McCarty demonstrated that this 'transforming factor' was pure DNA. This was the first compelling evidence that DNA carries the genetic information of the cell.
Since then, natural competence has been studied in a number of different bacteria, particularly Bacillus subtilis, Streptococcus pneumoniae (Griffith's "pneumococcus"), Neisseria gonorrhoeae, Haemophilus influenzae and members of the Acinetobacter genus. Areas of active research include the mechanisms of DNA transport, the regulation of competence in different bacteria, and the evolutionary function of competence.
Mechanisms of DNA uptake
In the laboratory, DNA is provided by the researcher, often as a genetically engineered fragment or plasmid. During uptake, DNA is transported across the cell membrane(s), and the cell wall if one is present. Once the DNA is inside the cell it may be degraded to nucleotides, which are reused for DNA replication and other metabolic functions.
|
https://en.wikipedia.org/wiki/Constant%20fraction%20discriminator
|
A constant fraction discriminator (CFD) is an electronic signal processing device, designed to mimic the mathematical operation of finding a maximum of a pulse by finding the zero of its slope. Some signals do not have a sharp maximum, but they do have short rise times.
Typical input signals for CFDs are pulses from plastic scintillation counters, such as those used for lifetime measurement in positron annihilation experiments. The scintillator pulses have identical rise times that are much longer than the desired temporal resolution. This forbids simple threshold triggering, which causes a dependence of the trigger time on the signal's peak height, an effect called time walk (see diagram). Identical rise times and peak shapes permit triggering not on a fixed threshold but on a constant fraction of the total peak height, yielding trigger times independent from peak heights.
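A small numerical sketch of constant-fraction timing (one common arrangement: subtract a fraction of the undelayed pulse from a delayed copy and take the zero crossing; the pulse shape, fraction, and delay below are invented) shows that the trigger time does not move when the pulse amplitude changes:

```python
import numpy as np

# Synthetic unit-amplitude pulse with a fixed rise time.
t = np.arange(0.0, 50.0, 0.01)               # time axis in ns
pulse = (t / 5.0) * np.exp(1 - t / 5.0)       # peaks at t = 5 ns
frac, delay_samples = 0.3, 300                # fraction and 3 ns delay

def cfd_crossing_time(signal):
    """Zero-crossing time of (delayed signal - fraction * signal)."""
    delayed = np.concatenate([np.zeros(delay_samples), signal[:-delay_samples]])
    shaped = delayed - frac * signal          # bipolar CFD signal
    # first sample where the shaped signal crosses zero from below
    idx = np.where((shaped[:-1] < 0) & (shaped[1:] >= 0))[0][0]
    return t[idx]

# The printed crossing time is identical for all amplitudes,
# whereas a fixed threshold would trigger later on smaller pulses.
for amplitude in (1.0, 0.2, 5.0):
    print(amplitude, cfd_crossing_time(amplitude * pulse))
```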
From another point of view
A time-to-digital converter assigns timestamps. The time-to-digital converter needs fast rising edges with normed height. The plastic scintillation counter delivers fast rising edge with varying heights. Theoretically, the signal could be split into two parts. One part would be delayed and the other low pass filtered, inverted and then used in a variable-gain amplifier to amplify the original signal to the desired height. Practically, it is difficult to achieve a high dynamic range for the variable-gain amplifier, and analog computers have problems with the inverse value.
Principle of operation
The incoming signal is split into three components.
One component is delayed by a fixed time and may be multiplied by a small factor to put emphasis on the leading edge of the pulse; it is connected to the noninverting input of a comparator. One component is connected to the inverting input of this comparator. One component is connected to the noninverting input of another comparator. A threshold value is connected to the inverting input of the other comparator. The output of both compara
|
https://en.wikipedia.org/wiki/Staling
|
Staling, or "going stale", is a chemical and physical process in bread and similar foods that reduces their palatability. Stale bread is dry and hard, making it suitable for different culinary uses than fresh bread. Countermeasures and destaling techniques may reduce staling.
Mechanism and effects
Staling is a chemical and physical process in bread and similar foods that reduces their palatability. Staling is not simply a drying-out process due to evaporation. One important mechanism is the migration of moisture from the starch granules into the interstitial spaces, degelatinizing the starch; stale bread's leathery, hard texture results from the starch amylose and amylopectin molecules realigning themselves causing recrystallisation.
Stale bread
Stale bread is dry and hard. Bread will stale even in a moist environment, and stales most rapidly at temperatures just above freezing. While bread that has been frozen when fresh may be thawed acceptably, bread stored in a refrigerator will have increased staling rates.
Culinary uses
Many classic dishes rely upon otherwise unpalatable stale bread. Examples include bread sauce, bread dumplings, and flummadiddle, an early American savoury pudding. There are also many types of bread soups such as wodzionka (in Silesian cuisine) and ribollita (in Italian cuisine). An often-sweet dish is bread pudding. Cubes of stale bread can be dipped in cheese fondue, or seasoned and baked in the oven to become croutons, suitable for scattering in salads or on top of soups. Slices of stale bread soaked in an egg and milk mixture and then fried turn into French toast (known in French as pain perdu - lost bread). In Spanish and Portuguese cuisines migas is a breakfast dish using stale bread, and in Tunisian cuisine leblebi is a soup of chickpeas and stale bread.
Stale bread or breadcrumbs made from it can be used to "stretch" meat in dishes such as haslet (a type of meatloaf in British cuisine, or meatloaf itself) and garbure (a stew
|
https://en.wikipedia.org/wiki/Nucleation
|
In thermodynamics, nucleation is the first step in the formation of either a new thermodynamic phase or structure via self-assembly or self-organization within a substance or mixture. Nucleation is typically defined to be the process that determines how long an observer has to wait before the new phase or self-organized structure appears. For example, if a volume of water is cooled (at atmospheric pressure) below 0°C, it will tend to freeze into ice, but volumes of water cooled only a few degrees below 0°C often stay completely free of ice for long periods (supercooling). At these conditions, nucleation of ice is either slow or does not occur at all. However, at lower temperatures nucleation is fast, and ice crystals appear after little or no delay.
Nucleation is a common mechanism which generates first-order phase transitions, and it is the start of the process of forming a new thermodynamic phase. In contrast, new phases at continuous phase transitions start to form immediately.
Nucleation is often very sensitive to impurities in the system. These impurities may be too small to be seen by the naked eye, but still can control the rate of nucleation. Because of this, it is often important to distinguish between heterogeneous nucleation and homogeneous nucleation. Heterogeneous nucleation occurs at nucleation sites on surfaces in the system. Homogeneous nucleation occurs away from a surface.
Characteristics
Nucleation is usually a stochastic (random) process, so even in two identical systems nucleation will occur at different times. A common mechanism is illustrated in the animation to the right. This shows nucleation of a new phase (shown in red) in an existing phase (white). In the existing phase microscopic fluctuations of the red phase appear and decay continuously, until an unusually large fluctuation of the new red phase is so large it is more favourable for it to grow than to shrink back to nothing. This nucleus of the red phase then grows and converts th
|
https://en.wikipedia.org/wiki/Tropical%20vegetation
|
Tropical vegetation is any vegetation in tropical latitudes. Plant life that occurs in climates that are warm year-round is in general more biologically diverse than that at other latitudes. Some tropical areas may receive abundant rain the whole year round, but others have long dry seasons which last several months and may vary in length and intensity with geographic location. These seasonal droughts have great impact on the vegetation, such as in the Madagascar spiny forests. Rainforest vegetation is categorized into five layers. The top layer is the upper tree layer, containing the largest and widest trees in the forest; these trees tend to have very large canopies so that they are fully exposed to sunlight. Below that is the middle tree layer, with more compact vegetation; its trees tend to be thinner as they compete for whatever sunlight they can reach. The third layer is the lower tree layer, made up of tightly packed trees around five to ten meters high; these are young trees trying to grow into the larger canopy trees. The fourth layer is the shrub layer beneath the tree canopy, populated mainly by sapling trees, shrubs, and seedlings. The fifth and final layer is the herb layer, the forest floor, which is mostly bare except for various plants, mosses, and ferns. The forest floor is much denser than the layers above because of the limited sunlight and air movement.
Plant species native to the tropics found in tropical ecosystems are known as tropical plants. Some examples of tropical ecosystems are the Guinean Forests of West Africa, the Madagascar dry deciduous forests, the broadleaf forests of the Thai highlands, and the El Yunque National Forest in Puerto Rico.
Description
The term "tropical vegetation" is frequently used in the sense of lush and luxuriant, but not all the vegetation of the areas of the Earth in tropical climates can be de
|
https://en.wikipedia.org/wiki/Frequency%20scaling
|
In computer architecture, frequency scaling (also known as frequency ramping) is the technique of increasing a processor's frequency so as to enhance the performance of the system containing the processor in question. Frequency ramping was the dominant force in commodity processor performance increases from the mid-1980s until roughly the end of 2004.
The effect of processor frequency on computer speed can be seen by looking at the equation for computer program runtime:
runtime = (instructions per program) × (cycles per instruction) × (time per cycle)
where instructions per program is the total instructions being executed in a given program, cycles per instruction is a program-dependent, architecture-dependent average value, and time per cycle is by definition the inverse of processor frequency. An increase in frequency thus decreases runtime.
However, power consumption in a chip is given by the equation
P = C × V² × F
where P is power consumption, C is the capacitance being switched per clock cycle, V is voltage, and F is the processor frequency (cycles per second). Increases in frequency thus increase the amount of power used in a processor. Increasing processor power consumption led ultimately to Intel's May 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm.
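Plugging illustrative (invented) numbers into the two relations above shows both effects of raising the clock frequency, shorter runtime and higher dynamic power:

```python
# Toy values, purely for illustration of the two equations above.
instructions = 1e9      # instructions in the program
cpi = 1.5               # average cycles per instruction
C = 1e-9                # switched capacitance per cycle, farads
V = 1.2                 # supply voltage, volts

for f_ghz in (2.0, 3.0):
    f = f_ghz * 1e9
    runtime = instructions * cpi / f     # seconds (time per cycle = 1 / f)
    power = C * V * V * f                # watts (dynamic component only)
    print(f"{f_ghz} GHz: runtime = {runtime:.3f} s, dynamic power = {power:.2f} W")
```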
Moore's Law was still in effect when frequency scaling ended. Despite power issues, transistor densities were still doubling every 18 to 24 months. With the end of frequency scaling, new transistors (which are no longer needed to facilitate frequency scaling) are used to add extra hardware, such as additional cores, to facilitate parallel computing - a technique that is being referred to as parallel scaling.
The end of frequency scaling as the dominant cause of processor performance gains has caused an industry-wide shift to parallel computing in the form of multicore processors.
See also
Dynamic frequency scaling
Overclocking
Underclocking
Voltage scaling
|
https://en.wikipedia.org/wiki/Asymptotic%20gain%20model
|
The asymptotic gain model (also known as the Rosenstark method) is a representation of the gain of negative feedback amplifiers given by the asymptotic gain relation:
G = G∞ · T / (1 + T) + G0 · 1 / (1 + T)
where T is the return ratio with the input source disabled (equal to the negative of the loop gain in the case of a single-loop system composed of unilateral blocks), G∞ is the asymptotic gain and G0 is the direct transmission term. This form for the gain can provide intuitive insight into the circuit and often is easier to derive than a direct attack on the gain.
Figure 1 shows a block diagram that leads to the asymptotic gain expression. The asymptotic gain relation also can be expressed as a signal flow graph. See Figure 2. The asymptotic gain model is a special case of the extra element theorem.
As follows directly from limiting cases of the gain expression, the asymptotic gain G∞ is simply the gain of the system when the return ratio approaches infinity:
G∞ = G evaluated as T → ∞
while the direct transmission term G0 is the gain of the system when the return ratio is zero:
G0 = G evaluated at T = 0
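A quick numeric check of the relation, using made-up values for G∞, G0 and the return ratio T:

```python
# Illustrative values only.
G_inf = 10.0    # asymptotic gain (gain as T -> infinity)
G_0 = 0.05      # direct transmission term (gain at T = 0)

def gain(T):
    """Asymptotic gain relation: G = G_inf * T/(1+T) + G_0 / (1+T)."""
    return G_inf * T / (1 + T) + G_0 / (1 + T)

for T in (0.0, 1.0, 100.0, 1e6):
    print(f"T = {T:g}: G = {gain(T):.4f}")
# As T grows, G approaches G_inf; at T = 0 it reduces to G_0.
```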
Advantages
This model is useful because it completely characterizes feedback amplifiers, including loading effects and the bilateral properties of amplifiers and feedback networks.
Often feedback amplifiers are designed such that the return ratio T is much greater than unity. In this case, and assuming the direct transmission term G0 is small (as it often is), the gain G of the system is approximately equal to the asymptotic gain G∞.
The asymptotic gain is (usually) only a function of passive elements in a circuit, and can often be found by inspection.
The feedback topology (series-series, series-shunt, etc.) need not be identified beforehand as the analysis is the same in all cases.
Implementation
Direct application of the model involves these steps:
Select a dependent source in the circuit.
Find the return ratio for that source.
Find the gain G∞ directly from the circuit by replacing the circuit with one corresponding to T = ∞.
Find the ga
|
https://en.wikipedia.org/wiki/Cancer%20selection
|
Cancer selection can be viewed through the lens of natural selection. The animal host's body is the environment that applies the selective pressures upon cancer cells. The fittest cancer cells have traits that allow them to outcompete other cancer cells to which they are related but from which they are genetically different. This genetic diversity of cells within a tumor gives cancer an evolutionary advantage over the host's ability to inhibit and destroy tumors. Therefore, other selective pressures, such as clinical and pharmaceutical treatments, are needed to help destroy the large number of genetically diverse cancerous cells within a tumor. It is the high genetic diversity among cancer cells within a tumor that makes cancer a formidable foe for the survival of animal hosts. It has also been proposed that cancer selection is a selective force that has driven the evolution of animals; cancer and animals have thus been paired as competitors in co-evolution throughout time.
Natural selection
Evolution, which is driven by natural selection, is the cornerstone of nearly all branches of biology, including cancer biology. In 1859, Charles Darwin's book On the Origin of Species was published, in which Darwin proposed his theory of evolution by means of natural selection. Natural selection is the force that drives changes in the phenotypes observed in populations over time, and it is therefore responsible for the diversity amongst all living things. It is through the pressures applied by natural selection upon individuals that evolutionary change occurs over time. Natural selection is simply the set of selective pressures acting upon individuals within a population as their environment changes, favouring the traits that are best suited to that change.
Selection and cancer
These same observations that Darwin proposed for the diversity in phenotypes amongst all living things can also be applied to cancer biology to explai
|
https://en.wikipedia.org/wiki/Food%20technology
|
Food technology is a branch of food science that addresses the production, preservation, quality control and research and development of food products.
Early scientific research into food technology concentrated on food preservation. Nicolas Appert's development in 1810 of the canning process was a decisive event. The process was not yet called canning, and Appert did not really know the principle on which it worked, but canning has had a major impact on food preservation techniques.
Louis Pasteur's research on the spoilage of wine and his description of how to avoid spoilage in 1864, was an early attempt to apply scientific knowledge to food handling. Besides research into wine spoilage, Pasteur researched the production of alcohol, vinegar, wines and beer, and the souring of milk. He developed pasteurization – the process of heating milk and milk products to destroy food spoilage and disease-producing organisms. In his research into food technology, Pasteur became the pioneer into bacteriology and of modern preventive medicine.
Developments
Developments in food technology have contributed greatly to the food supply and have changed our world. Some of these developments are:
Instantized Milk Powder – Instant milk powder has become the basis for a variety of new products that are rehydratable. This process increases the surface area of the powdered product by partially rehydrating spray-dried milk powder.
Freeze-drying – The first application of freeze drying was most likely in the pharmaceutical industry; however, a successful large-scale industrial application of the process was the development of continuous freeze drying of coffee.
High-Temperature Short Time Processing – These processes, for the most part, are characterized by rapid heating and cooling, holding for a short time at a relatively high temperature and filling aseptically into sterile containers.
Decaffeination of Coffee and Tea – Decaffeinated coffee and tea was first developed on
|
https://en.wikipedia.org/wiki/Bitmain
|
Bitmain Technologies Ltd., is a privately owned company headquartered in Beijing, China, that designs application-specific integrated circuit (ASIC) chips for bitcoin mining.
History
It was founded by Micree Zhan and Jihan Wu in 2013. Prior to founding Bitmain, Zhan was running DivaIP, a startup that allowed users to stream television to a computer screen via a set-top box, and Wu was a financial analyst and private equity fund manager.
By 2018 it had become the world's largest designer of application-specific integrated circuit (ASIC) chips for bitcoin mining. The company also operates BTC.com and Antpool, historically two of the largest mining pools for bitcoin. In an effort to boost Bitcoin Cash (BCH) prices, Antpool "burned" 12% of the BCH they mined by sending them to irrecoverable addresses. Bitmain was reportedly profitable in early 2018, with a net profit of $742.7 million in the first half of 2018, and negative operating cash flow. TechCrunch reported that unsold inventory ballooned to one billion dollars in the second quarter of 2018. Bitmain's first product was the Antminer S1, an ASIC Bitcoin miner producing 180 gigahashes per second (GH/s) while using 80–200 watts of power. Bitmain as of 2018 had 11 mining farms operating in China. Bitmain was involved in the 2018 Bitcoin Cash split, siding with Bitcoin Cash ABC alongside Roger Ver. In December 2018 the company laid off about half of its 3000 staff. The company has since closed its offices in Israel and the Netherlands, while significantly downsizing its Texas mining operation. In February 2019, it was reported that Bitmain had lost "about $500 million" in the third quarter of 2018. Bitmain issued a statement saying "the rumors are not true and we will make announcements in due course."
In June 2021, Bitmain suspended spot sales of its machines globally, aiming to support local prices following Beijing's crackdown.
Bitmain's attempts at initial public offering
In June 2018, Wu told Bloomberg that Bitmain was conside
|
https://en.wikipedia.org/wiki/Wiener%E2%80%93Khinchin%20theorem
|
In applied mathematics, the Wiener–Khinchin theorem or Wiener–Khintchine theorem, also known as the Wiener–Khinchin–Einstein theorem or the Khinchin–Kolmogorov theorem, states that the autocorrelation function of a wide-sense-stationary random process has a spectral decomposition given by the power spectral density of that process.
History
Norbert Wiener proved this theorem for the case of a deterministic function in 1930; Aleksandr Khinchin later formulated an analogous result for stationary stochastic processes and published that probabilistic analogue in 1934. Albert Einstein explained, without proofs, the idea in a brief two-page memo in 1914.
The case of a continuous-time process
For continuous time, the Wiener–Khinchin theorem says that if x(t) is a wide-sense-stationary random process whose autocorrelation function r(τ) (sometimes called autocovariance), defined in terms of statistical expected value as r(τ) = E[x(t) x*(t − τ)], exists and is finite at every lag τ, then there exists a monotone function F(ν) in the frequency domain −∞ < ν < ∞, or equivalently a non-negative Radon measure on the frequency domain, such that
r(τ) = ∫ e^(2πiντ) dF(ν), the integral being taken over all frequencies ν,
where the integral is a Riemann–Stieltjes integral. The asterisk denotes complex conjugate, and can be omitted if the random process is real-valued. This is a kind of spectral decomposition of the auto-correlation function. F is called the power spectral distribution function and is a statistical distribution function. It is sometimes called the integrated spectrum.
The Fourier transform of x(t) does not exist in general, because stochastic random functions are not generally either square-integrable or absolutely integrable. Nor is r(τ) assumed to be absolutely integrable, so it need not have a Fourier transform either.
However, if the measure is absolutely continuous, for example, if the process is purely indeterministic, then F is differentiable almost everywhere and we can write dF(ν) = S(ν) dν. In this case, one can determine S(ν), the power spectral density of x(t), by taking the averaged derivative of F.
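A small discrete-time numerical illustration of the same Fourier pairing (this is the circular, finite-sample analogue, not the continuous-time theorem itself; the filtered-noise signal is synthetic):

```python
import numpy as np

# The DFT of the circular sample autocorrelation of a sequence equals its
# periodogram: autocorrelation and power spectrum are a Fourier-transform pair.
rng = np.random.default_rng(1)
n = 1024
x = np.convolve(rng.standard_normal(n), [1.0, 0.7, 0.2], mode="same")

# Circular sample autocorrelation r[k] = (1/n) * sum_m x[m] * x[(m+k) mod n]
acf = np.array([np.dot(x, np.roll(x, -k)) for k in range(n)]) / n

psd_from_acf = np.fft.fft(acf).real           # Fourier transform of the ACF
periodogram = np.abs(np.fft.fft(x)) ** 2 / n  # direct power spectral estimate

print("max difference:", np.max(np.abs(psd_from_acf - periodogram)))
```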
|
https://en.wikipedia.org/wiki/Refraction%20networking
|
Refraction networking, also known as decoy routing, is a research anti-censorship approach that would allow users to circumvent a censor without using any individual proxy servers. Instead, it implements proxy functionality at the core of partner networks, such as those of Internet service providers, outside the censored country. These networks would discreetly provide censorship circumvention for "any connection that passes through their networks." This prevents censors from selectively blocking proxy servers and makes censorship more expensive, in a strategy similar to collateral freedom.
The approach was independently invented by teams at the University of Michigan, the University of Illinois, and Raytheon BBN Technologies. There are five existing protocols: Telex, TapDance, Cirripede, Curveball, and Rebound. These teams are now working together to develop and deploy refraction networking with support from the U.S. Department of State.
See also
Domain fronting
|
https://en.wikipedia.org/wiki/NinKi%3A%20Urgency%20of%20Proximate%20Drawing%20Photograph
|
The NinKi: Urgency of Proximate Drawing Photograph (NinKi:UoPDP) was initiated by Bangladeshi visual artist Firoz Mahmud (ফিরোজ মাহমুদ, フィロズ・マハムド). It is a drawing-photograph project that rhetorically "rescues" popular icons with geometric structure drawings, or tactically renders photographic images of people static. His compartmentalized, pigeonhole-like doodles were made on found images in various printed media and also appeared in his sketchbooks, books, notebooks and often in borrowed books. The word 'Ninki' (人気) is a Japanese word meaning popular or popularity. The NinKi: UoPDP project of drawing on photographs consists of numerous archetypal images of popular celebrities in vague appearance; their careers, characters, fame, obscurity and activities are insurgent and idiosyncratic. Mahmud began with any image, then focused specifically on the Bengal tiger and, more significantly, on Japanese sumo wrestlers, as an artist based in Japan who is fascinated by sports and media and interested in the humorous aspects of the entertainment industries.
About
The 'Urgency of Proximate Drawing Photograph' (NinKi:UoPDP) is one of Firoz Mahmud's art projects, started anonymously. Gradually, at the request of curators and many of his friends, he began to exhibit it in public spaces and major art venues. It was initially created to change the meaning of the original photographic images, which Mahmud took, collected or found, as an experiment in how people react to seeing their popular icons.
History
From the inception, when Firoz Mahmud first showed these drawing photographs, he presented them anonymously, without using his name, on billboards, in underground stations, on signage boards and in other exhibition venues in Japan. He started this ongoing art project in Tokyo in 2008, drawing doodles in his leisure time on newspapers, magazines, and found images. NinKi: Urgency of Proximate Drawing was first exhibited at the 9th Sharjah Art Biennial in 2009 in Sharjah, UA
|
https://en.wikipedia.org/wiki/Fluorosilicate%20glass
|
Fluorosilicate glass (FSG) is a glass material composed primarily of fluorine, silicon and oxygen. It has a number of uses in industry and manufacturing, especially in semiconductor fabrication where it forms an insulating dielectric. The related fluorosilicate glass-ceramics have good mechanical and chemical properties.
Semiconductor fabrication
FSG has a small relative dielectric constant (low-κ dielectric) and is used in between metal copper interconnect layers during silicon integrated circuit fabrication process. It is widely used by semiconductor fabrication plants on geometries under 0.25 microns (μ). FSG is effectively a fluorine-containing silicon dioxide (κ=3.5, while κ of undoped silicon dioxide is 3.9). FSG is used by IBM. Intel started using Cu metal layers and FSG on its 1.2 GHz Pentium processor at 130 nm complementary metal–oxide–semiconductor (CMOS). Taiwan Semiconductor Manufacturing Company (TSMC) combined FSG and copper in the Altera APEX.
Fluorosilicate glass-ceramics
Fluorosilicate glass-ceramics are crystalline or semi-crystalline solids formed by careful cooling of molten fluorosilicate glass. They have good mechanical properties.
Potassium fluororichterite based materials are composed from tiny interlocked rod-shaped amphibole crystals; they have good resistance to chemicals and can be used in microwave ovens. Richterite glass-ceramics are used for high-performance tableware.
Fluorosilicate glass-ceramics with sheet structure, derived from mica, are strong and machinable. They find a number of uses and can be used in high vacuum and as dielectrics and precision ceramic components. A number of mica and mica-fluoroapatite glass-ceramics were studied as biomaterials.
See also
Fluoride glass
Glass
Silicate
|
https://en.wikipedia.org/wiki/Cheating%20%28biology%29
|
Cheating is a term used in behavioral ecology and ethology to describe behavior whereby organisms receive a benefit at the cost of other organisms. Cheating is common in many mutualistic and altruistic relationships. A cheater is an individual who does not cooperate (or cooperates less than their fair share) but can potentially gain the benefit from others cooperating. Cheaters are also those who selfishly use common resources to maximize their individual fitness at the expense of a group. Natural selection favors cheating, but there are mechanisms to regulate it. The stress gradient hypothesis states that facilitation, cooperation or mutualism should be more common in stressful environments, while cheating, competition or parasitism are common in benign environments (i.e., nutrient excess).
Theoretical models
Organisms communicate and cooperate to perform a wide range of behaviors. Mutualism, or mutually beneficial interactions between species, is common in ecological systems. These interactions can be thought of "biological markets" in which species offer partners goods that are relatively inexpensive for them to produce and receive goods that are more expensive or even impossible for them to produce. However, these systems provide opportunities for exploitation by individuals that can obtain resources while providing nothing in return. Exploiters can take on several forms: individuals outside a mutualistic relationship who obtain a commodity in a way that confers no benefit to either mutualist, individuals who receive benefits from a partner but have lost the ability to give any in return, or individuals who have the option of behaving mutualistically towards their partners but chose not to do so.
Cheaters, who do not cooperate but benefit from others who do cooperate gain a competitive edge. In an evolutionary context, this competitive edge refers to a greater ability to survive or to reproduce. If individuals who cheat are able to gain survivorship and reprod
|
https://en.wikipedia.org/wiki/Highly%20accelerated%20life%20test
|
A highly accelerated life test (HALT) is a stress testing methodology for enhancing product reliability in which prototypes are stressed to a much higher degree than expected from actual use in order to identify weaknesses in the design or manufacture of the product. Manufacturing and research and development organizations in the electronics, computer, medical, and military industries use HALT to improve product reliability.
HALT can be effectively used multiple times over a product's lifetime. During product development, it can find design weaknesses earlier in the product lifecycle, when changes are much less costly to make. By finding weaknesses and making changes early, HALT can lower product development costs and compress time to market. When HALT is used at the time a product is being introduced into the market, it can expose problems caused by new manufacturing processes. When used after a product has been introduced into the market, HALT can be used to audit product reliability following changes in components, manufacturing processes, suppliers, etc.
Overview
Highly accelerated life testing (HALT) techniques are important in uncovering many of the weak links of a new product. These discovery tests rapidly find weaknesses using accelerated stress conditions. The goal of HALT is to proactively find weaknesses and fix them, thereby increasing product reliability. Because of its accelerated nature, HALT is typically faster and less expensive than traditional testing techniques.
HALT is a test technique called test-to-fail, where a product is tested until failure. HALT does not help to determine or demonstrate the reliability value or failure probability in field. Many accelerated life tests are test-to-pass, meaning they are used to demonstrate the product life or reliability.
It is highly recommended to perform HALT in the initial phases of product development to uncover weak links in a product, so that there is a better chance and more time to modify and imp
|
https://en.wikipedia.org/wiki/Facilitation%20cascade
|
A facilitation cascade is a sequence of ecological interactions that occur when a species benefits a second species that in turn has a positive effect on a third species. These facilitative interactions can take the form of amelioration of environmental stress and/or provision of refuge from predation. Autogenic ecosystem engineering species, structural species, habitat-forming species, and foundation species are associated with the most commonly recognized examples of facilitation cascades, sometimes referred to as a habitat cascades. Facilitation generally is a much broader concept that includes all forms of positive interactions including pollination, seed dispersal, and co-evolved commensalism and mutualistic relationships, such as between cnidarian hosts and symbiodinium in corals, and between algae and fungi in lichens. As such, facilitation cascades are widespread through all of the earth's major biomes with consistently positive effects on the abundance and biodiversity of associated organisms.
Overview
Facilitation cascades occur when prevalent foundation species, or less abundant but ecologically important keystone species, are involved in a hierarchy of positive interactions and consist of a primary facilitator which positively affects one or more secondary facilitators which support a suite of beneficiary species. Facilitation cascades at a minimum have a primary and secondary facilitator, although tertiary, quaternary, etc. facilitators may be found in some systems.
A typical example of facilitation cascades in a tropical coastal ecosystem
Origin of concept and related terms
The term facilitation cascade was coined by Altieri, Silliman, and Bertness during a study on New England cobblestone beaches to explain the chain of positive interactions that allow a diverse community to exist in a habitat that is otherwise characterized by substrate instability, elevated temperatures, and desiccation stress. Cordgrass is able to establish independently, and t
|
https://en.wikipedia.org/wiki/List%20of%20mathematical%20series
|
This list of mathematical series contains formulae for finite and infinite sums. It can be used in conjunction with other tools for evaluating sums.
Here, 0^0 is taken to have the value 1.
{x} denotes the fractional part of x.
B_n(x) is a Bernoulli polynomial.
B_n is a Bernoulli number, and here, B_1 = −1/2.
E_n is an Euler number.
ζ(s) is the Riemann zeta function.
Γ(z) is the gamma function.
ψ_n(z) is a polygamma function.
Li_s(z) is a polylogarithm.
C(n, k) is a binomial coefficient.
exp(x) denotes the exponential of x.
Sums of powers
See Faulhaber's formula.
The first few values are:
See zeta constants.
The first few values are:
(the Basel problem)
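As a quick numerical sanity check of this constant (the Basel problem, the value ζ(2) = π²/6), the partial sums converge slowly but visibly:

```python
from math import pi

# Partial sum of 1/k^2 versus its limit pi^2 / 6 (the Basel problem).
partial = sum(1.0 / k**2 for k in range(1, 100001))
print(partial, pi**2 / 6)   # about 1.644924 vs 1.644934
```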
Power series
Low-order polylogarithms
Finite sums:
1 + z + z² + ⋯ + zⁿ = (1 − z^(n+1)) / (1 − z), for z ≠ 1 (geometric series)
Infinite sums, valid for (see polylogarithm):
The following is a useful property to calculate low-integer-order polylogarithms recursively in closed form:
Exponential function
(cf. mean of Poisson distribution)
(cf. second moment of Poisson distribution)
where T_n(x) is the nth Touchard polynomial.
Trigonometric, inverse trigonometric, hyperbolic, and inverse hyperbolic functions relationship
(versine)
(haversine)
Modified-factorial denominators
Binomial coefficients
(see )
, generating function of the Catalan numbers
, generating function of the Central binomial coefficients
Harmonic numbers
(See harmonic numbers, themselves defined as H_n = 1 + 1/2 + ⋯ + 1/n, and generalized to the real numbers)
Binomial coefficients
(see Multiset)
(see Vandermonde identity)
Trigonometric functions
Sums of sines and cosines arise in Fourier series.
,
Rational functions
An infinite series of any rational function of can be reduced to a finite series of polygamma functions, by use of partial fraction decomposition, as explained here. This fact can also be applied to finite series of rational functions, allowing the result to be computed in constant time even when the series contains a large number of terms.
Exponential function
(see the Landsberg–Schaar relation)
Numeric series
These numeric series can be found by plugging in
|
https://en.wikipedia.org/wiki/Flex%20links
|
Flex links is a network switch feature in Cisco equipment which enables redundancy and load balancing at the layer 2 level. The feature serves as an alternative to Spanning Tree Protocol or link aggregation. A pair of layer 2 interfaces, such as switch ports or port channels, has one interface configured as a backup to the other. If the primary link fails, the backup link takes over traffic forwarding.
At any point in time, only one interface is in the link-up state and actively forwarding traffic. If the primary link shuts down, the standby link takes over, starts forwarding traffic, and becomes the primary link. When the failed link comes back up, it goes into standby mode, becomes the backup link, and does not participate in traffic forwarding. This behaviour can be changed with pre-emption mode, which makes the failed link the primary link again once it becomes available.
Load balancing in Flex links works at the VLAN level. Both ports in the Flex link pair can be made to forward traffic simultaneously. One port in the Flex links pair can be configured to forward traffic belonging to VLANs 1-50 while the other forwards traffic for VLANs 51-100. These mutually exclusive VLAN sets share the traffic load between the two links of the Flex link pair. If one of the ports fails, the other active link forwards all the traffic.
|
https://en.wikipedia.org/wiki/Mouthfeel
|
Mouthfeel refers to the physical sensations in the mouth caused by food or drink, making it distinct from taste. It is a fundamental sensory attribute which, along with taste and smell, determines the overall flavor of a food item. Mouthfeel is also sometimes referred to as texture.
It is used in many areas related to the testing and evaluating of foodstuffs, such as wine-tasting and food rheology. It is evaluated from initial perception on the palate, to first bite, through chewing to swallowing and aftertaste. In wine-tasting, for example, mouthfeel is usually described with a modifier (big, sweet, tannic, chewy, etc.) applied to the general sensation of the wine in the mouth. Research indicates that texture and mouthfeel can also influence satiety, with the effect of viscosity being the most significant.
Mouthfeel is often related to a product's water activity—hard or crisp products having lower water activities and soft products having intermediate to high water activities.
Qualities perceived
Chewiness: The sensation of sustained, elastic resistance from food while it is chewed.
Cohesiveness: Degree to which the sample deforms before rupturing when biting with molars.
Crunchiness: The audible grinding of a food when it is chewed.
Density: Compactness of cross section of the sample after biting completely through with the molars.
Dryness: Degree to which the sample feels dry in the mouth.
Exquisiteness: Perceived quality of the item in question.
Fracturability: Force with which the sample crumbles, cracks or shatters. Fracturability encompasses crumbliness, crispiness, crunchiness and brittleness.
Graininess: Degree to which a sample contains small grainy particles.
Gumminess: Energy required to disintegrate a semi-solid food to a state ready for swallowing.
Hardness: Force required to deform the product to a given distance, i.e., force to compress between molars, bite through with incisors, compress between tongue and palate.
Heaviness: Weight of product perceived when fir
|
https://en.wikipedia.org/wiki/Fleming%20Prize%20Lecture
|
The Fleming Prize Lecture was started by the Microbiology Society in 1976 and named after Alexander Fleming, one of the founders of the society. It is for early career researchers, generally within 12 years of being awarded their PhD, who have an outstanding independent research record making a distinct contribution to microbiology. Nominations can be made by any member of the society. Nominees do not have to be members.
The award is £1,000 and the awardee is expected to give a lecture based on their research at the Microbiology Society's Annual Conference.
List
The following have been awarded this prize.
1976 Graham Gooday Biosynthesis of the Fungal Wall – Mechanisms and Implications
1977 Peter Newell Cellular Communication During Aggregation of Dictyostelium
1978 George AM Cross Immunochemical Aspects of Antigenic Variation in Trypanosomes
1979 John Beringer The Development of Rhizobium Genetics
1980 Duncan James McGeoch Structural Analysis of Animal Virus Genomes
1981 Dave Sherratt The Maintenance and Propagation of Plasmid Genes in Bacterial Populations
1982 Brian Spratt Penicillin-binding Proteins and the Future of β-Lactam Antibiotics
1983 Ray Dixon The Genetic Complexity of Nitrogen Fixation Herpes Simplex and The Herpes Complex
1984 Paul Nurse Cell Cycle Control in Yeast
1985 Jeffrey Almond Genetic Diversity in Small RNA Viruses
1986 Douglas Kell Forces, Fluxes and Control of Microbial Metabolism
1987 Christopher Higgins Molecular Mechanisms of Membrane Transport: from Microbes to Man
1988 Gordon Dougan An Oral Route to Rational Vaccination
1989 Andrew Davison Varicella-Zoster Virus
1989 Graham J Boulnois Molecular Dissection of the Host-Microbe Interaction in Infection
1990 No award
1991 Lynne Boddy The Ecology of Wood- and Litter-rotting Basidiomycete Fungi
1992 Geoffrey L Smith Vaccinia Virus Glycoproteins and Immune Evasion
1993 Neil Gow Directional Growth and Guidance Systems of Fungal Pathogens
1994 Ian Roberts Bacterial Polysaccharides in Sickness and
|
https://en.wikipedia.org/wiki/Software%20diversity
|
Software diversity is a research field about the comprehension and engineering of diversity in the context of software.
Areas
The different areas of software diversity are discussed in surveys on diversity for fault-tolerance or for security.
The main areas are:
design diversity, n-version programming, data diversity for fault tolerance
randomization
software variability
Techniques
Code transformations
It is possible to amplify software diversity through automated transformation processes that create synthetic diversity. A "multicompiler" is a compiler that embeds a diversification engine. A multi-variant execution environment (MVEE) is responsible for selecting the variants to execute and comparing their outputs.
Fred Cohen was among the very early promoters of such an approach. He proposed a series of rewriting and code reordering transformations that aim at producing massive quantities of different versions of operating systems functions. These ideas have been developed over the years and have led to the construction of integrated obfuscation schemes to protect key functions in large software systems.
Another approach to increasing software diversity for protection consists of adding randomness to certain core processes, such as memory loading. Randomness implies that all versions of the same program run differently from each other, which in turn creates a diversity of program behaviors. This idea was initially proposed and experimented with by Stephanie Forrest and her colleagues.
Recent work on automatic software diversity explores different forms of program transformations that slightly vary the behavior of programs. The goal is to evolve one program into a population of diverse programs that all provide similar services to users, but with a different code. This diversity of code enhances the protection of users against one single attack that could crash all programs at the same time.
Transformation operators include:
code layout randomization: reorder functions
|
https://en.wikipedia.org/wiki/Resistor%E2%80%93transistor%20logic
|
Resistor–transistor logic (RTL), sometimes also known as transistor–resistor logic (TRL), is a class of digital circuits built using resistors as the input network and bipolar junction transistors (BJTs) as switching devices. RTL is the earliest class of transistorized digital logic circuit; it was succeeded by diode–transistor logic (DTL) and transistor–transistor logic (TTL).
RTL circuits were first constructed with discrete components, but in 1961 it became the first digital logic family to be produced as a monolithic integrated circuit. RTL integrated circuits were used in the Apollo Guidance Computer, whose design began in 1961 and which first flew in 1966.
Implementation
RTL inverter
A bipolar transistor switch is the simplest RTL gate (inverter or NOT gate) implementing logical negation. It consists of a common-emitter stage with a base resistor connected between the base and the input voltage source. The role of the base resistor is to expand the very small transistor input voltage range (about 0.7 V) to the logical "1" level (about 3.5 V) by converting the input voltage into current. Its resistance is settled by a compromise: it is chosen low enough to saturate the transistor and high enough to obtain high input resistance. The role of the collector resistor is to convert the collector current into voltage; its resistance is chosen high enough to saturate the transistor and low enough to obtain low output resistance (high fan-out).
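A deliberately simplified behavioural model can make the comparator description above concrete. In the sketch below, the stage is treated as an ideal threshold switch; the ~0.7 V threshold and 3.5 V logic-high level come from the text, while ignoring resistor values, saturation voltage, and loading is my own simplification.

# Idealized behavioural model of the RTL inverter described above (not a circuit simulation).
V_SUPPLY = 3.5      # logical "1" level, volts (from the text)
V_THRESHOLD = 0.7   # base-emitter turn-on voltage, volts (from the text)

def rtl_inverter(v_in: float) -> float:
    """Return the output voltage of an idealized RTL inverter (NOT gate)."""
    transistor_on = v_in > V_THRESHOLD         # base current drives the transistor into conduction
    return 0.0 if transistor_on else V_SUPPLY  # saturated -> output low, cut off -> output high

for v_in in (0.0, V_SUPPLY):
    print(v_in, "->", rtl_inverter(v_in))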
One-transistor RTL NOR gate
With two or more base resistors (R3 and R4) instead of one, the inverter becomes a two-input RTL NOR gate (see the figure on the right). The logical operation OR is performed by applying consecutively the two arithmetic operations addition and comparison (the input resistor network acts as a parallel voltage summer with equally weighted inputs and the following common-emitter transistor stage as a voltage comparator with a threshold about 0.7 V). The equivalent resistance of all the resistors
|
https://en.wikipedia.org/wiki/Branches%20of%20physics
|
Physics is a scientific discipline that seeks to construct and experimentally test theories of the physical universe. These theories vary in their scope and can be organized into several distinct branches, which are outlined in this article.
Classical mechanics
Classical mechanics is a model of the physics of forces acting upon bodies; includes sub-fields to describe the behaviors of solids, gases, and fluids. It is often referred to as "Newtonian mechanics" after Isaac Newton and his laws of motion. It also includes the classical approach as given by Hamiltonian and Lagrange methods. It deals with the motion of particles and the general system of particles.
There are many branches of classical mechanics, such as: statics, dynamics, kinematics, continuum mechanics (which includes fluid mechanics), statistical mechanics, etc.
Mechanics: A branch of physics in which we study the object and properties of an object in form of a motion under the action of the force.
Thermodynamics and statistical mechanics
The first chapter of The Feynman Lectures on Physics is about the existence of atoms, which Feynman considered to be the most compact statement of physics, from which science could easily result even if all other knowledge was lost. By modeling matter as collections of hard spheres, it is possible to describe the kinetic theory of gases, upon which classical thermodynamics is based.
Thermodynamics studies the effects of changes in temperature, pressure, and volume on physical systems on the macroscopic scale, and the transfer of energy as heat. Historically, thermodynamics developed out of the desire to increase the efficiency of early steam engines.
The starting point for most thermodynamic considerations is the laws of thermodynamics, which postulate that energy can be exchanged between physical systems as heat or work. They also postulate the existence of a quantity named entropy, which can be defined for any system. In thermodynamics, interactions between la
|
https://en.wikipedia.org/wiki/Counting%20board
|
The counting board is the precursor of the abacus, and the earliest known form of a counting device (excluding fingers and other very simple methods). Counting boards were made of stone or wood, and the counting was done on the board with beads, pebbles, etc. Not many boards survive because of the perishable materials used in their construction, or the difficulty of identifying an object as a counting board. The counting board was invented to facilitate and streamline numerical calculations in ancient civilizations. Its inception addressed the need for a practical tool to perform arithmetic operations efficiently. By using counters or tokens on a board with designated sections, people could easily keep track of quantities, trade, and financial transactions. This invention not only enhanced accuracy but also fueled the development of more sophisticated mathematical concepts and systems throughout history.
The counting board did not include a zero as we have come to understand it today. It was primarily used with Roman numerals to calculate. The system was based on base ten or base twenty, with the lines representing tens or twenties and the spaces representing fives.
The oldest known counting board, the Salamis Tablet () was discovered on the Greek island of Salamis in 1899. It is thought to have been used as more of a gaming board than a calculating device. It is marble, about 150 x 75 x 4.5 cm, and is in the Epigraphical Museum in Athens. It has carved Greek letters and parallel grooves.
The German mathematician Adam Ries described the use of counting boards in .
See also
Abacus
Calculator
|
https://en.wikipedia.org/wiki/Fermentation%20in%20food%20processing
|
In food processing, fermentation is the conversion of carbohydrates to alcohol or organic acids using microorganisms—yeasts or bacteria—under anaerobic (oxygen-free) conditions. Fermentation usually implies that the action of microorganisms is desired. The science of fermentation is known as zymology or zymurgy.
The term "fermentation" sometimes refers specifically to the chemical conversion of sugars into ethanol, producing alcoholic drinks such as wine, beer, and cider. However, similar processes take place in the leavening of bread (CO2 produced by yeast activity), and in the preservation of sour foods with the production of lactic acid, such as in sauerkraut and yogurt.
Other widely consumed fermented foods include vinegar, olives, and cheese. More localised foods prepared by fermentation may also be based on beans, grain, vegetables, fruit, honey, dairy products, and fish.
History and prehistory
Brewing and winemaking
Natural fermentation precedes human history. Since ancient times, humans have exploited the fermentation process. The earliest archaeological evidence of fermentation is 13,000-year-old residues of a beer, with the consistency of gruel, found in a cave near Haifa in Israel. Another early alcoholic drink, made from fruit, rice, and honey, dates from 7000 to 6600 BC, in the Neolithic Chinese village of Jiahu, and winemaking dates from ca. 6000 BC, in Georgia, in the Caucasus area. Seven-thousand-year-old jars containing the remains of wine, now on display at the University of Pennsylvania, were excavated in the Zagros Mountains in Iran. There is strong evidence that people were fermenting alcoholic drinks in Babylon ca. 3000 BC, ancient Egypt ca. 3150 BC, pre-Hispanic Mexico ca. 2000 BC, and Sudan ca. 1500 BC.
Discovery of the role of yeast
The French chemist Louis Pasteur founded zymology, when in 1856 he connected yeast to fermentation.
When studying the fermentation of sugar to alcohol by yeast, Pasteur concluded that the fermentation wa
|
https://en.wikipedia.org/wiki/Photokinesis
|
Photokinesis is a change in the velocity of movement of an organism as a result of changes in light intensity. The alteration in speed is independent of the direction from which the light is shining. Photokinesis is described as positive if the velocity of travel is greater with an increase in light intensity and negative if the velocity is slower. If a group of organisms with a positive photokinetic response is swimming in a partially shaded environment, there will be fewer organisms per unit of volume in the sunlit portion than in the shaded parts. This may be beneficial for the organisms if it is unfavourable to their predators, or it may be propitious to them in their quest for prey.
In photosynthetic prokaryotes, the mechanism for photokinesis appears to be an energetic process. In cyanobacteria, for example, an increase in illumination results in an increase of photophosphorylation which enables an increase in metabolic activity. However the behaviour is also found among eukaryotic microorganisms, including those like Astasia longa which are not photosynthetic, and in these, the mechanism is not fully understood. In Euglena gracilis, the rate of swimming has been shown to speed up with increased light intensity until the light reaches a certain saturation level, beyond which the swimming rate declines.
The sea slug Discodoris boholiensis also displays positive photokinesis; it is nocturnal and moves slowly at night, but much faster when caught in the open during daylight hours. Moving faster in the exposed environment should reduce predation and enable it to conceal itself as soon as possible, but its brain is quite incapable of working this out. Photokinesis is common in tunicate larvae, which accumulate in areas with low light intensity just before settlement, and the behaviour is also present in juvenile fish such as sockeye salmon smolts.
See also
Kinesis (biology)
Phototaxis
Phototropism
|
https://en.wikipedia.org/wiki/Short-time%20Fourier%20transform
|
The short-time Fourier transform (STFT) is a Fourier-related transform used to determine the sinusoidal frequency and phase content of local sections of a signal as it changes over time. In practice, the procedure for computing STFTs is to divide a longer time signal into shorter segments of equal length and then compute the Fourier transform separately on each shorter segment. This reveals the Fourier spectrum on each shorter segment. One then usually plots the changing spectra as a function of time, known as a spectrogram or waterfall plot, such as commonly used in software defined radio (SDR) based spectrum displays. Full bandwidth displays covering the whole range of an SDR commonly use fast Fourier transforms (FFTs) with 2^24 points on desktop computers.
Forward STFT
Continuous-time STFT
Simply, in the continuous-time case, the function to be transformed is multiplied by a window function which is nonzero for only a short period of time. The Fourier transform (a one-dimensional function) of the resulting signal is taken, then the window is slid along the time axis until the end resulting in a two-dimensional representation of the signal. Mathematically, this is written as:
where is the window function, commonly a Hann window or Gaussian window centered around zero, and is the signal to be transformed (note the difference between the window function and the frequency ). is essentially the Fourier transform of , a complex function representing the phase and magnitude of the signal over time and frequency. Often phase unwrapping is employed along either or both the time axis, , and frequency axis, , to suppress any jump discontinuity of the phase result of the STFT. The time index is normally considered to be "slow" time and usually not expressed in as high resolution as time . Given that the STFT is essentially a Fourier transform times a window function, the STFT is also called windowed Fourier transform or time-dependent Fourier transform.
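The procedure just described can be sketched directly in code: window a local segment, take its Fourier transform, and slide the window along the signal. The window length, hop size, and test tone below are illustrative choices, not values from the text.

# Minimal discrete STFT sketch: windowed FFTs of overlapping segments.
import numpy as np

def stft(x: np.ndarray, win_len: int = 256, hop: int = 128) -> np.ndarray:
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        segment = x[start:start + win_len] * window   # window the local section
        frames.append(np.fft.rfft(segment))           # Fourier transform of that section
    return np.array(frames)   # shape: (time frames, frequency bins)

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 440 * t)        # 440 Hz test tone
spectrogram = np.abs(stft(signal)) ** 2     # squared magnitude gives the spectrogram values
print(spectrogram.shape)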
Disc
|
https://en.wikipedia.org/wiki/Fabric%20computing
|
Fabric computing or unified computing involves constructing a computing fabric consisting of interconnected nodes that look like a weave or a fabric when seen collectively from a distance.
Usually the phrase refers to a consolidated high-performance computing system consisting of loosely coupled storage, networking and parallel processing functions linked by high bandwidth interconnects (such as 10 Gigabit Ethernet and InfiniBand) but the term has also been used to describe platforms such as the Azure Services Platform and grid computing in general (where the common theme is interconnected nodes that appear as a single logical unit).
The fundamental components of fabrics are "nodes" (processor(s), memory, and/or peripherals) and "links" (functional connections between nodes). While the term "fabric" has also been used in association with storage area networks and with switched fabric networking, the introduction of compute resources provides a complete "unified" computing system. Other terms used to describe such fabrics include "unified fabric", "data center fabric" and "unified data center fabric".
Ian Foster, director of the Computation Institute at the Argonne National Laboratory and University of Chicago suggested in 2007 that grid computing "fabrics" were "poised to become the underpinning for next-generation enterprise IT architectures and be used by a much greater part of many organizations".
History
While the term has been in use since the mid to late 1990s the growth of cloud computing and Cisco's evangelism of unified data center fabrics followed by unified computing (an evolutionary data center architecture whereby blade servers are integrated or unified with supporting network and storage infrastructure) starting March 2009 has renewed interest in the technology.
There have been mixed reactions to Cisco's architecture, particularly from rivals who claim that these proprietary systems will lock out other vendors. Analysts claim that this "ambitiou
|
https://en.wikipedia.org/wiki/Ace%20Stream
|
Ace Stream is a peer-to-peer multimedia streaming protocol, built using BitTorrent technology. Ace Stream has been recognized by sources as a potential method for broadcasting and viewing bootlegged live video streams. The protocol functions as both a client and a server. When users stream a video feed using Ace Stream, they are simultaneously downloading from peers and uploading the same video to other peers.
History
Ace Stream began under the name TorrentStream as a pilot project to use BitTorrent technology to stream live video. In 2013, TorrentStream was re-released under the name Ace Stream.
|
https://en.wikipedia.org/wiki/Volyn%20biota
|
The Volyn biota are fossilized microorganisms found in rock samples from miarolitic cavities of igneous rocks collected in Zhytomyr Oblast, Ukraine. It is within the historical region of Volyn, hence the name of the find. Exceptionally well-preserved, they were dated to 1.5 Ga, within the "Boring Billion" period of the Proterozoic geological eon.
History of the discovery
The Volyn biota were found in samples of miarolitic pegmatites ("chamber pegmatites") collected from the Korosten pluton of the Ukrainian Shield. They were described as early as 1987, but interpreted as abiogenic formations. In 2000, these formations were reinterpreted as fossilized cyanobacteria from geyser-type deposits. Until very recently the origin of the Korosten pegmatites was not fully understood, but they were dated to 1.8–1.7 Ga.
Franz et al. (2022, 2023), investigating newly recovered samples they date to 1.5 Ga, described the morphology and the internal structure of Volyn biota and reported the presence of different types of filaments, of varying diameters, shapes and branching in the studied organisms, and provided evidence of the presence of fungi-like organisms and Precambrian continental deep biosphere. Some fossils give evidence of sessility, while others of free-living lifestyle.
Usually Precambrian fossils are not well preserved, but the Volyn biota had exceptional conditions for fossilization in cavities with silicon tetrafluoride-rich fluids. The cavities also preserved them from further diagenetic-metamorphic overprint.
The Volyn biota lend additional support to the claim that filamentous fossils dated to 2.4 Ga from the Ongeluk Formation (Griqualand West, South Africa) were also fungi-like organisms.
|
https://en.wikipedia.org/wiki/Canonical%20map
|
In mathematics, a canonical map, also called a natural map, is a map or morphism between objects that arises naturally from the definition or the construction of the objects. Often, it is a map which preserves the widest amount of structure. A choice of a canonical map sometimes depends on a convention (e.g., a sign convention).
A closely related notion is a structure map or structure morphism; the map or morphism that comes with the given structure on the object. These are also sometimes called canonical maps.
A canonical isomorphism is a canonical map that is also an isomorphism (i.e., invertible). In some contexts, it might be necessary to address an issue of choices of canonical maps or canonical isomorphisms; for a typical example, see prestack.
For a discussion of the problem of defining a canonical map see Kevin Buzzard's talk at the 2022 Grothendieck conference.
Examples
If N is a normal subgroup of a group G, then there is a canonical surjective group homomorphism from G to the quotient group G/N, that sends an element g to the coset determined by g.
If I is an ideal of a ring R, then there is a canonical surjective ring homomorphism from R onto the quotient ring R/I, that sends an element r to its coset I+r.
If V is a vector space, then there is a canonical map from V to the second dual space of V, that sends a vector v to the linear functional fv defined by fv(λ) = λ(v).
If is a homomorphism between commutative rings, then S can be viewed as an algebra over R. The ring homomorphism f is then called the structure map (for the algebra structure). The corresponding map on the prime spectra is also called the structure map.
If E is a vector bundle over a topological space X, then the projection map from E to X is the structure map.
In topology, a canonical map is a function f mapping a set X → X/R (X modulo R), where R is an equivalence relation on X, that takes each x in X to the equivalence class [x] modulo R.
|
https://en.wikipedia.org/wiki/Biot%E2%80%93Tolstoy%E2%80%93Medwin%20diffraction%20model
|
In applied mathematics, the Biot–Tolstoy–Medwin (BTM) diffraction model describes edge diffraction. Unlike the uniform theory of diffraction (UTD), BTM does not make the high frequency assumption (in which edge lengths and distances from source and receiver are much larger than the wavelength). BTM sees use in acoustic simulations.
Impulse response
The impulse response according to BTM is given as follows:
The general expression for sound pressure is given by the convolution integral
where represents the source signal, and represents the impulse response at the receiver position. The BTM gives the latter in terms of
the source position in cylindrical coordinates where the -axis is considered to lie on the edge and is measured from one of the faces of the wedge.
the receiver position
the (outer) wedge angle and from this the wedge index
the speed of sound
as an integral over edge positions
where the summation is over the four possible choices of the two signs, and are the distances from the point to the source and receiver respectively, and is the Dirac delta function.
where
See also
Uniform theory of diffraction
Notes
|
https://en.wikipedia.org/wiki/List%20of%20conversion%20factors
|
This article gives a list of conversion factors for several physical quantities. A number of different units (some only of historical interest) are shown and expressed in terms of the corresponding SI unit.
Conversions between units in the metric system are defined by their prefixes (for example, 1 kilogram = 1000 grams, 1 milligram = 0.001 grams) and are thus not listed in this article. Exceptions are made if the unit is commonly known by another name (for example, 1 micron = 10−6 metre). Within each table, the units are listed alphabetically, and the SI units (base or derived) are highlighted.
The following quantities are considered: length, area, volume, plane angle, solid angle, mass, density, time, frequency, velocity, volumetric flow rate, acceleration, force, pressure (or mechanical stress), torque (or moment of force), energy, power (or heat flow rate), action, dynamic viscosity, kinematic viscosity, electric current, electric charge, electric dipole, electromotive force (or electric potential difference), electrical resistance, capacitance, magnetic flux, magnetic flux density, inductance, temperature, information entropy, luminous intensity, luminance, luminous flux, illuminance, radiation.
Length
Area
Volume
Plane angle
Solid angle
Mass
Notes:
See Weight for detail of mass/weight distinction and conversion.
Avoirdupois is a system of mass based on a pound of 16 ounces, while Troy weight is the system of mass where 12 troy ounces equals one troy pound.
The symbol is used to denote standard gravity in order to avoid confusion with the (upright) g symbol for gram.
Density
Time
Frequency
Speed or velocity
A velocity consists of a speed combined with a direction; the speed part of the velocity takes units of speed.
Flow (volume)
Acceleration
Force
Pressure or mechanical stress
Torque or moment of force
Energy
Power or heat flow rate
Action
Dynamic viscosity
Kinematic viscosity
Electric current
Electric charge
Electric dipole
Elec
|
https://en.wikipedia.org/wiki/Thrifty%20phenotype
|
Thrifty phenotype refers to the correlation between low birth weight of neonates and the increased risk of developing metabolic syndromes later in life, including type 2 diabetes and cardiovascular diseases. Although early life undernutrition is thought to be the key driving factor to the hypothesis, other environmental factors have been explored for their role in susceptibility, such as physical inactivity. Genes may also play a role in susceptibility of these diseases, as they may make individuals predisposed to factors that lead to increased disease risk.
Historical overview
The term thrifty phenotype was first coined by Charles Nicholas Hales and David Barker in a study published in 1992. In their study, the authors reviewed the literature up to and addressed five central questions regarding role of different factors in type 2 diabetes on which they based their hypothesis. These questions included the following:
The role of beta cell deficiency in type 2 diabetes.
The extent to which beta cell deficiency contributes to insulin intolerance.
The role of major nutritional elements in fetal growth.
The role of abnormal amino acid supply in growth limited neonates.
The role of malnutrition in irreversibly defective beta cell growth.
From the review of the existing literature, they posited that poor nutritional status in fetal and early neonatal stages could hamper the development and proper functioning of the pancreatic beta cells by impacting structural features of islet anatomy, which could consequently make the individual more susceptible to the development of type 2 diabetes in later life. However, they did not exclude other causal factors such as obesity, ageing and physical inactivity as determining factors of type 2 diabetes.
In a later study, Barker et al. analyzed living patient data from Hertfordshire, UK, and found that men in their sixties having low birthweight (2.95 kg or less) were 10 times more likely to develop syndrome X (type 2 diabetes,
|
https://en.wikipedia.org/wiki/Warazan
|
was a system of record-keeping using knotted straw at the time of the Ryūkyū Kingdom. In the dialect of the Sakishima Islands it was known as barasan and on Okinawa Island as warazani or warazai. Formerly used in particular in relation to the "head tax", it is still to be found in connection with the annual , to record the amount of miki or sacred sake dedicated.
See also
Kaidā glyphs
Naha Tug-of-war
Quipu
|
https://en.wikipedia.org/wiki/Omega%20constant
|
The omega constant is a mathematical constant defined as the unique real number Ω that satisfies the equation Ωe^Ω = 1.
It is the value of W(1), where W is the Lambert W function. The name is derived from the alternate name for the Lambert W function, the omega function. The numerical value of Ω is given by
Ω = 0.5671432904097838729999686622…
Properties
Fixed point representation
The defining identity can be expressed, for example, as
or
as well as
Computation
One can calculate Ω iteratively, by starting with an initial guess Ω₀ and considering the sequence Ω_{n+1} = e^{−Ω_n}.
This sequence will converge to Ω as n approaches infinity. This is because Ω is an attractive fixed point of the function e^{−x}.
It is much more efficient to use the iteration Ω_{n+1} = (1 + Ω_n) / (1 + e^{Ω_n}),
because the function f(x) = (1 + x) / (1 + e^x),
in addition to having the same fixed point, also has a derivative that vanishes there. This guarantees quadratic convergence; that is, the number of correct digits is roughly doubled with each iteration.
Using Halley's method, can be approximated with cubic convergence (the number of correct digits is roughly tripled with each iteration): (see also ).
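Both iterations can be sketched in a few lines; the initial guess and iteration counts below are arbitrary choices, and Halley's method, mentioned above, would converge even faster.

# Sketch of the two iterations discussed above for the omega constant.
import math

def omega_simple(iterations: int, x: float = 0.5) -> float:
    """Linearly convergent fixed-point iteration x_{n+1} = exp(-x_n)."""
    for _ in range(iterations):
        x = math.exp(-x)
    return x

def omega_fast(iterations: int, x: float = 0.5) -> float:
    """Quadratically convergent iteration x_{n+1} = (1 + x_n) / (1 + exp(x_n))."""
    for _ in range(iterations):
        x = (1.0 + x) / (1.0 + math.exp(x))
    return x

print(omega_simple(50))   # slowly approaches 0.5671432904...
print(omega_fast(6))      # already accurate to double precision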
Integral representations
An identity due to Victor Adamchik is given by the relationship
Other relations due to Mező
and Kalugin-Jeffrey-Corless
are:
The latter two identities can be extended to other values of the function (see also ).
Transcendence
The constant Ω is transcendental. This can be seen as a direct consequence of the Lindemann–Weierstrass theorem. For a contradiction, suppose that Ω is algebraic. By the theorem, e^Ω is then transcendental; but e^Ω = 1/Ω, which would be algebraic, a contradiction. Therefore, Ω must be transcendental.
|
https://en.wikipedia.org/wiki/Call%20setup
|
In telecommunication, call setup is the process of establishing a virtual circuit across a telecommunications network. Call setup is typically accomplished using a signaling protocol.
The term call set-up time has the following meanings:
The overall length of time required to establish a circuit-switched call between users.
For data communication, the overall length of time required to establish a circuit-switched call between terminals; i.e., the time from the initiation of a call request to the beginning of the call message.
Note: Call set-up time is the summation of: (a) call request time—the time from initiation of a calling signal to the delivery to the caller of a proceed-to-select signal; (b) selection time—the time from the delivery of the proceed-to-select signal until all the selection signals have been transmitted; and (c) post selection time—the time from the end of the transmission of the selection signals until the delivery of the call-connected signal to the originating terminal.
Success rate
In telecommunications, the call setup success rate (CSSR) is the fraction of the attempts to make a call that result in a connection to the dialled number (due to various reasons not all call attempts end with a connection to the dialled number). This fraction is usually measured as a percentage of all call attempts made.
In telecommunications a call attempt invokes a call setup procedure, which, if successful, results in a connected call. A call setup procedure may fail due to a number of technical reasons. Such calls are classified as failed call attempts. In many practical cases, this definition needs to be further expanded with a number of detailed specifications describing which calls exactly are counted as successfully set up and which not. This is determined to a great degree by the stage of the call setup procedure at which a call is counted as connected. In modern communications systems, such as cellular (mobile) networks, the call setup procedu
|
https://en.wikipedia.org/wiki/List%20of%20prime%20numbers
|
This is a list of articles about prime numbers. A prime number (or prime) is a natural number greater than 1 that has no positive divisors other than 1 and itself. By Euclid's theorem, there are an infinite number of prime numbers. Subsets of the prime numbers may be generated with various formulas for primes. The first 1000 primes are listed below, followed by lists of notable types of prime numbers in alphabetical order, giving their respective first terms. 1 is neither prime nor composite.
The first 1000 prime numbers
The following table lists the first 1000 primes, with 20 columns of consecutive primes in each of the 50 rows.
.
The Goldbach conjecture verification project reports that it has computed all primes below 4×10^18. That means 95,676,260,903,887,607 primes (nearly 10^17), but they were not stored. There are known formulae to evaluate the prime-counting function (the number of primes below a given value) faster than computing the primes. This has been used to compute that there are 1,925,320,391,606,803,968,923 primes (roughly 2^71) below 10^23. A different computation found that there are 18,435,599,767,349,200,867,866 primes (roughly 2^74) below 10^24, if the Riemann hypothesis is true.
Lists of primes by type
Below are listed the first prime numbers of many named forms and types. More details are in the article for the name. n is a natural number (including 0) in the definitions.
Balanced primes
Primes with equal-sized prime gaps above and below them, so that they are equal to the arithmetic mean of the nearest primes above and below.
5, 53, 157, 173, 211, 257, 263, 373, 563, 593, 607, 653, 733, 947, 977, 1103, 1123, 1187, 1223, 1367, 1511, 1747, 1753, 1907, 2287, 2417, 2677, 2903, 2963, 3307, 3313, 3637, 3733, 4013, 4409, 4457, 4597, 4657, 4691, 4993, 5107, 5113, 5303, 5387, 5393 ().
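A short sketch, assuming SymPy's nextprime/prevprime helpers, generates the start of this sequence directly from the definition above; the search limit is an arbitrary choice.

# Generate balanced primes: primes equal to the mean of their prime neighbours.
from sympy import nextprime, prevprime

def balanced_primes(limit: int):
    p = 3   # 2 has no prime below it, so start at 3
    while p < limit:
        if 2 * p == prevprime(p) + nextprime(p):   # p is the arithmetic mean of its neighbours
            yield p
        p = nextprime(p)

print(list(balanced_primes(1200)))   # 5, 53, 157, 173, 211, ...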
Bell primes
Primes that are the number of partitions of a set with n members.
2, 5, 877, 27644437, 35742549198872617291353508656626642567, 3593340859686228310419601885980
|
https://en.wikipedia.org/wiki/%CE%94P
|
ΔP (Delta P) is a mathematical term symbolizing a change (Δ) in pressure (P).
Uses
Young–Laplace equation
Darcy–Weisbach equation
Given that the head loss hf expresses the pressure loss Δp as the height of a column of fluid, the two are related by Δp = ρ·g·hf,
where ρ is the density of the fluid and g is the local acceleration due to gravity. The Darcy–Weisbach equation can then also be written in terms of pressure loss: Δp = fD·(L/D)·(ρ·v²/2), where fD is the Darcy friction factor, L the pipe length, D the hydraulic diameter, and v the mean flow velocity.
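A minimal numeric sketch of the pressure-loss form given above; the pipe dimensions, friction factor, and fluid properties below are illustrative assumptions.

# Darcy-Weisbach pressure loss: dp = f_D * (L/D) * (rho * v**2 / 2), in pascals.
def darcy_weisbach_dp(f_d: float, length_m: float, diameter_m: float,
                      density_kg_m3: float, velocity_m_s: float) -> float:
    return f_d * (length_m / diameter_m) * density_kg_m3 * velocity_m_s**2 / 2

# Example: water (1000 kg/m^3) at 2 m/s through 10 m of 50 mm pipe, f_D = 0.02.
print(darcy_weisbach_dp(0.02, 10.0, 0.05, 1000.0, 2.0), "Pa")  # 8000 Pa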
Lung compliance
In general, compliance is defined as the change in volume (ΔV) per associated change in pressure (ΔP), that is, C = ΔV/ΔP.
During mechanical ventilation, compliance is influenced by three main physiologic factors:
Lung compliance
Chest wall compliance
Airway resistance
Lung compliance is influenced by a variety of primary abnormalities of lung parenchyma, both chronic and acute. Airway resistance is typically increased by bronchospasm and airway secretions. Chest wall compliance can be decreased by fixed abnormalities (e.g. kyphoscoliosis, morbid obesity) or more variable problems driven by patient agitation while intubated.
Calculating compliance on minute volume (VE): ΔV is always defined by tidal volume (VT), but ΔP is different for the measurement of dynamic vs. static compliance.
Dynamic compliance (Cdyn)
Cdyn = VT / (PIP − PEEP), where PIP = peak inspiratory pressure (the maximum pressure during inspiration) and PEEP = positive end expiratory pressure. Alterations in airway resistance, lung compliance and chest wall compliance influence Cdyn.
Static compliance (Cstat)
Cstat = VT / (Pplat − PEEP), where Pplat = plateau pressure. Pplat is measured at the end of inhalation and prior to exhalation using an inspiratory hold maneuver. During this maneuver, airflow is transiently (~0.5 sec) discontinued, which eliminates the effects of airway resistance. Pplat is never greater than PIP and is typically no more than 3–5 cmH2O lower than PIP when airway resistance is normal.
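The two calculations can be sketched as follows; the tidal volume and pressure readings are made-up illustrative values, not clinical recommendations.

# Sketch of the compliance calculations described above (Cdyn and Cstat).
def dynamic_compliance(vt_ml: float, pip_cmh2o: float, peep_cmh2o: float) -> float:
    """Cdyn = VT / (PIP - PEEP), in mL/cmH2O."""
    return vt_ml / (pip_cmh2o - peep_cmh2o)

def static_compliance(vt_ml: float, pplat_cmh2o: float, peep_cmh2o: float) -> float:
    """Cstat = VT / (Pplat - PEEP), in mL/cmH2O."""
    return vt_ml / (pplat_cmh2o - peep_cmh2o)

vt, pip, pplat, peep = 500.0, 30.0, 25.0, 5.0
print(dynamic_compliance(vt, pip, peep))   # 20.0 mL/cmH2O
print(static_compliance(vt, pplat, peep))  # 25.0 mL/cmH2O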
See also
Pressure measurement
Pressure drop
Head loss
|
https://en.wikipedia.org/wiki/List%20of%20graph%20theory%20topics
|
This is a list of graph theory topics, by Wikipedia page.
See glossary of graph theory terms for basic terminology
Examples and types of graphs
Graph coloring
Paths and cycles
Trees
Terminology
Node
Child node
Parent node
Leaf node
Root node
Root (graph theory)
Operations
Tree structure
Tree data structure
Cayley's formula
Kőnig's lemma
Tree (set theory) (need not be a tree in the graph-theory sense, because there may not be a unique path between two vertices)
Tree (descriptive set theory)
Euler tour technique
Graph limits
Graphon
Graphs in logic
Conceptual graph
Entitative graph
Existential graph
Laws of Form
Logical graph
Mazes and labyrinths
Labyrinth
Maze
Maze generation algorithm
Algorithms
Ant colony algorithm
Breadth-first search
Depth-first search
Depth-limited search
FKT algorithm
Flood fill
Graph exploration algorithm
Matching (graph theory)
Max flow min cut theorem
Maximum-cardinality search
Shortest path
Dijkstra's algorithm
Bellman–Ford algorithm
A* algorithm
Floyd–Warshall algorithm
Topological sorting
Pre-topological order
Other topics
Networks, network theory
See list of network theory topics
Hypergraphs
Helly family
Intersection (Line) Graphs of hypergraphs
|
https://en.wikipedia.org/wiki/Mathematical%20maturity
|
In mathematics, mathematical maturity is an informal term often used to refer to the quality of having a general understanding and mastery of the way mathematicians operate and communicate. It pertains to a mixture of mathematical experience and insight that cannot be directly taught. Instead, it comes from repeated exposure to mathematical concepts. It is a gauge of mathematics students' erudition in mathematical structures and methods, and can overlap with other related concepts such as mathematical intuition and mathematical competence. The topic is occasionally also addressed in literature in its own right.
Definitions
Mathematical maturity has been defined in several different ways by various authors, and is often tied to other related concepts such as comfort and competence with mathematics, mathematical intuition and mathematical beliefs.
One definition has been given as follows:
A broader list of characteristics of mathematical maturity has been given as follows:
Finally, mathematical maturity has also been defined as an ability to do the following:
It is sometimes said that the development of mathematical maturity requires a deep reflection on the subject matter for a prolonged period of time, along with a guiding spirit which encourages exploration.
Progression
Mathematician Terence Tao has proposed a three-stage model of mathematics education that can be interpreted as a general framework of mathematical maturity progression. The stages are summarized in the following table:
See also
Logical intuition
Four stages of competence
|
https://en.wikipedia.org/wiki/Injury
|
Injury is physiological damage to the living tissue of any organism, whether in humans, in other animals, or in plants. Injuries can be caused in many ways, such as mechanically with penetration by sharp objects such as teeth or with blunt objects, by heat or cold, or by venoms and biotoxins. Injury prompts an inflammatory response in many taxa of animals; this prompts wound healing. In both plants and animals, substances are often released to help to occlude the wound, limiting loss of fluids and the entry of pathogens such as bacteria. Many organisms secrete antimicrobial chemicals which limit wound infection; in addition, animals have a variety of immune responses for the same purpose. Both plants and animals have regrowth mechanisms which may result in complete or partial healing over the injury.
Taxonomic range
Animals
Injury in animals is sometimes defined as mechanical damage to anatomical structure, but it has a wider connotation of physical damage with any cause, including drowning, burns, and poisoning. Such damage may result from attempted predation, territorial fights, falls, and abiotic factors.
Injury prompts an inflammatory response in animals of many different phyla; this prompts coagulation of the blood or body fluid, followed by wound healing, which may be rapid, as in the cnidaria. Arthropods are able to repair injuries to the cuticle that forms their exoskeleton to some extent.
Animals in several phyla, including annelids, arthropods, cnidaria, molluscs, nematodes, and vertebrates are able to produce antimicrobial peptides to fight off infection following an injury.
Humans
Injury in humans has been studied extensively for its importance in medicine. Much of medical practice including emergency medicine and pain management is dedicated to the treatment of injuries. The World Health Organization has developed a classification of injuries in humans by categories including mechanism, objects/substances producing injury, place of occurrence,
|
https://en.wikipedia.org/wiki/Mining%20software%20repositories
|
Within software engineering, the mining software repositories (MSR) field analyzes the rich data available in software repositories, such as version control repositories, mailing list archives, bug tracking systems, issue tracking systems, etc. to uncover interesting and actionable information about software systems, projects and software engineering.
Definition
Herzig and Zeller define "mining software archives" as a process to "obtain lots of initial evidence" by extracting data from software repositories. Further, they define "data sources" as product-based artifacts like source code, requirement artifacts or version archives, and claim that these sources are unbiased, but noisy and incomplete.
Techniques
Coupled Change Analysis
The idea in coupled change analysis is that developers change code entities (e.g. files) together frequently for fixing defects or introducing new features. These couplings between the entities are often not made explicit in the code or other documents. Especially developers new on the project do not know which entities need to be changed together. Coupled change analysis aims to extract the coupling out of the version control system for a project. By the commits and the timing of changes, we might be able to identify which entities frequently change together. This information could then be presented to developers about to change one of the entities to support them in their further changes.
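A rough sketch of this idea, assuming a local Git repository: count how often each pair of files appears in the same commit. The log parsing, delimiter string, and any thresholds are my own choices.

# Count co-changed file pairs from the version control history.
import subprocess
from collections import Counter
from itertools import combinations

def co_change_counts(repo_path: str) -> Counter:
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:@@commit@@"],
        capture_output=True, text=True, check=True,
    ).stdout
    pair_counts: Counter = Counter()
    for commit in log.split("@@commit@@"):
        files = sorted({line for line in commit.splitlines() if line.strip()})
        for pair in combinations(files, 2):      # every pair of files changed together
            pair_counts[pair] += 1
    return pair_counts

# Example usage: print the ten most frequently co-changed file pairs.
# for pair, count in co_change_counts(".").most_common(10):
#     print(count, pair)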
Commit Analysis
There are many different kinds of commits in version control systems, e.g. bug fix commits, new feature commits, documentation commits, etc. To take data-driven decisions based on past commits, one needs to select subsets of commits that meet a given criterion. That can be done based on the commit message.
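A hedged example of such a selection: classify a commit as a bug fix when its message matches a simple keyword pattern. The keyword list is an illustrative assumption, not an established standard.

# Select bug-fix commits by commit message.
import re

BUGFIX_PATTERN = re.compile(r"\b(fix(es|ed)?|bug|defect|patch)\b", re.IGNORECASE)

def is_bugfix(commit_message: str) -> bool:
    return bool(BUGFIX_PATTERN.search(commit_message))

print(is_bugfix("Fix null pointer dereference in parser"))  # True
print(is_bugfix("Add dark mode to settings page"))          # False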
Documentation generation
It is possible to generate useful documentation from mining software repositories. For instance, Jadeite computes usage statistics and helps newcomers to quickly identify commonly used classes.
Data
|
https://en.wikipedia.org/wiki/Food%20grading
|
Food grading involves the inspection, assessment and sorting of various foods regarding quality, freshness, legal conformity and market value. Food grading is often done by hand, in which foods are assessed and sorted. Machinery is also used to grade foods, and may involve sorting products by size, shape and quality. For example, machinery can be used to remove spoiled food from fresh product.
By food type
Beef
Beef grading in the United States is performed by the United States Department of Agriculture's (USDA) Agricultural and Marketing Service. There are eight beef quality grades, with U.S. Prime being the highest grade and U.S. Canner being the lowest grade. Beef grading is a complex process.
Beer
In beer grading, the letter "X" is used on some beers, and was traditionally a mark of beer strength, with the more Xs the greater the strength. Some sources suggest that the origin of the mark was in the breweries of medieval monasteries. Another plausible explanation is contained in a treatise entitled "The Art of Brewing" published in London in 1829. It says: "The duties on ale and beer, which were first imposed in 1643... at a certain period, in distinguishing between small beer and strong, all ale or beer, sold at or above ten shillings per barrel, was reckoned to be strong and was, therefore, subjected to a higher duty. The cask which contained this strong beer was then first marked with an X signifying ten; and hence the present quack-like denominations of XX (double X) and XXX (treble X) on the casks and accounts of the strong-ale brewers".
In mid-19th century England, the use of "X" and other letters had evolved into a standardised grading system for the strength of beer. Today, it is used as a trade mark by a number of brewers in the United Kingdom, the Commonwealth and the United States.
European Bitterness Units scale, often abbreviated as EBU, is a scale for measuring the perceived bitterness of beer, with lower values being generally "less bitter"
|
https://en.wikipedia.org/wiki/Impulse%20generator
|
An impulse generator is an electrical apparatus which produces very short high-voltage or high-current surges. Such devices can be classified into two types: impulse voltage generators and impulse current generators. High impulse voltages are used to test the strength of electric power equipment against lightning and switching surges. Also, steep-front impulse voltages are sometimes used in nuclear physics experiments. High impulse currents are needed not only for tests on equipment such as lightning arresters and fuses but also for many other technical applications such as lasers, thermonuclear fusion, and plasma devices.
Jedlik's tubular voltage generator
In 1863 Hungarian physicist Ányos Jedlik discovered the possibility of voltage multiplication and in 1868 demonstrated it with a "tubular voltage generator", which was successfully displayed at the Vienna World Exposition in 1873. It was an early form of the impulse generators now applied in nuclear research.
The jury of the World Exhibition of 1873 in Vienna awarded his voltage multiplying condenser of cascade connection with prize "For Development". Through this condenser, Jedlik framed the principle of surge generator of cascaded connection. (The Cascade connection was another important invention of Ányos Jedlik.)
Marx generator
One form is the Marx generator, named after Erwin Otto Marx, who first proposed it in 1923. This consists of multiple capacitors that are first charged in parallel through charging resistors as by a high-voltage, direct-current source and then connected in series and discharged through a test object by a simultaneous spark-over of the spark gaps. The impulse current generator comprises many capacitors that are also charged in parallel by a high-voltage, low-current, direct-current source, but it is discharged in parallel through resistances, inductances, and a test object by a spark gap.
See also
Pulsed power
Pulse-forming network
Marx generator
Cockcroft–Walton generator
|
https://en.wikipedia.org/wiki/Lanstar
|
LANStar (Lanstar) was a 2.56 Mbit/s twisted-pair local area network created by Northern Telecom in the mid '80s. Because NT's PBX systems already owned a building's twisted pair plant (for voice), it made sense to use the same wiring for data as well. LANStar was originally to be a component of NT's PTE (Packet Transport Equipment) product, which was a sort of minicomputer arrangement with dumb (VT220) terminals on the desktop and the CPUs in an intelligent rack (the PTE) in the PBX room (alongside the PBX). The PTE was to have several basic office automation apps: word processing, database, etc. Just as NT was doing Beta testing of the PTE, PCs and PC networking took off, effectively killing the PTE before it completed Beta.
Given the investment already sunk into the product, NT attempted to repackage the PTE as a small (dorm-room-refrigerator sized) cabinet (the PTE-S, 'S' for 'small') containing only LANStar controllers and supporting up to 112 nodes. LANStar had cards for the PC/XT, PC/AT and MacII and supported NetBIOS, Banyan, Novell, and AppleTalk.
LANStar was discontinued in 1990.
The name "LANStar" was coined by NT Product Marketing manager Paul Masters: he heard of AT&T's proposed StarLAN product and created a similar name in order to piggyback on all the publicity surrounding AT&T's product.
See also
Meridian Mail - The voicemail system that also used the PTE
|
https://en.wikipedia.org/wiki/Game%20without%20a%20value
|
In the mathematical theory of games, in particular the study of zero-sum continuous games, not every game has a minimax value. This is the expected payoff to one of the players when both play a perfect strategy (that is, each chooses their move according to a particular probability density function).
This article gives an example of a zero-sum game that has no value. It is due to Sion and Wolfe.
Zero-sum games with a finite number of pure strategies are known to have a minimax value (originally proved by John von Neumann) but this is not necessarily the case if the game has an infinite set of strategies. There follows a simple example of a game with no minimax value.
The existence of such zero-sum games is interesting because many of the results of game theory become inapplicable if there is no minimax value.
The game
Players I and II choose numbers and respectively, between 0 and 1. The payoff to player I is
That is, after the choices are made, player II pays to player I (so the game is zero-sum).
If the pair is interpreted as a point on the unit square, the figure shows the payoff to player I. Player I may adopt a mixed strategy, choosing a number according to a probability density function (pdf) , and similarly player II chooses from a pdf . Player I seeks to maximize the payoff , player II to minimize the payoff, and each player is aware of the other's objective.
Game value
Sion and Wolfe show that
but
These are the maximal and minimal expectations of the game's value of player I and II respectively.
The and respectively take the supremum and infimum over pdf's on the unit interval (actually Borel probability measures). These represent player I and player II's (mixed) strategies. Thus, player I can assure himself of a payoff of at least 3/7 if he knows player II's strategy, and player II can hold the payoff down to 1/3 if he knows player I's strategy.
There is no epsilon equilibrium for sufficiently small , specifically, if . Dasgupta and Maskin assert that the game values are
|
https://en.wikipedia.org/wiki/Packaging%20gas
|
A packaging gas is used to pack sensitive materials such as food into a modified atmosphere environment. The gas used is usually inert, or of a nature that protects the integrity of the packaged goods, inhibiting unwanted chemical reactions such as food spoilage or oxidation. Some may also serve as a propellant for aerosol sprays like cans of whipped cream. For packaging food, the use of various gases is approved by regulatory organisations.
Their E numbers are included in the following lists in parentheses.
Inert gases
These gas types do not cause a chemical change to the substance that they protect.
argon (E938), used for canned products
helium (E939), used for canned products
nitrogen (E941), also propellant
carbon dioxide (E290), also propellant
Propellant gases
Specific kinds of packaging gases are aerosol propellants. These process and assist the ejection of the product from its container.
chlorofluorocarbons known as CFC (E940 and E945), now rarely used because of the damage that they do to the ozone layer:
dichlorodifluoromethane (E940)
chloropentafluoroethane (E945)
nitrous oxide (E942), used for aerosol whipped cream canisters (see Nitrous oxide: Aerosol propellant)
octafluorocyclobutane (E946)
Reactive gases
These must be used with caution as they may have adverse effects when exposed to certain chemicals. They will cause oxidisation or contamination to certain types of materials.
oxygen (E948), used e.g. for packaging of vegetables
hydrogen (E949)
Volatile gases
Hydrocarbon gases approved for use with food need to be used with extreme caution as they are highly combustible; when combined with oxygen they burn very rapidly and may cause explosions in confined spaces. Special precautions must be taken when transporting these gases.
butane (E943a)
isobutane (E943b)
propane (E944)
See also
Shielding gas
|
https://en.wikipedia.org/wiki/Power%2C%20root-power%2C%20and%20field%20quantities
|
A power quantity is a power or a quantity directly proportional to power, e.g., energy density, acoustic intensity, and luminous intensity. Energy quantities may also be labelled as power quantities in this context.
A root-power quantity is a quantity such as voltage, current, sound pressure, electric field strength, speed, or charge density, the square of which, in linear systems, is proportional to power. The term root-power quantity refers to the square root that relates these quantities to power. The term was introduced in ; it replaces and deprecates the term field quantity.
Implications
It is essential to know which category a measurement belongs to when using decibels (dB) for comparing the levels of such quantities. A change of one bel in the level corresponds to a 10× change in power, so when comparing power quantities x and y, the difference is defined to be 10×log10(y/x) decibels. With root-power quantities, however, the difference is defined as 20×log10(y/x) dB.
In the analysis of signals and systems using sinusoids, field quantities and root-power quantities may be complex-valued, as in the propagation constant.
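The two conventions can be illustrated with a short sketch; the example values are arbitrary, and the final comparison assumes power proportional to the square of the root-power quantity (e.g. voltage into a fixed resistance).

# Level differences in decibels for power vs. root-power quantities.
import math

def db_power(p_ref: float, p: float) -> float:
    """Level difference for power quantities: 10 * log10 of the ratio."""
    return 10.0 * math.log10(p / p_ref)

def db_root_power(x_ref: float, x: float) -> float:
    """Level difference for root-power quantities (e.g. voltage): 20 * log10 of the ratio."""
    return 20.0 * math.log10(x / x_ref)

# Doubling a voltage quadruples the power into a fixed resistance,
# so both conventions report the same level change (~6.02 dB).
print(db_root_power(1.0, 2.0))   # ~6.02 dB
print(db_power(1.0, 4.0))        # ~6.02 dB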
"Root-power quantity" vs. "field quantity"
In justifying the deprecation of the term "field quantity" and instead using "root-power quantity" in the context of levels, ISO 80000 draws attention to the conflicting use of the former term to mean a quantity that depends on the position, which in physics is called a field. Such a field is often called a field quantity in the literature, but is called a field here for clarity. Several types of field (such as the electromagnetic field) meet the definition of a root-power quantity, whereas others (such as the Poynting vector and temperature) do not. Conversely, not every root-power quantity is a field (such as the voltage on a loudspeaker).
See also
Level (logarithmic quantity)
Fresnel reflection field and power equations
Sound level, defined for each of several quantities associated with
|
https://en.wikipedia.org/wiki/Thermal%20conductance%20and%20resistance
|
In heat transfer, thermal engineering, and thermodynamics, thermal conductance and thermal resistance are fundamental concepts that describe the ability of materials or systems to conduct heat and the opposition they offer to the heat current. The ability to manipulate these properties allows engineers to control temperature gradients, prevent thermal shock, and maximize the efficiency of thermal systems. Furthermore, these principles find applications in a multitude of fields, including materials science, mechanical engineering, electronics, and energy management. Knowledge of these principles is crucial in various scientific, engineering, and everyday applications, from designing efficient temperature control, thermal insulation, and thermal management in industrial processes to optimizing the performance of electronic devices.
Thermal conductance (C) measures the ability of a material or system to conduct heat. It provides insights into the ease with which heat can pass through a particular system. It is measured in units of watts per kelvin (W/K). It is essential in the design of heat exchangers, thermally efficient materials, and various engineering systems where the controlled movement of heat is vital.
Conversely, thermal resistance (R) measures the opposition to the heat current in a material or system. It is measured in units of kelvins per watt (K/W) and indicates how much temperature difference (in kelvins) is required to transfer a unit of heat current (in watts) through the material or object. It is essential to optimize the building insulation, evaluate the efficiency of electronic devices, and enhance the performance of heat sinks in various applications.
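For heat conduction through a uniform flat slab, conductance and resistance follow from the material's thermal conductivity k, the cross-sectional area A, and the thickness L as C = kA/L and R = L/(kA) = 1/C. The sketch below uses illustrative (assumed) dimensions and approximate textbook conductivities to show the contrast between a conductor and an insulator:

```c
#include <stdio.h>

/* Thermal conductance of a uniform flat slab: C = k * A / L, in W/K. */
double conductance(double k, double area, double thickness) {
    return k * area / thickness;
}

int main(void) {
    /* Illustrative values assumed for this example:
       a 10 cm x 10 cm slab, 2 cm thick. */
    double area = 0.10 * 0.10;   /* m^2 */
    double thickness = 0.02;     /* m   */

    double k_copper = 400.0;     /* W/(m K), approximate */
    double k_rubber = 0.15;      /* W/(m K), approximate */

    double c_cu = conductance(k_copper, area, thickness);
    double c_ru = conductance(k_rubber, area, thickness);

    /* Thermal resistance is the reciprocal of conductance: R = 1 / C, in K/W. */
    printf("copper: C = %8.2f W/K, R = %10.6f K/W\n", c_cu, 1.0 / c_cu);
    printf("rubber: C = %8.2f W/K, R = %10.6f K/W\n", c_ru, 1.0 / c_ru);
    return 0;
}
```

The copper slab conducts heat readily (high C, low R), while the rubber slab of identical geometry offers a resistance several thousand times larger.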
Objects made of insulators like rubber tend to have very high resistance and low conductance, while objects made of conductors like metals tend to have very low resistance and high conductance. This relationship is quantified by resistivity or conductivity. However, the nature of a material is no
|
https://en.wikipedia.org/wiki/Apotome%20%28mathematics%29
|
In the historical study of mathematics, an apotome is a line segment formed from a longer line segment by breaking it into two parts, one of which is commensurable only in power to the whole; the other part is the apotome. In this definition, two line segments are said to be "commensurable only in power" when the ratio of their lengths is an irrational number but the ratio of their squared lengths is rational.
Translated into modern algebraic language, an apotome can be interpreted as a quadratic irrational number formed by subtracting one square root of a rational number from another.
This concept of the apotome appears in Euclid's Elements beginning in book X, where Euclid defines two special kinds of apotomes. In an apotome of the first kind, the whole is rational, while in an apotome of the second kind, the part subtracted from it is rational; both kinds of apotomes also satisfy an additional condition. Euclid Proposition XIII.6 states that, if a rational line segment is split into two pieces in the golden ratio, then both pieces may be represented as apotomes.
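For instance, as an illustrative computation in modern notation (not Euclid's own), splitting a segment of rational length 1 in the golden ratio gives the two pieces shown below; each is a difference of square roots of rational numbers, and hence an apotome in the algebraic sense above.

```latex
% Unit segment cut in the golden ratio: both pieces are apotomes.
\[
  1 \;=\; \frac{\sqrt{5}-1}{2} \;+\; \frac{3-\sqrt{5}}{2},
  \qquad
  \frac{\sqrt{5}-1}{2} \;=\; \sqrt{\tfrac{5}{4}} - \sqrt{\tfrac{1}{4}},
  \qquad
  \frac{3-\sqrt{5}}{2} \;=\; \sqrt{\tfrac{9}{4}} - \sqrt{\tfrac{5}{4}}.
\]
```

In each difference the two roots are commensurable only in power: their ratio is irrational, but the ratio of their squares is rational.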
|
https://en.wikipedia.org/wiki/STREAMS
|
In computer networking, STREAMS is the native framework in Unix System V for implementing character device drivers, network protocols, and inter-process communication. In this framework, a stream is a chain of coroutines that pass messages between a program and a device driver (or between a pair of programs). STREAMS originated in Version 8 Research Unix, as Streams (not capitalized).
STREAMS's design is a modular architecture for implementing full-duplex I/O between the kernel and device drivers. Its most frequent uses have been in developing terminal I/O (line discipline) and networking subsystems. In System V Release 4, the entire terminal interface was reimplemented using STREAMS. An important concept in STREAMS is the ability to push drivers (custom code modules which can modify the functionality of a network interface or other device) together to form a stack. Several of these drivers can be chained together in order.
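As a rough user-space sketch (assuming an XSI/STREAMS-capable system; the device path below is hypothetical), a program opens a stream, pushes a module onto it with the I_PUSH ioctl, and sends a message downstream with putmsg():

```c
#include <fcntl.h>
#include <stdio.h>
#include <stropts.h>   /* I_PUSH, putmsg(), struct strbuf (XSI STREAMS) */
#include <unistd.h>

int main(void) {
    int fd = open("/dev/example_stream", O_RDWR);  /* hypothetical STREAMS device */
    if (fd < 0) { perror("open"); return 1; }

    /* Push a module onto the stream; pushed modules stack above the driver
       and can transform messages flowing in both directions. */
    if (ioctl(fd, I_PUSH, "ldterm") < 0)
        perror("I_PUSH");

    /* Send a data-only message downstream with putmsg()
       (maxlen is ignored on output; it is used by getmsg()). */
    char text[] = "hello, stream";
    struct strbuf data = { .maxlen = 0, .len = (int)sizeof text, .buf = text };
    if (putmsg(fd, NULL, &data, 0) < 0)   /* NULL: no control part */
        perror("putmsg");

    close(fd);
    return 0;
}
```

A receiving program would read the same message with getmsg(), or wait for it with poll(), mirroring the send/recv/select pattern mentioned below.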
History
STREAMS was based on the Streams I/O subsystem introduced in the Eighth Edition Research Unix (V8) by Dennis Ritchie, where it was used for the terminal I/O subsystem and the Internet protocol suite. This version, not yet called STREAMS in capitals, fit the new functionality under the existing device I/O system calls (open, close, read, write, and ioctl), and its application was limited to terminal I/O and protocols providing pipe-like I/O semantics.
This I/O system was ported to System V Release 3 by Robert Israel, Gil McGrath, Dave Olander, Her-Daw Che, and Maury Bach as part of a wider framework intended to support a variety of transport protocols, including TCP, ISO Class 4 transport, SNA LU 6.2, and the AT&T NPACK protocol (used in RFS). It was first released with the Network Support Utilities (NSU) package of UNIX System V Release 3. This port added the putmsg, getmsg, and poll system calls, which are nearly equivalent in purpose to the send, recv, and select calls from Berkeley sockets. The putmsg and getmsg system calls were orig
|
https://en.wikipedia.org/wiki/Ruler
|
A ruler, sometimes called a rule, scale or a line gauge, is an instrument used to make length measurements, whereby a user estimates a length by reading from a series of markings called "rules" along an edge of the device. Commonly the instrument is rigid and the edge itself is a straightedge ("ruled straightedge"), which additionally allows one to draw straight lines. Some rulers, such as cloth or paper tape measures, are non-rigid. Specialty rulers exist that have flexible edges that retain a chosen shape; these find use in sewing, arts, and crafts.
Rulers have been used since ancient times. They are commonly made from metal, wood, fabric, paper, and plastic. They are important tools in the design and construction of buildings. Their ability to quickly and easily measure lengths makes them important in the textile industry and in the retail trade, where lengths of string, fabric, and paper goods can be cut to size. Children learn the basic use of rulers at the elementary school level, and they are often part of a student's school supplies. At the high school level rulers are often used as straightedges for geometric constructions in Euclidean geometry. Rulers are ubiquitous in the engineering and construction industries, often in the form of a tape measure, and are used for making and reading technical drawings. Since much technical work is now done on computer, many software programs implement virtual rulers to help the user estimate virtual distances.
Variants
Rulers have long been made from different materials and in multiple sizes. Historically they were mainly wooden, but plastics have also been used since they were invented; they can be molded with length markings instead of being scribed. Metal is used for more durable rulers for use in the workshop; sometimes a metal edge is embedded into a wooden desk ruler to preserve the edge when used for straight-line cutting. A length of about 30 cm (12 in) is useful for a ruler to be kept on a desk to help in drawing. Shorter rulers
|
https://en.wikipedia.org/wiki/Particular%20values%20of%20the%20Riemann%20zeta%20function
|
In mathematics, the Riemann zeta function is a function in complex analysis, which is also important in number theory. It is often denoted ζ(s) and is named after the mathematician Bernhard Riemann. When the argument s is a real number greater than one, the zeta function satisfies the equation
\[ \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} . \]
It can therefore provide the sum of various convergent infinite series, such as ζ(2) = 1/1² + 1/2² + 1/3² + ⋯ = π²/6. Explicit or numerically efficient formulae exist for ζ(s) at integer arguments, all of which have real values, including this example. This article lists these formulae, together with tables of values. It also includes derivatives and some series composed of the zeta function at integer arguments.
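As a quick numerical check of the series for real arguments greater than one (a small sketch; for s = 2 the partial sums converge only slowly, with error of order 1/N):

```c
#include <math.h>
#include <stdio.h>

/* Partial sum of the defining series: sum_{n=1}^{N} 1/n^s (valid for s > 1). */
double zeta_partial(double s, long terms) {
    double sum = 0.0;
    for (long n = 1; n <= terms; ++n)
        sum += pow((double)n, -s);
    return sum;
}

int main(void) {
    double pi = acos(-1.0);
    /* Compare slowly converging partial sums against the closed form pi^2/6. */
    printf("exact zeta(2) = %.10f\n", pi * pi / 6.0);
    for (long n = 1000; n <= 1000000; n *= 10)
        printf("N = %7ld partial sum = %.10f\n", n, zeta_partial(2.0, n));
    return 0;
}
```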
The same equation also holds when s is a complex number whose real part is greater than one, ensuring that the infinite sum still converges. The zeta function can then be extended to the whole of the complex plane by analytic continuation, except for a simple pole at s = 1. The complex derivative exists in this more general region, making the zeta function a meromorphic function. The above equation no longer applies for these extended values of s, for which the corresponding summation would diverge. For example, the full zeta function exists at s = 0 (and is therefore finite there), but the corresponding series would be 1 + 1 + 1 + ⋯, whose partial sums would grow indefinitely large.
The zeta function values listed below include function values at the negative even numbers (s = −2, −4, −6, …), for which ζ(s) = 0 and which make up the so-called trivial zeros. The Riemann zeta function article includes a colour plot illustrating how the function varies over a continuous rectangular region of the complex plane. The successful characterisation of its non-trivial zeros in the wider plane is important in number theory, because of the Riemann hypothesis.
The Riemann zeta function at 0 and 1
At zero, one has ζ(0) = −1/2.
At 1 there is a pole, so ζ(1) is not finite, but the limits approaching 1 from the left and from the right along the real axis are
\[ \lim_{s\to 1^{-}} \zeta(s) = -\infty, \qquad \lim_{s\to 1^{+}} \zeta(s) = +\infty. \]
Since it is a pole of first order, it has a complex residue
\[ \lim_{s\to 1} (s-1)\,\zeta(s) = 1. \]
Positiv
|
https://en.wikipedia.org/wiki/Routing%20domain
|
In computer networking, a routing domain is a collection of networked systems that operate common routing protocols and are under the control of a single administration. For example, this might be a set of routers under the control of a single organization, some of them operating a corporate network, some others a branch office network, and the rest the data center network.
A given autonomous system can contain multiple routing domains, or a set of routing domains can be coordinated without being an Internet-participating autonomous system.
|
https://en.wikipedia.org/wiki/Mathematics%20and%20fiber%20arts
|
Ideas from mathematics have been used as inspiration for fiber arts including quilt making, knitting, cross-stitch, crochet, embroidery and weaving. A wide range of mathematical concepts have been used as inspiration including topology, graph theory, number theory and algebra. Some techniques such as counted-thread embroidery are naturally geometrical; other kinds of textile provide a ready means for the colorful physical expression of mathematical concepts.
Quilting
The IEEE Spectrum has organized a number of competitions on quilt block design, and several books have been published on the subject. Notable quiltmakers include Diana Venters and Elaine Ellison, who have written a book on the subject Mathematical Quilts: No Sewing Required. Examples of mathematical ideas used in the book as the basis of a quilt include the golden rectangle, conic sections, Leonardo da Vinci's Claw, the Koch curve, the Clifford torus, San Gaku, Mascheroni's cardioid, Pythagorean triples, spidrons, and the six trigonometric functions.
Knitting and crochet
Knitted mathematical objects include the Platonic solids, Klein bottles and Boy's surface.
The Lorenz manifold and the hyperbolic plane have been crafted using crochet. Knitted and crocheted tori have also been constructed depicting toroidal embeddings of the complete graph K7 and of the Heawood graph. The crocheting of hyperbolic planes has been popularized by the Institute For Figuring; a book by Daina Taimina on the subject, Crocheting Adventures with Hyperbolic Planes, won the 2009 Bookseller/Diagram Prize for Oddest Title of the Year.
Embroidery
Embroidery techniques such as counted-thread embroidery including cross-stitch and some canvas work methods such as Bargello make use of the natural pixels of the weave, lending themselves to geometric designs.
Weaving
Ada Dietz (1882 – 1950) was an American weaver best known for her 1949 monograph Algebraic Expressions in Handwoven Textiles, which defines weaving patterns based on
|