https://en.wikipedia.org/wiki/Formula
|
In science, a formula is a concise way of expressing information symbolically, as in a mathematical formula or a chemical formula. The informal use of the term formula in science refers to the general construct of a relationship between given quantities.
The plural of formula can be either formulas (from the most common English plural noun form) or, under the influence of scientific Latin, formulae (from the original Latin).
In mathematics
In mathematics, a formula generally refers to an equation relating one mathematical expression to another, with the most important ones being mathematical theorems. For example, determining the volume of a sphere requires a significant amount of integral calculus or its geometrical analogue, the method of exhaustion. However, having done this once in terms of some parameter (the radius for example), mathematicians have produced a formula to describe the volume of a sphere in terms of its radius:
V = (4/3) π r³
Having obtained this result, the volume of any sphere can be computed as long as its radius is known. Here, notice that the volume V and the radius r are expressed as single letters instead of words or phrases. This convention, while less important in a relatively simple formula, means that mathematicians can more quickly manipulate formulas which are larger and more complex. Mathematical formulas are often algebraic, analytical or in closed form.
In a general context, formulas are often a manifestation of a mathematical model of real-world phenomena, and as such can be used to provide solutions (or approximate solutions) to real-world problems, with some being more general than others. For example, the formula
F = ma
is an expression of Newton's second law, and is applicable to a wide range of physical situations. Other formulas, such as the use of the equation of a sine curve to model the movement of the tides in a bay, may be created to solve a particular problem. In all cases, however, formulas form the basis for calculations.
Expr
|
https://en.wikipedia.org/wiki/EEG%20analysis
|
EEG analysis applies mathematical signal analysis methods and computer technology to extract information from electroencephalography (EEG) signals. The goals of EEG analysis are to help researchers gain a better understanding of the brain, to assist physicians in diagnosis and treatment choices, and to advance brain-computer interface (BCI) technology. There are many ways to roughly categorize EEG analysis methods. If a mathematical model is fitted to the sampled EEG signals, the method is categorized as parametric; otherwise, it is non-parametric. Traditionally, most EEG analysis methods fall into four categories: time domain, frequency domain, time-frequency domain, and nonlinear methods. More recent methods include deep neural networks (DNNs).
Methods
Frequency domain methods
Frequency domain analysis, also known as spectral analysis, is the most conventional yet one of the most powerful and standard methods for EEG analysis. It gives insight into information contained in the frequency domain of EEG waveforms by adopting statistical and Fourier Transform methods. Among all the spectral methods, power spectral analysis is the most commonly used, since the power spectrum reflects the 'frequency content' of the signal or the distribution of signal power over frequency.
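As a minimal illustration of power spectral analysis, the following Python sketch estimates the power spectral density of a synthetic EEG-like signal with Welch's method; the 10 Hz component (standing in for alpha-band activity), the sampling rate, and the segment length are illustrative choices rather than values from the text.

import numpy as np
from scipy.signal import welch

fs = 250                                   # sampling frequency in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)               # 10 seconds of data
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)   # Welch power spectral density
print(freqs[np.argmax(psd)])               # ~10 Hz, the dominant frequency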
Time domain methods
There are two important methods for time domain EEG analysis: Linear Prediction and Component Analysis. Generally, Linear Prediction gives an estimated value equal to a linear combination of past output values together with present and past input values. Component Analysis is an unsupervised method in which the data set is mapped onto a feature set. Notably, the parameters in time domain methods are entirely based on time, but they can also be extracted from statistical moments of the power spectrum. As a result, time domain methods build a bridge between physical time interpretation and conventional spectral analysis. Besides, time domain met
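A minimal sketch of the linear prediction idea, assuming a pure autoregressive form (past outputs only) fitted by least squares; the model order and the test signal are arbitrary choices, not values from the text.

import numpy as np

def fit_linear_predictor(x, p=4):
    # Build a regression matrix of the p previous samples and solve for coefficients.
    rows = [x[n - p:n][::-1] for n in range(p, len(x))]
    A = np.array(rows)
    b = x[p:]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

x = np.sin(0.2 * np.arange(200)) + 0.05 * np.random.randn(200)
a = fit_linear_predictor(x, p=4)
pred = x[-5:-1][::-1] @ a            # one-step prediction of the last sample
print(pred, x[-1])                   # predicted vs. actual value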
|
https://en.wikipedia.org/wiki/Model-driven%20security
|
Model-driven security (MDS) means applying model-driven approaches (and especially the concepts behind model-driven software development) to security.
Development of the concept
The general concept of Model-driven security in its earliest forms has been around since the late 1990s (mostly in university research), and was first commercialized around 2002. There is also a body of later scientific research in this area, which continues to this day.
A more specific definition of Model-driven security specifically applies model-driven approaches to automatically generate technical security implementations from security requirements models. In particular, "Model driven security (MDS) is the tool supported process of modelling security requirements at a high level of abstraction, and using other information sources available about the system (produced by other stakeholders). These inputs, which are expressed in Domain Specific Languages (DSL), are then transformed into enforceable security rules with as little human intervention as possible. MDS explicitly also includes the run-time security management (e.g. entitlements/authorisations), i.e. run-time enforcement of the policy on the protected IT systems, dynamic policy updates and the monitoring of policy violations."
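As a toy illustration of the model-to-enforcement transformation described above (not any particular MDS tool), the following Python sketch turns a small hypothetical role/resource model into concrete allow-rules with no manual rule writing; the model format and the generated rule syntax are invented for this example.

# Hypothetical high-level security requirements model.
model = {
    "roles": {
        "clinician": {"patient_record": ["read", "update"]},
        "auditor":   {"patient_record": ["read"], "audit_log": ["read"]},
    }
}

def generate_rules(model):
    """Transform the abstract model into concrete, enforceable allow-rules."""
    rules = []
    for role, resources in model["roles"].items():
        for resource, actions in resources.items():
            for action in actions:
                rules.append(f"ALLOW {role} {action.upper()} ON {resource}")
    return rules

for rule in generate_rules(model):
    print(rule)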
Model-driven security is also well-suited for automated auditing, reporting, documenting, and analysis (e.g. for compliance and accreditation), because the relationships between models and technical security implementations are traceably defined through the model-transformations.
Opinions of industry analysts
Several industry analyst sources state that MDS "will have a significant impact as information security infrastructure is required to become increasingly real-time, automated and adaptive to changes in the organisation and its environment". Many information technology architectures today are built to support adaptive changes (e.g. Service Oriented Architectures (SOA) and so-called Platform-as-a-
|
https://en.wikipedia.org/wiki/Security%20domain
|
A security domain is the determining factor in the classification of an enclave of servers/computers. Networks in different security domains are kept separate from one another; for example, NIPRNet, SIPRNet, JWICS, and NSANet are all kept separate.
A security domain is considered to be an application or collection of applications that all trust a common security token for authentication, authorization or session management. Generally speaking, a security token is issued to a user after the user has actively authenticated with a user ID and password to the security domain.
Examples of a security domain include:
All the web applications that trust a session cookie issued by a Web Access Management product
All the Windows applications and services that trust a Kerberos ticket issued by Active Directory
In an identity federation that spans two different organizations sharing a business partner, customer, or business process outsourcing relationship, a partner domain is another security domain with which users and applications (from the local security domain) interact.
Computer networking
|
https://en.wikipedia.org/wiki/Masreliez%27s%20theorem
|
Masreliez's theorem describes a recursive algorithm within the framework of the extended Kalman filter, named after the Swedish-American physicist John Masreliez, its author. The algorithm estimates the state of a dynamic system with the help of measurements that are often incomplete and marred by distortion.
Masreliez's theorem produces estimates that are quite good approximations to the exact conditional mean in non-Gaussian additive outlier (AO) situations. Some evidence for this comes from Monte Carlo simulations.
The key approximation property used to construct these filters is that the state prediction density is approximately Gaussian. Masreliez discovered in 1975 that this approximation yields an intuitively appealing non-Gaussian filter recursion with data-dependent covariance (unlike the Gaussian case); the derivation also provides one of the nicest ways of establishing the standard Kalman filter recursions. Some theoretical justification for use of the Masreliez approximation is provided by the "continuity of state prediction densities" theorem in Martin (1979).
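A toy Python sketch in the spirit of the approximate conditional-mean recursion described above, assuming a scalar random-walk state, observations with occasional additive outliers, and a Huber-type score function standing in for the exact conditional-mean score; the exact Masreliez recursion also carries a data-dependent covariance correction that this simplified sketch omits.

import numpy as np

def huber_score(r, c=1.5):
    """Clipped (Huber) influence function applied to the normalized innovation."""
    return np.clip(r, -c, c)

def robust_filter(ys, q=0.01, r=1.0, c=1.5):
    x, p = 0.0, 1.0                    # state estimate and its variance
    estimates = []
    for y in ys:
        p = p + q                      # predict step (random-walk state)
        s = p + r                      # innovation variance under nominal noise
        resid = (y - x) / np.sqrt(s)   # normalized innovation
        x = x + p / np.sqrt(s) * huber_score(resid, c)   # robustified update
        p = p - p**2 / s               # simplified (data-independent) covariance update
        estimates.append(x)
    return np.array(estimates)

# Example: a constant state observed with Gaussian noise plus a few additive outliers.
rng = np.random.default_rng(0)
ys = 5.0 + rng.normal(0, 1.0, 200)
ys[::25] += 20.0                       # additive outliers (AO)
print(robust_filter(ys)[-1])           # close to 5 despite the outliers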
See also
Control engineering
Hidden Markov model
Bayes' theorem
Robust optimization
Probability theory
Nyquist–Shannon sampling theorem
|
https://en.wikipedia.org/wiki/Quasistatic%20approximation
|
The term quasistatic approximation is used in different domains with different meanings. In its most common sense, a quasistatic approximation refers to equations that keep a static form (do not involve time derivatives) even if some quantities are allowed to vary slowly with time. In electromagnetism it refers to mathematical models that can be used to describe devices that do not produce significant amounts of electromagnetic waves, for instance the capacitor and the coil in electrical networks.
Overview
The quasistatic approximation can be understood through the idea that the sources in the problem change sufficiently slowly that the system can be taken to be in equilibrium at all times. This approximation can then be applied to areas such as classical electromagnetism, fluid mechanics, magnetohydrodynamics, thermodynamics, and more generally systems described by hyperbolic partial differential equations involving both spatial and time derivatives. In simple cases, the quasistatic approximation is allowed when the typical spatial scale divided by the typical temporal scale is much smaller than the characteristic velocity with which information is propagated. The problem gets more complicated when several length and time scales are involved. In the strict sense of the term, the quasistatic case corresponds to a situation where all time derivatives can be neglected. However, some equations can be considered as quasistatic while others are not, leaving the system as a whole still dynamic. There is no general consensus in such cases.
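In symbols, with L the typical spatial scale, T the typical temporal scale, and c the characteristic velocity with which information is propagated, the simple criterion above reads

\frac{L}{T} \ll c .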
Fluid dynamics
In fluid dynamics, only quasi-hydrostatics (where no time derivative term is present) is considered a quasistatic approximation. Flows, as well as acoustic wave propagation, are usually considered dynamic.
Thermodynamics
In thermodynamics, a distinction between quasistatic regimes and dynamic ones is usually made in terms of equilibrium thermodynamics versus non-equilibrium thermodynamics. As in electromagnetism
|
https://en.wikipedia.org/wiki/Frame%20%28linear%20algebra%29
|
In linear algebra, a frame of an inner product space is a generalization of a basis of a vector space to sets that may be linearly dependent. In the terminology of signal processing, a frame provides a redundant, stable way of representing a signal. Frames are used in error detection and correction and the design and analysis of filter banks and more generally in applied mathematics, computer science, and engineering.
Definition and motivation
Motivating example: computing a basis from a linearly dependent set
Suppose we have a set of vectors {e_k} in the vector space V and we want to express an arbitrary element v ∈ V as a linear combination of the vectors e_k, that is, we want to find coefficients c_k such that
v = Σ_k c_k e_k.
If the set {e_k} does not span V, then such coefficients do not exist for every such v. If {e_k} spans V and also is linearly independent, this set forms a basis of V, and the coefficients c_k are uniquely determined by v. If, however, {e_k} spans V but is not linearly independent, the question of how to determine the coefficients becomes less apparent, in particular if V is of infinite dimension.
Given that {e_k} spans V and is linearly dependent, one strategy is to remove vectors from the set until it becomes linearly independent and forms a basis. There are some problems with this plan:
Removing arbitrary vectors from the set may cause it to be unable to span V before it becomes linearly independent.
Even if it is possible to devise a specific way to remove vectors from the set until it becomes a basis, this approach may become unfeasible in practice if the set is large or infinite.
In some applications, it may be an advantage to use more vectors than necessary to represent v. This means that we want to find the coefficients c_k without removing elements in {e_k}. The coefficients c_k will no longer be uniquely determined by v. Therefore, the vector v can be represented as a linear combination of {e_k} in more than one way.
Formal definition
Let V be an inner product space and {e_k} be a set of vectors in V. Th
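For reference, the standard frame condition that this definition leads to requires constants 0 < A ≤ B < ∞ (the frame bounds) such that, for every v in V,

A \, \lVert v \rVert^2 \;\le\; \sum_k \lvert \langle v, e_k \rangle \rvert^2 \;\le\; B \, \lVert v \rVert^2 .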
|
https://en.wikipedia.org/wiki/Mathematics%20and%20art
|
Mathematics and art are related in a variety of ways. Mathematics has itself been described as an art motivated by beauty. Mathematics can be discerned in arts such as music, dance, painting, architecture, sculpture, and textiles. This article focuses, however, on mathematics in the visual arts.
Mathematics and art have a long historical relationship. Artists have used mathematics since the 4th century BC when the Greek sculptor Polykleitos wrote his Canon, prescribing proportions conjectured to have been based on the ratio 1:√2 for the ideal male nude. Persistent popular claims have been made for the use of the golden ratio in ancient art and architecture, without reliable evidence. In the Italian Renaissance, Luca Pacioli wrote the influential treatise De divina proportione (1509), illustrated with woodcuts by Leonardo da Vinci, on the use of the golden ratio in art. Another Italian painter, Piero della Francesca, developed Euclid's ideas on perspective in treatises such as De Prospectiva Pingendi, and in his paintings. The engraver Albrecht Dürer made many references to mathematics in his work Melencolia I. In modern times, the graphic artist M. C. Escher made intensive use of tessellation and hyperbolic geometry, with the help of the mathematician H. S. M. Coxeter, while the De Stijl movement led by Theo van Doesburg and Piet Mondrian explicitly embraced geometrical forms. Mathematics has inspired textile arts such as quilting, knitting, cross-stitch, crochet, embroidery, weaving, Turkish and other carpet-making, as well as kilim. In Islamic art, symmetries are evident in forms as varied as Persian girih and Moroccan zellige tilework, Mughal jali pierced stone screens, and widespread muqarnas vaulting.
Mathematics has directly influenced art with conceptual tools such as linear perspective, the analysis of symmetry, and mathematical objects such as polyhedra and the Möbius strip. Magnus Wenninger creates colourful stellated polyhedra, originally as models for te
|
https://en.wikipedia.org/wiki/Plant%20litter
|
Plant litter (also leaf litter, tree litter, soil litter, litterfall or duff) is dead plant material (such as leaves, bark, needles, twigs, and cladodes) that has fallen to the ground. This detritus or dead organic material and its constituent nutrients are added to the top layer of soil, commonly known as the litter layer or O horizon ("O" for "organic"). Litter is an important factor in ecosystem dynamics, as it is indicative of ecological productivity and may be useful in predicting regional nutrient cycling and soil fertility.
Characteristics and variability
Litterfall is characterized as fresh, undecomposed, and easily recognizable (by species and type) plant debris. This can include leaves, cones, needles, twigs, bark, seeds/nuts, logs, and reproductive organs (e.g. the stamen of flowering plants). Items larger than 2 cm in diameter are referred to as coarse litter, while anything smaller is referred to as fine litter or litter. The type of litterfall is most directly affected by ecosystem type.
For example, leaf tissues account for about 70 percent of litterfall in forests, but woody litter tends to increase with forest age. In grasslands, there is very little aboveground perennial tissue so the annual litterfall is very low and quite nearly equal to the net primary production.
In soil science, soil litter is classified in three layers, which form on the surface of the O horizon. These are the L (fresh, undecomposed litter), F (fragmented, partly decomposed material), and H (well-decomposed humus) layers.
The litter layer is quite variable in its thickness, decomposition rate and nutrient content and is affected in part by seasonality, plant species, climate, soil fertility, elevation, and latitude. The most extreme variability of litterfall is seen as a function of seasonality; each individual species of plant has seasonal losses of certain parts of its body, which can be determined by the collection and classification of plant litterfall throughout the year, and in turn affects the thickness of the litter layer. In tropical environments,
|
https://en.wikipedia.org/wiki/Graphics%20processing%20unit
|
A graphics processing unit (GPU) is a specialized electronic circuit initially designed to accelerate computer graphics and image processing (either on a video card or embedded on motherboards, mobile phones, personal computers, workstations, and game consoles). After their initial design, GPUs were found to be useful for non-graphic calculations involving embarrassingly parallel problems due to their parallel structure. Other non-graphical uses include the training of neural networks and cryptocurrency mining.
History
1970s
Arcade system boards have used specialized graphics circuits since the 1970s. In early video game hardware, RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor.
A specialized barrel shifter circuit helped the CPU animate the framebuffer graphics for various 1970s arcade video games from Midway and Taito, such as Gun Fight (1975), Sea Wolf (1976), and Space Invaders (1978). The Namco Galaxian arcade system in 1979 used specialized graphics hardware that supported RGB color, multi-colored sprites, and tilemap backgrounds. The Galaxian hardware was widely used during the golden age of arcade video games, by game companies such as Namco, Centuri, Gremlin, Irem, Konami, Midway, Nichibutsu, Sega, and Taito.
The Atari 2600 in 1977 used a video shifter called the Television Interface Adaptor. Atari 8-bit computers (1979) had ANTIC, a video processor which interpreted instructions describing a "display list"—the way the scan lines map to specific bitmapped or character modes and where the memory is stored (so there did not need to be a contiguous frame buffer). 6502 machine code subroutines could be triggered on scan lines by setting a bit on a display list instruction. ANTIC also supported smooth vertical and horizontal scrolling independent of the CPU.
1980s
The NEC µPD7220 was the first implementation of a personal computer graphics display processor as a single large
|
https://en.wikipedia.org/wiki/Logic%20block
|
In computing, a logic block or configurable logic block (CLB) is a fundamental building block of field-programmable gate array (FPGA) technology. Logic blocks can be configured by the engineer to provide reconfigurable logic gates.
Logic blocks are the most common FPGA architecture, and are usually laid out within a logic block array. Logic blocks require I/O pads (to interface with external signals), and routing channels (to interconnect logic blocks).
Programmable logic blocks were invented by David W. Page and LuVerne R. Peterson, and defined within their 1985 patents.
Applications
An application circuit must be mapped into an FPGA with adequate resources. While the number of logic blocks and I/Os required is easily determined from the design, the number of routing tracks needed may vary considerably even among designs with the same amount of logic.
For example, a crossbar switch requires much more routing than a systolic array with the same gate count. Since unused routing tracks increase the cost (and decrease the performance) of the part without providing any benefit, FPGA manufacturers try to provide just enough tracks so that most designs that will fit in terms of lookup tables (LUTs) and I/Os can be routed. This is determined by estimates such as those derived from Rent's rule or by experiments with existing designs.
FPGAs are also widely used for systems validation including pre-silicon validation, post-silicon validation, and firmware development. This allows chip companies to validate their design before the chip is produced in the factory, reducing the time-to-market.
Architecture
In general, a logic block consists of a few logic cells (each cell is called an adaptive logic module (ALM), a logic element (LE), slice, etc.). A typical cell consists of a 4-input LUT, a full adder (FA), and a D-type flip-flop (DFF). The 4-input LUT is often built from two 3-input LUTs. In normal mode those are combined into a 4-input LUT th
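A toy Python model of the logic cell just described: a 16-entry truth table standing in for the 4-input LUT, a full adder, and a D flip-flop. The class structure, signal names, and wiring are illustrative assumptions, not any vendor's architecture.

class LogicCell:
    def __init__(self, lut_table):
        assert len(lut_table) == 16        # one output bit per 4-bit input combination
        self.lut = lut_table
        self.dff = 0                       # registered output of the D flip-flop

    def lut_out(self, a, b, c, d):
        """Combinational 4-input LUT lookup."""
        return self.lut[(a << 3) | (b << 2) | (c << 1) | d]

    def full_adder(self, x, y, cin):
        """The cell's full adder, available for arithmetic modes."""
        s = x ^ y ^ cin
        cout = (x & y) | (cin & (x ^ y))
        return s, cout

    def clock(self, a, b, c, d):
        """On a clock edge, register the LUT output in the flip-flop."""
        self.dff = self.lut_out(a, b, c, d)
        return self.dff

# Example: configure the LUT as a 4-input AND gate.
cell = LogicCell([1 if i == 0b1111 else 0 for i in range(16)])
print(cell.clock(1, 1, 1, 1))              # -> 1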
|
https://en.wikipedia.org/wiki/Z-value%20%28temperature%29
|
"F0" is defined as the number of equivalent minutes of steam sterilization at temperature 121.1 °C (250 °F) delivered to a container or unit of product calculated using a z-value of 10 °C. The term F-value or "FTref/z" is defined as the equivalent number of minutes to a certain reference temperature (Tref) for a certain control microorganism with an established Z-value.
Z-value is a term used in microbial thermal death time calculations. It is the number of degrees the temperature has to be increased to achieve a tenfold (i.e. 1 log10) reduction in the D-value. The D-value of an organism is the time required in a given medium, at a given temperature, for a ten-fold reduction in the number of organisms. It is useful when examining the effectiveness of thermal inactivations under different conditions, for example in food cooking and preservation. The z-value is a measure of the change of the D-value with varying temperature, and is a simplified version of the Arrhenius equation, equivalent to z = 2.303·R·T·Tref/E (with R the gas constant, T and Tref the absolute temperatures, and E the activation energy).
The z-value of an organism in a particular medium is the temperature change required for the D-value to change by a factor of ten, or put another way, the temperature required for the thermal destruction curve to move one log cycle. It is the reciprocal of the slope resulting from the plot of the logarithm of the D-value versus the temperature at which the D-value was obtained. While the D-value gives the time needed at a certain temperature to kill 90% of the organisms, the z-value relates the resistance of an organism to differing temperatures. The z-value allows calculation of the equivalency of two thermal processes, if the D-value and the z-value are known.
Example: if it takes an increase of 10 °C (18 °F) to move the curve one log, then our z-value is 10. Given a D-value of 4.5 minutes at 150 °C, the D-value can be calculated for 160 °C by reducing the time by 1 log. The new D-value for 160 °C given the z-value is 0.45 minutes. This means
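The shift described in this example can be computed directly; the small Python helper below reproduces the 4.5 minute to 0.45 minute step using the relation that each increase of one z-value lowers the D-value tenfold (the function name and interface are illustrative).

def d_value_at(d_ref, t_ref, t_new, z):
    """D-value at t_new, given D-value d_ref at t_ref and z-value z (same temperature units)."""
    return d_ref * 10 ** ((t_ref - t_new) / z)

print(d_value_at(4.5, 150, 160, 10))   # 0.45 minutes, matching the example above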
|
https://en.wikipedia.org/wiki/Rhythm%20of%20Structure
|
Rhythm of Structure is a multimedia interdisciplinary project founded in 2003. It features a series of exhibitions, performances, and academic projects that explore the interconnecting structures and processes of mathematics, art, and language, as a way to advance a movement of mathematical expression across the arts and across creative collaborative communities, celebrating the rhythm and patterns both of ideas of the mind and of the physical reality of nature.
Introduction
Rhythm of Structure, as an expanding series of art exhibitions, performances, videos/films and publications created and curated by multimedia mathematical artist and writer John Sims, explores and celebrates the intersecting structures of mathematics, art, community, and nature. Sims also created Recoloration Proclamation featuring the installation, The Proper Way to Hang a Confederate Flag (2004).
From his catalog essay from the Rhythm of Structure: Mathematics, Art and Poetic exhibition, Sims sets the curatorial theme where he writes: "Mathematics, as a parameter of human consciousness in an indispensable conceptual technology, essential is seeing beyond the retinal and knowing beyond the intuitive. The language and process of mathematics, as elements of, foundation for art, inform an analytic expressive condition that inspires a visual reckoning for a convergence: from the illustrative to the metaphysical to the poetic. And in the dialectic of visual art call and text performative response, there is an inter-dimensional conversation where the twisting structures of language, vision and human ways give birth to the spiritual lattice of a social geometry, a community constructivism -- a place of connections, where emotional calculations meet spirited abstraction."
The project first premiered at the Fire Patrol No. 5 Gallery in 2003, with the show Rhythm of Structure: MathArt in Harlem. This interdisciplinary project has featured numerous exhibitions around the country collaborating with many notable artists, wr
|
https://en.wikipedia.org/wiki/Networked%20Robotics%20Corporation
|
Networked Robotics Corporation is an American scientific automation company that designs and manufactures electronic devices that monitor scientific instruments, scientific processes, and environmental conditions via the internet.
Networked Robotics is an Illinois company, now based largely out of Pleasanton, California. The company is focused on the collection and integration of scientific data from FDA-regulated sources such as freezers, incubators, liquid nitrogen cryopreservation freezers, rooms, shakers, clean rooms, and scales. Monitored parameters include temperature, gas concentrations, liquid levels, voltages, pressure, rotation, humidity, weight, and many others.
Scientific instruments speak different data languages. The company integrates data collection by using network hardware that speaks the distinct digital and electronic languages of scientific instruments and sensors from different vendors and converts those individual languages to a common one on the network. Networked Robotics produces its own line of digital sensors for scientific data sources where digital outputs are not available.
The company can be considered to be an Internet of Things provider.
Networked Robotics technology is used in the biotechnology industry (including stem cell automation), the medical industry, academia, and the food industry, in efforts to enhance U.S. Food and Drug Administration (FDA) regulatory compliance, quality, and loss prevention for their operations.
The company sells a series of proprietary hardware products for network data collection. The NTMS4 networking hardware is their flagship product which serves as a data collection and "automation hub". The company's data collection and monitoring software, the Tempurity™ System, is free to customers. The company also provides regulatory services for companies that are performing regulated, especially FDA-regulated scientific research.
History
Networked Robotics was founded in 2004 at the Northwestern
|
https://en.wikipedia.org/wiki/Monokub
|
Monokub () is a computer motherboard based on the Russian Elbrus 2000 computer architecture, which forms the basis for the Monoblock PC office workstation.
The motherboard has a mini-ITX form factor and contains a single Elbrus-2C+ microprocessor with a clock frequency of 500 MHz. The memory controller provides a dual-channel memory mode. The board has two DDR2-800 memory slots, which enables up to 16 GB of RAM (using ECC modules). It also supports expansion boards using a PCI Express x16 bus. In addition there is an on-board Gigabit Ethernet interface, 4 USB 2.0 ports, an RS-232 interface, a DVI connector, and audio input/output ports.
|
https://en.wikipedia.org/wiki/Quipu
|
Quipu (also spelled khipu) are recording devices fashioned from strings historically used by a number of cultures in the region of Andean South America.
A quipu usually consisted of cotton or camelid fiber strings. The Inca people used them for collecting data and keeping records, monitoring tax obligations, collecting census records, calendrical information, and for military organization. The cords stored numeric and other values encoded as knots, often in a base ten positional system. A quipu could have only a few or thousands of cords. The configuration of the quipus has been "compared to string mops." Archaeological evidence has also shown the use of finely carved wood as a supplemental, and perhaps sturdier, base to which the color-coded cords would be attached. A relatively small number have survived.
Objects that can be identified unambiguously as quipus first appear in the archaeological record in the first millennium AD (though debated quipus are much earlier). They subsequently played a key part in the administration of the Kingdom of Cusco and later the Inca Empire, flourishing across the Andes from c. 1100 to 1532 AD. As the region was subsumed under the Spanish Empire, quipus were mostly replaced by European writing and numeral systems, and most quipu were identified as idolatrous and destroyed, but some Spaniards promoted the adaptation of the quipu recording system to the needs of the colonial administration, and some priests advocated the use of quipus for ecclesiastical purposes. In several modern villages, quipus have continued to be important items for the local community. It is unclear how many intact quipus still exist and where, as many have been stored away in mausoleums.
Knotted strings unrelated to quipu have been used to record information by the ancient Chinese, Tibetans and Japanese.
Quipu is the Spanish spelling and the most common spelling in English. Khipu (pronounced , plural: khipukuna) is the word for "knot" in Cusco Quechua. I
|
https://en.wikipedia.org/wiki/The%20Chemical%20Basis%20of%20Morphogenesis
|
"The Chemical Basis of Morphogenesis" is an article that the English mathematician Alan Turing wrote in 1952. It describes how patterns in nature, such as stripes and spirals, can arise naturally from a homogeneous, uniform state. The theory, which can be called a reaction–diffusion theory of morphogenesis, has become a basic model in theoretical biology. Such patterns have come to be known as Turing patterns. For example, it has been postulated that the protein VEGFC can form Turing patterns to govern the formation of lymphatic vessels in the zebrafish embryo.
Reaction–diffusion systems
Reaction–diffusion systems have attracted much interest as a prototype model for pattern formation. Patterns such as fronts, spirals, targets, hexagons, stripes and dissipative solitons are found in various types of reaction-diffusion systems in spite of large discrepancies e.g. in the local reaction terms. Such patterns have been dubbed "Turing patterns".
Reaction–diffusion processes form one class of explanation for the embryonic development of animal coats and skin pigmentation. Another reason for the interest in reaction-diffusion systems is that although they represent nonlinear partial differential equations, there are often possibilities for an analytical treatment.
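A generic two-component reaction–diffusion system of the kind discussed here, with u and v the interacting concentrations, f and g the local reaction terms, and D_u, D_v the diffusion coefficients, takes the form

\frac{\partial u}{\partial t} = f(u,v) + D_u \nabla^2 u, \qquad \frac{\partial v}{\partial t} = g(u,v) + D_v \nabla^2 v .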
See also
Evolutionary developmental biology
Turing pattern
Symmetry breaking
|
https://en.wikipedia.org/wiki/Computer-on-module
|
A computer-on-module (COM) is a type of single-board computer (SBC), a subtype of an embedded computer system. An extension of the concept of system on chip (SoC) and system in package (SiP), COM lies between a full-up computer and a microcontroller in nature. It is very similar to a system on module (SOM).
Design
COMs are complete embedded computers built on a single circuit board. The design is centered on a microprocessor with RAM, input/output controllers and all other features needed to be a functional computer on the one board. However, unlike a single-board computer, the COM usually lacks the standard connectors for any input/output peripherals to be attached directly to the board.
The module usually needs to be mounted on a carrier board (or "baseboard") which breaks the bus out to standard peripheral connectors. Some COMs also include peripheral connectors. Some can be used without a carrier.
A COM solution offers a dense package computer system for use in small or specialized applications requiring low power consumption or small physical size as is needed in embedded systems. As a COM is very compact and highly integrated, even complex CPUs, including multi-core technology, can be realized on a COM.
Some devices also incorporate field-programmable gate array (FPGA) components. FPGA-based functions can be added as IP cores to the COM itself or to the carrier card. Using FPGA IP cores adds to the modularity of a COM concept because I/O functions can be adapted to special needs without extensive rewiring on the printed circuit board.
A "computer-on-module" is also called a "system-on-module" (SOM).
History
The terms "Computer-on-Module" and "COM" were coined by VDC Research Group, Inc. (formerly Venture Development Corporation) to describe this class of embedded computer boards.
Dr. Gordon Kruberg, founder and CEO of Gumstix, is credited with creating the first COM, predating the next recognizable COM entries by almost 18 months.
Gumstix ARM Linux
|
https://en.wikipedia.org/wiki/Open%20security
|
Open security is the use of open source philosophies and methodologies to approach computer security and other information security challenges. Traditional application security is based on the premise that any application or service (whether it is malware or desirable) relies on security through obscurity.
Open source approaches have created technology such as Linux (and to some extent, the Android operating system). Additionally, open source approaches applied to documents have inspired wikis and their largest example, Wikipedia. Open security suggests that security breaches and vulnerabilities can be better prevented or ameliorated when users facing these problems collaborate using open source philosophies.
This approach requires that users be legally allowed to collaborate, so relevant software would need to be released under a license that is widely accepted to be open source; examples include the Massachusetts Institute of Technology (MIT) license, the Apache 2.0 license, the GNU Lesser General Public License (LGPL), and the GNU General Public License (GPL). Relevant documents would need to be under a generally accepted "open content" license; these include Creative Commons Attribution (CC-BY) and Attribution Share Alike (CC-BY-SA) licenses, but not Creative Commons "non-commercial" licenses or "no-derivative" licenses.
On the developer side, legitimate software and service providers can have independent verification and testing of their source code. On the information technology side, companies can aggregate common threats, patterns, and security solutions to a variety of security issues.
See also
Kerckhoffs's Principle
OASIS (organization) (Organization for the Advancement of Structured Information Standards)
OWASP (Open Web Application Security Project)
Open government
Homeland Open Security Technology
Open source
Open source software
Open-source hardware
|
https://en.wikipedia.org/wiki/Imbibition
|
Imbibition is a special type of diffusion that takes place when a liquid is absorbed by solids (colloids), causing an increase in volume. The water moves along a water potential gradient, and a gradient between the absorbent and the liquid is essential for imbibition. For a substance to imbibe a liquid, there must first be some attraction between them. Imbibition occurs when a wetting fluid displaces a non-wetting fluid, the opposite of drainage, in which a non-wetting phase displaces the wetting fluid; the two processes are governed by different mechanisms. Seeds and other such dry materials contain almost no water and hence absorb water easily.
Examples
One example of imbibition in nature is the absorption of water by hydrophilic colloids. Matrix potential contributes significantly to water in such substances. Dry seeds germinate in part by imbibition. Imbibition can also control circadian rhythms in Arabidopsis thaliana and (probably) other plants. The Amott test employs imbibition.
Proteins have high imbibition capacities, so proteinaceous pea seeds swell more than starchy wheat seeds.
Imbibition of water increases imbibant volume, which results in imbibitional pressure (IP). The magnitude of such pressure can be demonstrated by the splitting of rocks by inserting dry wooden stalks in their crevices and soaking them in water, a technique used by early Egyptians to cleave stone blocks.
Skin grafts (split thickness and full thickness) receive oxygenation and nutrition via imbibition, maintaining cellular viability until the processes of inosculation and revascularisation have re-established a new blood supply within these tissues.
Germination
Examples include the absorption of water by seeds and dry wood. If there is no pre
|
https://en.wikipedia.org/wiki/Carrier%20frequency%20offset
|
Carrier frequency offset (CFO) is one of many non-ideal conditions that affect baseband receiver design. In designing a baseband receiver, one must account not only for the degradation caused by a non-ideal channel and noise, but also for the impairments of the RF and analog parts. These non-idealities include sampling clock offset, IQ imbalance, power amplifier nonlinearity, phase noise, and carrier frequency offset.
Carrier frequency offset often occurs when the local oscillator signal for down-conversion in the receiver does not synchronize with the carrier signal contained in the received signal. This phenomenon can be attributed to two important factors: frequency mismatch in the transmitter and the receiver oscillators; and the Doppler effect as the transmitter or the receiver is moving.
When this occurs, the received signal will be shifted in frequency. For an OFDM system, the orthogonality among sub carriers is maintained only if the receiver uses a local oscillation signal that is synchronous with the carrier signal contained in the received signal. Otherwise, mismatch in carrier frequency can result in inter-carrier interference (ICI). The oscillators in the transmitter and the receiver can never be oscillating at identical frequency. Hence, carrier frequency offset always exists even if there is no Doppler effect.
In a standard-compliant communication system, such as an IEEE 802.11 WLAN, the oscillator precision tolerance is specified to be less than ±20 ppm, so that the CFO is in the range from −40 ppm to +40 ppm.
Example
If the TX oscillator runs at a frequency that is 20 ppm above the nominal frequency and if the RX oscillator is running at 20 ppm below, then the received baseband signal will have a CFO of 40 ppm. With a carrier frequency of 5.2 GHz in this standard, the CFO is up to ±208 kHz. In addition, if the transmitter or the receiver is moving, the Doppler effect adds some hundreds of hertz in frequency spreading.
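The arithmetic of this example in a small Python helper (the function name and interface are illustrative): each oscillator contributes its ppm error, and the worst-case offsets add.

def cfo_hz(carrier_hz, tx_ppm, rx_ppm):
    """Carrier frequency offset in Hz for given oscillator errors in ppm."""
    return carrier_hz * (tx_ppm + rx_ppm) * 1e-6

print(cfo_hz(5.2e9, 20, 20))   # 208000.0 Hz, i.e. 208 kHz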
Compared to th
|
https://en.wikipedia.org/wiki/Index%20of%20fractal-related%20articles
|
This is a list of fractal topics, by Wikipedia page. See also the list of dynamical systems and differential equations topics.
1/f noise
Apollonian gasket
Attractor
Box-counting dimension
Cantor distribution
Cantor dust
Cantor function
Cantor set
Cantor space
Chaos theory
Coastline
Constructal theory
Dimension
Dimension theory
Dragon curve
Fatou set
Fractal
Fractal antenna
Fractal art
Fractal compression
Fractal flame
Fractal landscape
Fractal transform
Fractint
Graftal
Iterated function system
Horseshoe map
How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension
Julia set
Koch snowflake
L-system
Lebesgue covering dimension
Lévy C curve
Lévy flight
List of fractals by Hausdorff dimension
Lorenz attractor
Lyapunov fractal
Mandelbrot set
Menger sponge
Minkowski–Bouligand dimension
Multifractal analysis
Olbers' paradox
Perlin noise
Power law
Rectifiable curve
Scale-free network
Self-similarity
Sierpinski carpet
Sierpiński curve
Sierpinski triangle
Space-filling curve
T-square (fractal)
Topological dimension
|
https://en.wikipedia.org/wiki/Timeline%20of%20discovery%20of%20Solar%20System%20planets%20and%20their%20moons
|
The timeline of discovery of Solar System planets and their natural satellites charts the progress of the discovery of new bodies over history. Each object is listed in chronological order of its discovery (multiple dates occur when the moments of imaging, observation, and publication differ), identified through its various designations (including temporary and permanent schemes), and the discoverer(s) listed.
Historically the naming of moons did not always match the times of their discovery. Traditionally, the discoverer enjoys the privilege of naming the new object; however, some neglected to do so (E. E. Barnard stated he would "defer any suggestions as to a name" [for Amalthea] "until a later paper" but never got around to picking one from the numerous suggestions he received) or actively declined (S. B. Nicholson stated "Many have asked what the new satellites [Lysithea and Carme] are to be named. They will be known only by the numbers X and XI, written in Roman numerals, and usually prefixed by the letter J to identify them with Jupiter."). The issue arose nearly as soon as planetary satellites were discovered: Galileo referred to the four main satellites of Jupiter using numbers while the names suggested by his rival Simon Marius gradually gained universal acceptance. The International Astronomical Union (IAU) eventually started officially approving names in the late 1970s. With the explosion of discoveries in the 21st century, new moons have once again started to be left unnamed even after their numbering, beginning with Jupiter LI and Jupiter LII in 2010.
Key info
In the following tables, planetary satellites are indicated in bold type (e.g. Moon) while planets and dwarf planets, which directly circle the Sun, are in italic type (e.g. Earth). The Sun itself is indicated in roman type. The tables are sorted by publication/announcement date. Dates are annotated with the following symbols:
i: for date of first imaging (photography, etc.);
o: for date of fir
|
https://en.wikipedia.org/wiki/Mathematical%20Models%20%28Fischer%29
|
Mathematical Models: From the Collections of Universities and Museums – Photograph Volume and Commentary is a book on the physical models of concepts in mathematics that were constructed in the 19th century and early 20th century and kept as instructional aids at universities. It credits Gerd Fischer as editor, but its photographs of models are also by Fischer. It was originally published by Vieweg+Teubner Verlag for their bicentennial in 1986, both in German (titled Mathematische Modelle. Aus den Sammlungen von Universitäten und Museen. Mit 132 Fotografien. Bildband und Kommentarband) and (separately) in English translation, in each case as a two-volume set with one volume of photographs and a second volume of mathematical commentary. Springer Spektrum reprinted it in a second edition in 2017, as a single dual-language volume.
Topics
The work consists of 132 full-page photographs of mathematical models, divided into seven categories, and seven chapters of mathematical commentary written by experts in the topic area of each category.
These categories are:
Wire and thread models, of hypercubes of various dimensions, and of hyperboloids, cylinders, and related ruled surfaces, described as "elementary analytic geometry" and explained by Fischer himself.
Plaster and wood models of cubic and quartic algebraic surfaces, including Cayley's ruled cubic surface, the Clebsch surface, Fresnel's wave surface, the Kummer surface, and the Roman surface, with commentary by W. Barth and H. Knörrer.
Wire and plaster models illustrating the differential geometry and curvature of curves and surfaces, including surfaces of revolution, Dupin cyclides, helicoids, and minimal surfaces including the Enneper surface, with commentary by M. P. do Carmo, G. Fischer, U. Pinkall, and H. Reckziegel.
Surfaces of constant width including the surface of rotation of the Reuleaux triangle and the Meissner bodies, described by J. Böhm.
Uniform star polyhedra, described by E. Quaisser.
Models of the
|
https://en.wikipedia.org/wiki/Favard%20constant
|
In mathematics, the Favard constant, also called the Akhiezer–Krein–Favard constant, of order r is defined as
K_r = \frac{4}{\pi} \sum_{k=0}^{\infty} \left[ \frac{(-1)^k}{2k+1} \right]^{r+1} .
This constant is named after the French mathematician Jean Favard, and after the Soviet mathematicians Naum Akhiezer and Mark Krein.
Particular values
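The first few values follow from the series definition above (standard values, stated here in brief):

K_0 = 1, \qquad K_1 = \frac{\pi}{2}, \qquad K_2 = \frac{\pi^2}{8}, \qquad K_3 = \frac{\pi^3}{24} .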
Uses
This constant is used in solutions of several extremal problems, for example
Favard's constant is the sharp constant in Jackson's inequality for trigonometric polynomials
the sharp constants in the Landau–Kolmogorov inequality are expressed via Favard's constants
Norms of periodic perfect splines.
|
https://en.wikipedia.org/wiki/Umbilic%20torus
|
The umbilic torus or umbilic bracelet is a single-edged 3-dimensional shape. The lone edge goes three times around the ring before returning to the starting point. The shape also has a single external face. A cross section of the surface forms a deltoid.
The umbilic torus occurs in the mathematical subject of singularity theory, in particular in the classification of umbilical points which are determined by real cubic forms . The equivalence classes of such cubics form a three-dimensional real projective space and the subset of parabolic forms define a surface – the umbilic torus. Christopher Zeeman named this set the umbilic bracelet in 1976.
The torus is defined by the following set of parametric equations.
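One commonly quoted parametrization, for −π ≤ u ≤ π and −π ≤ v ≤ π, is given below; the specific constants follow the standard form of this surface and should be treated as one choice among several.

x = \sin u \left( 7 + \cos\left(\tfrac{u}{3} - 2v\right) + 2\cos\left(\tfrac{u}{3} + v\right) \right),
y = \cos u \left( 7 + \cos\left(\tfrac{u}{3} - 2v\right) + 2\cos\left(\tfrac{u}{3} + v\right) \right),
z = \sin\left(\tfrac{u}{3} - 2v\right) + 2\sin\left(\tfrac{u}{3} + v\right) .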
In sculpture
John Robinson created a sculpture, Eternity, based on the shape in 1989; it had a triangular cross-section rather than the deltoid of a true umbilic bracelet. This appeared on the cover of Geometric Differentiation by Ian R. Porteous.
Helaman Ferguson has created a 27-inch (69 centimeters) bronze sculpture, Umbilic Torus, and it is his most widely known piece of art. In 2010, it was announced that Jim Simons had commissioned an Umbilic Torus sculpture to be constructed outside the Math and Physics buildings at Stony Brook University, in proximity to the Simons Center for Geometry and Physics. The torus is made out of cast bronze, and is mounted on a stainless steel column. The total weight of the sculpture is 65 tonnes, and has a height of . The torus has a diameter of , the same diameter as the granite base. Various mathematical formulas defining the torus are inscribed on the base. Installation was completed in September, 2012.
In literature
In the short story What Dead Men Tell by Theodore Sturgeon, the main action takes place in a seemingly endless corridor with the cross section of an equilateral triangle. At the end the protagonist speculates that the corridor is actually a triangular shape twisted back on itself like a Möbius strip but
|
https://en.wikipedia.org/wiki/Chip%20creep
|
Chip creep refers to the problem of an integrated circuit (chip) working its way out of its socket over time. This was mainly an issue in early PCs.
Chip creep occurs due to thermal expansion, which is expansion and contraction as the system heats up and cools down. It can also occur due to vibration. While chip creep was most common with older memory modules, it was also a problem with CPUs and other main chips that were inserted into sockets. An example is the Apple III, where its CPU would be dislodged and the user would need to reseat the chips.
To fix chip creep, users of older systems would often have to remove the case cover and push the loose chip back into the socket. Today's computer systems are not as affected by chip creep, since chips are more securely held, either by various types of retainer clips or by being soldered into place, and since system cooling has improved.
|
https://en.wikipedia.org/wiki/National%20Documentation%20Centre%20%28Greece%29
|
The National Documentation Centre (EKT; ) is a Greek public organisation that promotes knowledge, research, innovation and digital transformation. It was established in 1980 with funding from the United Nations Development Programme with the aim to strengthen the collection and distribution of research-related material, and to ensure full accessibility to it. It has been designated as a National Scientific Infrastructure, a National Authority of the Hellenic Statistical System, and National Contact Point for European Research and Innovation Programmes. Since August 2019, it has been established as a discrete public-interest legal entity under private law, and is supervised by the Ministry of Digital Governance (Article 59 / Law 4623/2019). The management bodies of EKT are the Administrative Board and the Director who, since 2013, has been Dr. Evi Sachini.
Goals
EKT's institutional role is the collection, organisation, documentation, digital preservation and dissemination of scientific, research and cultural information, content and data produced in Greece. EKT's specific objectives, as stated on its official website, include, among others:
Ensuring the dissemination of the country's scientific output.
Meeting the needs of academia, policymakers and research and business communities for information and reliable data.
Increasing the digital scientific and cultural content that is available in a user-friendly form and with legitimate rights of use for different target groups.
Promoting open access to publications and data in the academic and research communities.
Collaboration with academic libraries for the standardization in organising and distributing metadata and digital scientific content.
Collaboration and joint actions with libraries, archives, museums, scientific and cultural institutions which produce and manage content, focusing on the establishment of common interoperability standards and the availability of metadata and digital content.
Provi
|
https://en.wikipedia.org/wiki/Neurochip
|
A neurochip is an integrated circuit chip (such as a microprocessor) that is designed for interaction with neuronal cells.
Formation
It is made of silicon doped in such a way that it contains EOSFETs (electrolyte-oxide-semiconductor field-effect transistors), which can sense the electrical activity of the neurons (action potentials) in the overlying physiological electrolyte solution. It also contains capacitors for the electrical stimulation of the neurons. Scientists at the University of Calgary's Faculty of Medicine, led by the Pakistani-born Canadian scientist Naweed Syed, who proved it is possible to cultivate a network of brain cells that reconnect on a silicon chip (the "brain on a microchip"), have developed new technology that monitors brain cell activity at a resolution never achieved before.
Developed with the National Research Council Canada (NRC), the new silicon chips are also simpler to use, which will help future understanding of how brain cells work under normal conditions and permit drug discoveries for a variety of neurodegenerative diseases, such as Alzheimer's and Parkinson's.
Naweed Syed's lab cultivated brain cells on a microchip.
The new technology from the lab of Naweed Syed, in collaboration with the NRC, was published online in August 2010, in the journal, Biomedical Devices. It is the world's first neurochip. It is based on Syed's earlier experiments on neurochip technology dating back to 2003.
"This technical breakthrough means we can track subtle changes in brain activity at the level of ion channels and synaptic potentials, which are also the most suitable target sites for drug development in neurodegenerative diseases and neuropsychological disorders," says Syed, professor and head of the Department of Cell Biology and Anatomy, member of the Hotchkiss Brain Institute and advisor to the Vice President Research on Biomedical Engineering Initiative of the University of Chicago.
The new neurochips are also automated, meaning that an
|
https://en.wikipedia.org/wiki/Non-Quasi%20Static%20model
|
Non-Quasi Static model (NQS) is a transistor model used in analogue/mixed signal IC design. It becomes necessary to use an NQS model when the operational frequency of the device is in the range of its transit time. Normally, in a quasi-static (QS) model, voltage changes in the MOS transistor channel are assumed to be instantaneous. However, in an NQS model voltage changes relating to charge carriers are delayed.
|
https://en.wikipedia.org/wiki/Trust%20boundary
|
Trust boundary is a term used in computer science and security which describes a boundary where program data or execution changes its level of "trust," or where two principals with different capabilities exchange data or commands. The term refers to any distinct boundary within which all sub-systems of a system (including data) have equal trust. An example of an execution trust boundary would be where an application attains an increased privilege level (such as root). A data trust boundary is a point where data comes from an untrusted source, for example user input or a network socket.
A "trust boundary violation" refers to a vulnerability where computer software trusts data that has not been validated before crossing a boundary.
|
https://en.wikipedia.org/wiki/Range%20state
|
Range state is a term generally used in zoogeography and conservation biology to refer to any nation that exercises jurisdiction over any part of a range which a particular species, taxon or biotope inhabits, or crosses or overflies at any time on its normal migration route. The term is often expanded to also include, particularly in international waters, any nation with vessels flying their flag that engage in exploitation (e.g. hunting, fishing, capturing) of that species. Countries in which a species occurs only as a vagrant or ‘accidental’ visitor outside of its normal range or migration route are not usually considered range states.
Because governmental conservation policy is often formulated on a national scale, and because in most countries, both governmental and private conservation organisations are also organised at the national level, the range state concept is often used by international conservation organizations in formulating their conservation and campaigning policy.
An example of one such organization is the Convention on the Conservation of Migratory Species of Wild Animals (CMS, or the “Bonn Convention”). It is a multilateral treaty focusing on the conservation of critically endangered and threatened migratory species, their habitats and their migration routes. Because such habitats and/or migration routes may span national boundaries, conservation efforts are less likely to succeed without the cooperation, participation, and coordination of each of the range states.
External links
Bonn Convention (CMS) — Text of Convention Agreement
Bonn Convention (CMS): List of Range States for Critically Endangered Migratory Species
|
https://en.wikipedia.org/wiki/List%20of%20abstract%20algebra%20topics
|
Abstract algebra is the subject area of mathematics that studies algebraic structures, such as groups, rings, fields, modules, vector spaces, and algebras. The phrase abstract algebra was coined at the turn of the 20th century to distinguish this area from what was normally referred to as algebra, the study of the rules for manipulating formulae and algebraic expressions involving unknowns and real or complex numbers, often now called elementary algebra. The distinction is rarely made in more recent writings.
Basic language
Algebraic structures are defined primarily as sets with operations.
Algebraic structure
Subobjects: subgroup, subring, subalgebra, submodule etc.
Binary operation
Closure of an operation
Associative property
Distributive property
Commutative property
Unary operator
Additive inverse, multiplicative inverse, inverse element
Identity element
Cancellation property
Finitary operation
Arity
Structure preserving maps called homomorphisms are vital in the study of algebraic objects.
Homomorphisms
Kernels and cokernels
Image and coimage
Epimorphisms and monomorphisms
Isomorphisms
Isomorphism theorems
There are several basic ways to combine algebraic objects of the same type to produce a third object of the same type. These constructions are used throughout algebra.
Direct sum
Direct limit
Direct product
Inverse limit
Quotient objects: quotient group, quotient ring, quotient module etc.
Tensor product
Advanced concepts:
Category theory
Category of groups
Category of abelian groups
Category of rings
Category of modules (over a fixed ring)
Morita equivalence, Morita duality
Category of vector spaces
Homological algebra
Filtration (algebra)
Exact sequence
Functor
Zorn's lemma
Semigroups and monoids
Semigroup
Subsemigroup
Free semigroup
Green's relations
Inverse semigroup (or inversion semigroup)
Krohn–Rhodes theory
Semigroup algebra
Transformation semigroup
Monoid
Aperiodic monoid
Free monoid
Monoid (category theory)
Monoid factorisation
Syntacti
|
https://en.wikipedia.org/wiki/High%20throughput%20biology
|
High throughput biology (or high throughput cell biology) is the use of automation equipment with classical cell biology techniques to address biological questions that are otherwise unattainable using conventional methods. It may incorporate techniques from optics, chemistry, biology or image analysis to permit rapid, highly parallel research into how cells function, interact with each other and how pathogens exploit them in disease.
High throughput cell biology has many definitions, but is most commonly defined by the search for active compounds in natural materials such as medicinal plants. This is also known as high throughput screening (HTS) and is how most drug discoveries are made today; many cancer drugs, antibiotics, and viral antagonists have been discovered using HTS. The process of HTS also tests substances for potentially harmful chemicals that could pose human health risks. HTS generally involves hundreds of samples of cells with the model disease and hundreds of different compounds being tested from a specific source. Most often a computer is used to determine when a compound of interest has a desired or interesting effect on the cell samples.
The use of this method contributed to the discovery of the drug Sorafenib (Nexavar). Sorafenib is used as medication to treat multiple types of cancers, including renal cell carcinoma (RCC, cancer in the kidneys), hepatocellular carcinoma (liver cancer), and thyroid cancer. It helps stop cancer cells from reproducing by blocking the abnormal proteins present. In 1994, high throughput screening for this particular drug was completed. It was initially discovered by Bayer Pharmaceuticals in 2001. By using a RAF kinase biochemical assay, 200,000 compounds were screened from medicinal chemistry directed synthesis or combinatorial libraries to identify active molecules against active RAF kinase. Following three trials of testing, it was found to have anti-angiogenic effects on the cancers, which stops the proc
|
https://en.wikipedia.org/wiki/Geothrix%20fermentans
|
Geothrix fermentans is a rod-shaped, anaerobic bacterium. It is about 0.1 µm in diameter and ranges from 2-3 µm in length. Cell arrangement occurs singly and in chains. Geothrix fermentans can normally be found in aquatic sediments such as in aquifers. As an anaerobic chemoorganotroph, this organism is best known for its ability to use electron acceptors Fe(III), as well as other high potential metals. It also uses a wide range of substrates as electron donors. Research on metal reduction by G. fermentans has contributed to understanding more about the geochemical cycling of metals in the environment.
Taxonomy history
Geothrix fermentans was isolated from metal-contaminated waters of an aquifer in 1999 by John D. Coates from Southern Illinois University and by others from the University of Massachusetts. The novel strain was originally named "Strain H-5T". After classifying its metabolism and confirming the presence and number of c-type cytochromes, Coates et al. proposed that the novel organism belongs to the newly recognized (1991) Holophaga–Acidobacterium phylum. Coates et al. also proposed a new name for the organism: "Geothrix", Greek for a hair-like cell that comes from the Earth, and "fermentans", Latin for "fermenting".
Phylogeny
Approaches based on 16S rRNA gene sequence comparison have allowed for detailed analyses of the affiliations of many bacterial groups. The phylogenetic affiliation of Geothrix fermentans as well as other soil bacteria such as Acidobacterium capsulatum and Holophaga foetida had not been established at the time of their initial isolation. More recent analysis of 16S rRNA sequence data showed moderate similarity between these three genera, supporting the likelihood that they may have differentiated from a common ancestor.
Biology
Geothrix fermentans is a rod-shaped strict anaerobe that can be found in aquatic soils in the Fe(III) reduction zone. As a strict anaerobe G. fermentans cannot grow in the presence of atmospheric oxygen that may be
|
https://en.wikipedia.org/wiki/Matriphagy
|
Matriphagy is the consumption of the mother by her offspring. The behavior generally takes place within the first few weeks of life and has been documented in some species of insects, nematode worms, pseudoscorpions, and other arachnids as well as in caecilian amphibians.
The specifics of how matriphagy occurs varies among different species. However, the process is best described in the Desert spider, Stegodyphus lineatus, where the mother harbors nutritional resources for her young through food consumption. The mother can regurgitate small portions of food for her growing offspring, but between 1–2 weeks after hatching the progeny capitalize on this food source by eating her alive. Typically, offspring only feed on their biological mother as opposed to other females in the population. In other arachnid species, matriphagy occurs after the ingestion of nutritional eggs known as trophic eggs (e.g. Black lace-weaver Amaurobius ferox, Crab spider Australomisidia ergandros). It involves different techniques for killing the mother, such as transfer of poison via biting and sucking to cause a quick death (e.g. Black lace-weaver) or continuous sucking of the hemolymph, resulting in a more gradual death (e.g. Crab spider). The behavior is less well described but follows a similar pattern in species such as the Hump earwig, pseudoscorpions, and caecilians.
Spiders that engage in matriphagy produce offspring with higher weights, shorter and earlier moulting time, larger body mass at dispersal, and higher survival rates than clutches deprived of matriphagy. In some species, matriphagous offspring were also more successful at capturing large prey items and had a higher survival rate at dispersal. These benefits to offspring outweigh the cost of survival to the mothers and help ensure that her genetic traits are passed to the next generation, thus perpetuating the behavior.
Overall, matriphagy is an extreme form of parental care but is highly related to extended care in the F
|
https://en.wikipedia.org/wiki/List%20of%20textbooks%20in%20electromagnetism
|
The study of electromagnetism in higher education, as a fundamental part of both physics and engineering, is typically accompanied by textbooks devoted to the subject. The American Physical Society and the American Association of Physics Teachers recommend a full year of graduate study in electromagnetism for all physics graduate students. A joint task force by those organizations in 2006 found that in 76 of the 80 US physics departments surveyed, a course using John David Jackson's Classical Electrodynamics was required for all first year graduate students. For undergraduates, there are several widely used textbooks, including David Griffiths' Introduction to Electrodynamics and Electricity and Magnetism by Edward Mills Purcell and D. J. Morin. Also at an undergraduate level, Richard Feynman's classic The Feynman Lectures on Physics is available online to read for free.
Undergraduate
There are several widely used undergraduate textbooks in electromagnetism, including David Griffiths' Introduction to Electrodynamics as well as Electricity and Magnetism by Edward Mills Purcell and D. J. Morin. The Feynman Lectures on Physics also include a volume on electromagnetism that is available to read online for free, through the California Institute of Technology. In addition, there are popular physics textbooks that include electricity and magnetism among the material they cover, such as David Halliday and Robert Resnick's Fundamentals of Physics.
Graduate
A 2006 report by a joint taskforce between the American Physical Society and the American Association of Physics Teachers found that 76 of the 80 physics departments surveyed require a first-year graduate course in John David Jackson's Classical Electrodynamics. This made Jackson's book the most popular textbook in any field of graduate-level physics, with Herbert Goldstein's Classical Mechanics as the second most popular with adoption at 48 universities. In a 2015 review of Andrew Zangwill's Modern Electrodynamics in
|
https://en.wikipedia.org/wiki/Iverson%20bracket
|
In mathematics, the Iverson bracket, named after Kenneth E. Iverson, is a notation that generalises the Kronecker delta, which is the Iverson bracket of the statement x = y. It maps any statement to a function of the free variables in that statement. This function is defined to take the value 1 for the values of the variables for which the statement is true, and takes the value 0 otherwise. It is generally denoted by putting the statement inside square brackets, written [P] for a statement P:
\[
[P] = \begin{cases} 1 & \text{if } P \text{ is true;} \\ 0 & \text{otherwise.} \end{cases}
\]
In other words, the Iverson bracket of a statement is the indicator function of the set of values for which the statement is true.
The Iverson bracket allows using capital-sigma notation without restriction on the summation index. That is, for any property \(P(k)\) of the integer \(k\), one can rewrite the restricted sum \(\sum_{k : P(k)} f(k)\) in the unrestricted form \(\sum_k f(k)\,[P(k)]\). With this convention, \(f(k)\) does not need to be defined for the values of \(k\) for which the Iverson bracket equals 0; that is, a summand \(f(k)[\text{false}]\) must evaluate to 0 regardless of whether \(f(k)\) is defined.
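For instance, taking the property to be the range restriction \(1 \le k \le 100\) (the bound 100 is an arbitrary illustration):
\[
\sum_{1 \le k \le 100} f(k) \;=\; \sum_{k} f(k)\,[1 \le k \le 100].
\]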
The notation was originally introduced by Kenneth E. Iverson in his programming language APL, though restricted to single relational operators enclosed in parentheses, while the generalisation to arbitrary statements, notational restriction to square brackets, and applications to summation, was advocated by Donald Knuth to avoid ambiguity in parenthesized logical expressions.
Properties
There is a direct correspondence between arithmetic on Iverson brackets, logic, and set operations. For instance, let A and B be sets and any property of integers; then we have
Examples
The notation allows moving boundary conditions of summations (or integrals) as a separate factor into the summand, freeing up space around the summation operator, but more importantly allowing it to be manipulated algebraically.
Double-counting rule
We mechanically derive a well-known sum manipulation rule using Iverson brackets:
Summation interchange
The well-known rule is likewise easily derived:
Counting
For instance, the
|
https://en.wikipedia.org/wiki/Khinchin%27s%20constant
|
In number theory, Aleksandr Yakovlevich Khinchin proved that for almost all real numbers x, coefficients ai of the continued fraction expansion of x have a finite geometric mean that is independent of the value of x and is known as Khinchin's constant.
That is, for
\[
x = a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cdots}}}
\]
it is almost always true that
\[
\lim_{n \to \infty} \left( \prod_{i=1}^{n} a_i \right)^{1/n} = K_0 \approx 2.6854520010\ldots,
\]
where K_0 is Khinchin's constant
(with \(\prod\) denoting the product over all sequence terms).
Although almost all numbers satisfy this property, it has not been proven for any real number not specifically constructed for the purpose. Among the numbers whose continued fraction expansions apparently do have this property (based on numerical evidence) are π, the Euler-Mascheroni constant γ, Apéry's constant ζ(3), and Khinchin's constant itself. However, this is unproven.
Among the numbers x whose continued fraction expansions are known not to have this property are rational numbers, roots of quadratic equations (including the golden ratio Φ and the square roots of integers), and the base of the natural logarithm e.
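As a rough numerical illustration (not part of any proof), the sketch below computes the geometric mean of the first partial quotients of a number's continued fraction; the choice of π and of 20 terms is arbitrary, and ordinary double-precision arithmetic only yields a limited number of reliable coefficients.

    # Geometric mean of the first continued fraction coefficients of a number.
    # For almost all real numbers this quantity approaches Khinchin's constant
    # (about 2.685452) as the number of coefficients grows, though convergence is slow.
    from math import floor, exp, log, pi

    def continued_fraction_coefficients(x, n):
        """Return up to n partial quotients a1, a2, ... of x (a0 is discarded)."""
        coefficients = []
        x = x - floor(x)                 # drop a0 and keep the fractional part
        for _ in range(n):
            if x == 0:                   # the floating-point approximation terminated
                break
            x = 1.0 / x                  # one step of the continued fraction expansion
            a = floor(x)
            coefficients.append(a)
            x = x - a
        return coefficients

    def geometric_mean(values):
        return exp(sum(log(v) for v in values) / len(values))

    print(geometric_mean(continued_fraction_coefficients(pi, 20)))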
Khinchin is sometimes spelled Khintchine (the French transliteration of Russian Хинчин) in older mathematical literature.
Sketch of proof
The proof presented here was arranged by Czesław Ryll-Nardzewski and is much simpler than Khinchin's original proof which did not use ergodic theory.
Since the first coefficient a0 of the continued fraction of x plays no role in Khinchin's theorem and since the rational numbers have Lebesgue measure zero, we are reduced to the study of irrational numbers in the unit interval, i.e., those in I = [0, 1] ∖ ℚ. These numbers are in bijection with infinite continued fractions of the form [0; a1, a2, ...], which we simply write [a1, a2, ...], where a1, a2, ... are positive integers. Define a transformation T: I → I by
\[
T(x) = \frac{1}{x} - \left\lfloor \frac{1}{x} \right\rfloor ,
\]
which sends [a1, a2, a3, ...] to [a2, a3, ...].
The transformation T is called the Gauss–Kuzmin–Wirsing operator. For every Borel subset E of I, we also define the Gauss–Kuzmin measure of E,
\[
\mu(E) = \frac{1}{\ln 2} \int_E \frac{dx}{1+x} .
\]
Then μ is a probability measure on the σ-algebra of Borel subsets of I. The measure μ is equ
|
https://en.wikipedia.org/wiki/Mason%27s%20invariant
|
In electronics, Mason's invariant, named after Samuel Jefferson Mason, is a measure of the quality of transistors.
"When trying to solve a seemingly difficult problem, Sam said to concentrate on the easier ones first; the rest, including the hardest ones, will follow," recalled Andrew Viterbi, co-founder and former vice-president of Qualcomm. He had been a thesis advisee under Samuel Mason at MIT, and this was one lesson he especially remembered from his professor. A few years earlier, Mason had heeded his own advice when he defined a unilateral power gain for a linear two-port device, or U. After concentrating on easier problems with power gain in feedback amplifiers, a figure of merit for all three-terminal devices followed that is still used today as Mason's Invariant.
Origin
In 1953, transistors were only five years old, and they were the only successful solid-state three-terminal active device. They were beginning to be used for RF applications, and they were limited to VHF frequencies and below. Mason wanted to find a figure of merit to compare transistors, and this led him to discover that the unilateral power gain of a linear two-port device was an invariant figure of merit.
In his paper Power Gain in Feedback Amplifiers published in 1953, Mason stated in his introduction, "A vacuum tube, very often represented as a simple transconductance driving a passive impedance, may lead to relatively simple amplifier designs in which the input impedance (and hence the power gain) is effectively infinite, the voltage gain is the quantity of interest, and the input circuit is isolated from the load. The transistor, however, usually cannot be characterized so easily." He wanted to find a metric to characterize and measure the quality of transistors since up until then, no such measure existed. His discovery turned out to have applications beyond transistors.
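In practice the figure of merit is evaluated directly from measured two-port parameters. The sketch below uses the commonly quoted expression for U in terms of admittance (Y) parameters; the numerical values are invented placeholders rather than data from Mason's paper.

    # Mason's unilateral power gain U from the Y-parameters of a linear two-port:
    #     U = |y21 - y12|^2 / (4 * (Re(y11)*Re(y22) - Re(y12)*Re(y21)))

    def masons_u(y11, y12, y21, y22):
        numerator = abs(y21 - y12) ** 2
        denominator = 4.0 * (y11.real * y22.real - y12.real * y21.real)
        return numerator / denominator

    # Hypothetical Y-parameters (in siemens) at a single frequency:
    print(masons_u(y11=0.02 + 0.005j, y12=-0.0005j, y21=0.05 - 0.01j, y22=0.001 + 0.002j))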
Derivation of U
Mason first defined the device being studied with the three constraints listed below.
T
|
https://en.wikipedia.org/wiki/Viewpoints%3A%20Mathematical%20Perspective%20and%20Fractal%20Geometry%20in%20Art
|
Viewpoints: Mathematical Perspective and Fractal Geometry in Art is a textbook on mathematics and art. It was written by mathematicians Marc Frantz and Annalisa Crannell, and published in 2011 by the Princeton University Press. The Basic Library List Committee of the Mathematical Association of America has recommended it for inclusion in undergraduate mathematics libraries.
Topics
The first seven chapters of the book concern perspectivity, while its final two concern fractals and their geometry. Topics covered within the chapters on perspectivity include coordinate systems for the plane and for Euclidean space, similarity, angles, and orthocenters, one-point and multi-point perspective, and anamorphic art. In the fractal chapters, the topics include self-similarity, exponentiation, and logarithms, and fractal dimension. Beyond this mathematical material, the book also describes methods for artists to depict scenes in perspective, and for viewers of art to understand the perspectives in the artworks they see, for instance by finding the optimal point from which to view an artwork. The chapters are ordered by difficulty, and begin with experiments that the students can perform on their own to motivate the material in each chapter.
The book is heavily illustrated by artworks and photography (such as the landscapes of Ansel Adams) and includes a series of essays or interviews by contemporary artists on the mathematical content of their artworks.
An appendix contains suggestions aimed at teachers of this material.
Audience and reception
Viewpoints is intended as a textbook for mathematics classes aimed at undergraduate liberal arts students, as a way to show these students how geometry can be used in their everyday life. However, it could even be used for high school art students,
and reviewer Paul Kelley writes that "it will be of value to anyone interested in an elementary introduction to the mathematics and practice of perspective drawing". It differs from many
|
https://en.wikipedia.org/wiki/List%20of%20undecidable%20problems
|
In computability theory, an undecidable problem is a type of computational problem that requires a yes/no answer, but where there cannot possibly be any computer program that always gives the correct answer; that is, any possible program would sometimes give the wrong answer or run forever without giving any answer. More formally, an undecidable problem is a problem whose language is not a recursive set; see the article Decidable language. There are uncountably many undecidable problems, so the list below is necessarily incomplete. Though undecidable languages are not recursive languages, they may be subsets of Turing recognizable languages: i.e., such undecidable languages may be recursively enumerable.
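The flavour of such results can be seen in a small self-referential sketch: if a total halting checker existed, the program below would contradict it. The halts function here is only a placeholder so that the fragment is well-formed Python; the point is that no correct implementation of it can exist.

    # Sketch of the diagonal argument behind the halting problem.

    def halts(program, argument):
        """Pretend oracle: claims to decide whether program(argument) halts."""
        return True                      # placeholder guess; it cannot always be right

    def paradox(program):
        if halts(program, program):
            while True:                  # loop forever exactly when halts() predicts halting
                pass
        return "halted"

    # Consider paradox(paradox): if halts(paradox, paradox) were True, the call would
    # loop forever; if it were False, the call would halt. Either way halts() is wrong.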
Many, if not most, undecidable problems in mathematics can be posed as word problems: determining when two distinct strings of symbols (encoding some mathematical concept or object) represent the same object or not.
For undecidability in axiomatic mathematics, see List of statements undecidable in ZFC.
Problems in logic
Hilbert's Entscheidungsproblem.
Type inference and type checking for the second-order lambda calculus (or equivalent).
Determining whether a first-order sentence in the logic of graphs can be realized by a finite undirected graph.
Trakhtenbrot's theorem - Finite satisfiability is undecidable.
Satisfiability of first order Horn clauses.
Problems about abstract machines
The halting problem (determining whether a Turing machine halts on a given input) and the mortality problem (determining whether it halts for every starting configuration).
Determining whether a Turing machine is a busy beaver champion (i.e., is the longest-running among halting Turing machines with the same number of states and symbols).
Rice's theorem states that for all nontrivial properties of partial functions, it is undecidable whether a given machine computes a partial function with that property.
The halting problem for a Minsky machine: a finite-state automaton w
|
https://en.wikipedia.org/wiki/Upsampling
|
In digital signal processing, upsampling, expansion, and interpolation are terms associated with the process of resampling in a multi-rate digital signal processing system. Upsampling can be synonymous with expansion, or it can describe an entire process of expansion and filtering (interpolation). When upsampling is performed on a sequence of samples of a signal or other continuous function, it produces an approximation of the sequence that would have been obtained by sampling the signal at a higher rate (or density, as in the case of a photograph). For example, if compact disc audio at 44,100 samples/second is upsampled by a factor of 5/4, the resulting sample-rate is 55,125.
Upsampling by an integer factor
Rate increase by an integer factor L can be explained as a 2-step process, with an equivalent implementation that is more efficient:
Expansion: Create a sequence, comprising the original samples, separated by L − 1 zeros. A notation for this operation is:
Interpolation: Smooth out the discontinuities with a lowpass filter, which replaces the zeros.
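As a concrete illustration of this two-step process, the sketch below zero-stuffs a short signal and applies a small FIR interpolation filter; the triangular taps correspond to linear interpolation for L = 2 and are chosen for readability rather than as a serious filter design.

    # Upsampling by an integer factor L: expansion (zero-stuffing) followed by
    # an FIR interpolation filter applied by direct convolution. A polyphase
    # implementation would skip the inserted zeros instead of multiplying by them.

    def upsample(signal, L, h):
        expanded = []
        for sample in signal:
            expanded.append(sample)
            expanded.extend([0.0] * (L - 1))      # insert L - 1 zeros after each input sample
        output = []
        for n in range(len(expanded)):
            acc = 0.0
            for k, tap in enumerate(h):
                if 0 <= n - k < len(expanded):
                    acc += tap * expanded[n - k]
            output.append(acc)
        return output

    h = [0.5, 1.0, 0.5]                           # linear-interpolation kernel for L = 2
    print(upsample([1.0, 2.0, 3.0, 4.0], 2, h))   # original samples with averages in between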
In this application, the filter is called an interpolation filter, and its design is discussed below. When the interpolation filter is an FIR type, its efficiency can be improved, because the zeros contribute nothing to its dot product calculations. It is an easy matter to omit them from both the data stream and the calculations. The calculation performed by a multirate interpolating FIR filter for each output sample is a dot product:
where the h[•] sequence is the impulse response of the interpolation filter, and K is the largest value of k for which h[j + kL] is non-zero. In the case L = 2, h[•] can be designed as a half-band filter, where almost half of the coefficients are zero and need not be included in the dot products. Impulse response coefficients taken at intervals of L form a subsequence, and there are L such subsequences (called phases) multiplexed together. Each of L phases of the impulse respons
|
https://en.wikipedia.org/wiki/Food%20Weekly%20News
|
Food Weekly News is a weekly food science and agricultural newspaper reporting on the latest developments in research in food production. It is published by Vertical News, an imprint of NewsRx, LLC.
External links
Articles on HighBeam Research
Food science
Newspapers published in Atlanta
Agricultural magazines
Weekly newspapers published in the United States
|
https://en.wikipedia.org/wiki/Random%20Fibonacci%20sequence
|
In mathematics, the random Fibonacci sequence is a stochastic analogue of the Fibonacci sequence defined by the recurrence relation t_n = t_{n−1} ± t_{n−2}, where the signs + or − are chosen at random with equal probability 1/2, independently for different n. By a theorem of Harry Kesten and Hillel Furstenberg, random recurrent sequences of this kind grow at a certain exponential rate, but it is difficult to compute the rate explicitly. In 1999, Divakar Viswanath showed that the growth rate of the random Fibonacci sequence is equal to 1.1319882487943..., a mathematical constant that was later named Viswanath's constant.
Description
A random Fibonacci sequence is an integer random sequence given by the numbers t_n for natural numbers n, where t_1 = t_2 = 1 and the subsequent terms are chosen randomly according to the random recurrence relation t_n = t_{n−1} ± t_{n−2}.
An instance of the random Fibonacci sequence starts with 1,1 and the value of each subsequent term is determined by a fair coin toss: given two consecutive elements of the sequence, the next element is either their sum or their difference with probability 1/2, independently of all the choices made previously. If in the random Fibonacci sequence the plus sign is chosen at each step, the corresponding instance is the Fibonacci sequence (Fn), 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...
If the signs alternate in minus-plus-plus-minus-plus-plus-... pattern, the result is the sequence 1, 1, 0, 1, 1, 0, 1, 1, 0, ...
However, such patterns occur with vanishing probability in a random experiment. In a typical run, the terms will not follow a predictable pattern:
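A short simulation (an illustration only) shows one such run and the resulting growth estimate; the run length and the use of a single sample path are arbitrary choices, so the printed value is only a rough approximation of Viswanath's constant.

    # Simulate one sample path of the random Fibonacci sequence and estimate
    # its growth rate |t_n|^(1/n), which almost surely tends to about 1.13198824.
    import random
    from math import exp, log

    def random_fibonacci(n):
        t = [1, 1]
        for _ in range(n - 2):
            sign = random.choice([1, -1])          # fair coin: sum or difference
            t.append(t[-1] + sign * t[-2])
        return t

    sequence = random_fibonacci(10000)
    last_nonzero = next(v for v in reversed(sequence) if v != 0)
    print(exp(log(abs(last_nonzero)) / len(sequence)))   # rough growth-rate estimate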
Similarly to the deterministic case, the random Fibonacci sequence may be profitably described via matrices:
\[
\begin{pmatrix} t_{n-1} \\ t_n \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ \pm 1 & 1 \end{pmatrix} \begin{pmatrix} t_{n-2} \\ t_{n-1} \end{pmatrix},
\]
where the signs are chosen independently for different n with equal probabilities for + or −. Thus
\[
\begin{pmatrix} t_{n-1} \\ t_n \end{pmatrix} = M_n M_{n-1} \cdots M_3 \begin{pmatrix} t_1 \\ t_2 \end{pmatrix},
\]
where (Mk) is a sequence of independent identically distributed random matrices taking values A or B with probability 1/2:
\[
A = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 1 \\ -1 & 1 \end{pmatrix}.
\]
Growth rate
Johannes Kepler discovered that as n increases, the ratio of the successive terms of the Fibonacci sequence (Fn) approaches the golden ratio wh
|
https://en.wikipedia.org/wiki/Jacobian
|
In mathematics, a Jacobian, named for Carl Gustav Jacob Jacobi, may refer to:
Jacobian matrix and determinant
Jacobian elliptic functions
Jacobian variety
Intermediate Jacobian
Mathematical terminology
|
https://en.wikipedia.org/wiki/Comparison%20of%20instruction%20set%20architectures
|
An instruction set architecture (ISA) is an abstract model of a computer, also referred to as computer architecture. A realization of an ISA is called an implementation. An ISA permits multiple implementations that may vary in performance, physical size, and monetary cost (among other things). Because the ISA serves as the interface between software and hardware, software that has been written for an ISA can run on different implementations of the same ISA. This has enabled binary compatibility between different generations of computers to be easily achieved, and the development of computer families. Both of these developments have helped to lower the cost of computers and to increase their applicability. For these reasons, the ISA is one of the most important abstractions in computing today.
An ISA defines everything a machine language programmer needs to know in order to program a computer. What an ISA defines differs between ISAs; in general, ISAs define the supported data types, what state there is (such as the main memory and registers) and their semantics (such as the memory consistency and addressing modes), the instruction set (the set of machine instructions that comprises a computer's machine language), and the input/output model.
Base
In the early decades of computing, there were computers that used binary, decimal and even ternary. Contemporary computers are almost exclusively binary.
Bits
Computer architectures are often described as n-bit architectures. In the 20th century, n is often 8, 16, or 32, and in the 21st century, n is often 16, 32 or 64, but other sizes have been used (including 6, 12, 18, 24, 30, 36, 39, 48, 60, 128). This is actually a simplification as computer architecture often has a few more or less "natural" data sizes in the instruction set, but the hardware implementation of these may be very different. Many instruction set architectures have instructions that, on some implementations of that instruction set architecture, operat
|
https://en.wikipedia.org/wiki/Open%20JTAG
|
The Open JTAG project is an open source project released under the GNU License.
It is a complete hardware and software JTAG reference design, based on simple hardware composed of an FTDI FT245 USB front-end and an Altera EPM570 MAX II CPLD. The capabilities of this hardware configuration make the Open JTAG device able to output TCK signals at 24 MHz using macro-instructions sent from the host end.
The aim is to give the community a JTAG device not based on the PC parallel port: Open JTAG uses the USB channel to communicate with the internal CPLD, sending macro-instructions as fast as possible. The complete project (Beta version) is available at OpenCores.org and the Open JTAG project official site.
|
https://en.wikipedia.org/wiki/List%20of%20environmental%20sampling%20techniques
|
Environmental sampling techniques are used in biology, ecology and conservation as part of scientific studies to learn about the flora and fauna of a particular area and establish a habitat's biodiversity, the abundance of species and the conditions in which these species live amongst other information. Where species are caught, researchers often then take the trapped organisms for further study in a lab or are documented by a researcher in the field before the animal is released. This information can then be used to better understand the environment, its ecology, the behaviour of species and how organisms interact with one another and their environment. Here is a list of some sampling techniques and equipment used in environmental sampling:
Quadrats - used for plants and slow moving animals
Techniques for Birds and/or Flying Invertebrates and/or Bats
Malaise Trap
Flight Interception Trap
Harp Trap
Robinson Trap
Butterfly Net
Mist Net
Techniques for Terrestrial Animals
Transect
Tullgren Funnel - used for soil-living arthropods
Pitfall Trap - used for small terrestrial animals like insects and amphibians
Netting techniques for terrestrial animals
Beating Net - used for insects dwelling in trees and shrubs
Sweep Netting - used for insects in grasses
Aspirator/Pooter - used for insects
Camera Trap - used for larger animals
Sherman Trap - used for small mammals
See also
Insect Collecting
Wildlife Biology
Sampling
Sources
Scientific method
Survey methodology
Scientific observation
Biological techniques and tools
|
https://en.wikipedia.org/wiki/Network%20simulation
|
In computer network research, network simulation is a technique whereby a software program replicates the behavior of a real network. This is achieved by calculating the interactions between the different network entities such as routers, switches, nodes, access points, links, etc. Most simulators use discrete event simulation, the modeling of systems in which state variables change at discrete points in time. The behavior of the network and the various applications and services it supports can then be observed in a test lab; various attributes of the environment can also be modified in a controlled manner to assess how the network/protocols would behave under different conditions.
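At the core of most such tools is a simple mechanism: a time-ordered event queue and a clock that jumps from one scheduled event to the next. The sketch below shows only that loop in outline; the event names are invented and do not correspond to any particular simulator's API.

    # Minimal discrete event simulation loop: pop the earliest event, advance
    # the simulated clock to its timestamp, and process it.
    import heapq

    def run(events):
        queue = list(events)                      # (time_in_seconds, description) pairs
        heapq.heapify(queue)                      # keep events ordered by timestamp
        while queue:
            clock, description = heapq.heappop(queue)
            print(f"t={clock:.3f}s  {description}")

    run([(0.002, "packet arrives at router A"),
         (0.001, "node 1 transmits a packet"),
         (0.005, "packet delivered to node 2")])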
Network simulator
A network simulator is a software program that can predict the performance of a computer network or a wireless communication network. Since communication networks have become too complex for traditional analytical methods to provide an accurate understanding of system behavior, network simulators are used. In simulators, the computer network is modeled with devices, links, applications, etc., and the network performance is reported. Simulators come with support for the most popular technologies and networks in use today such as 5G, Internet of Things (IoT), Wireless LANs, mobile ad hoc networks, wireless sensor networks, vehicular ad hoc networks, cognitive radio networks, LTE etc.
Simulations
Most of the commercial simulators are GUI driven, while some network simulators are CLI driven. The network model/configuration describes the network (nodes, routers, switches, links) and the events (data transmissions, packet error, etc.). Output results would include network-level metrics, link metrics, device metrics etc. Further, drill down in terms of simulations trace files would also be available. Trace files log every packet, every event that occurred in the simulation and is used for analysis. Most network simulators use discrete event simulation, in which a lis
|
https://en.wikipedia.org/wiki/Porter%27s%20constant
|
In mathematics, Porter's constant C arises in the study of the efficiency of the Euclidean algorithm. It is named after J. W. Porter of University College, Cardiff.
Euclid's algorithm finds the greatest common divisor of two positive integers m and n. Hans Heilbronn proved that the average number of iterations of Euclid's algorithm, for fixed n and averaged over all choices of relatively prime integers m,
is
\[
\frac{12 \ln 2}{\pi^2} \ln n + o(\ln n).
\]
Porter showed that the error term in this estimate is a constant, plus a polynomially-small correction, and Donald Knuth evaluated this constant to high accuracy. It is:
\[
C = \frac{6 \ln 2}{\pi^2} \left( 3 \ln 2 + 4\gamma - \frac{24}{\pi^2} \zeta'(2) - 2 \right) - \frac{1}{2} \approx 1.4670780794\ldots
\]
where
γ is the Euler–Mascheroni constant,
ζ is the Riemann zeta function (with ζ′ denoting its derivative), and
ζ′(2) can in turn be expressed through the Glaisher–Kinkelin constant A, via ζ′(2) = (π²/6)(γ + ln 2π − 12 ln A).
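The quantity being described can also be observed numerically: averaging the iteration count of Euclid's algorithm over m coprime to a fixed n and subtracting the logarithmic main term leaves a value that is roughly independent of n. The sketch below does this under one particular step-counting convention (a different convention shifts the measured constant by a fixed amount).

    # Empirical illustration: the average number of Euclidean divisions for
    # gcd(n, m), averaged over m coprime to n, minus (12 ln 2 / pi^2) ln n,
    # stays roughly constant as n grows.
    from math import gcd, log, pi

    def euclid_steps(a, b):
        steps = 0
        while b != 0:
            a, b = b, a % b
            steps += 1
        return steps

    def average_steps(n):
        coprime = [m for m in range(1, n) if gcd(m, n) == 1]
        return sum(euclid_steps(n, m) for m in coprime) / len(coprime)

    n = 100003                                     # any reasonably large modulus will do
    print(average_steps(n) - (12 * log(2) / pi ** 2) * log(n))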
See also
Lochs' theorem
Lévy's constant
|
https://en.wikipedia.org/wiki/Biomolecular%20engineering
|
Biomolecular engineering is the application of engineering principles and practices to the purposeful manipulation of molecules of biological origin. Biomolecular engineers integrate knowledge of biological processes with the core knowledge of chemical engineering in order to focus on molecular level solutions to issues and problems in the life sciences related to the environment, agriculture, energy, industry, food production, biotechnology and medicine.
Biomolecular engineers purposefully manipulate carbohydrates, proteins, nucleic acids and lipids within the framework of the relation between their structure (see: nucleic acid structure, carbohydrate chemistry, protein structure,), function (see: protein function) and properties and in relation to applicability to such areas as environmental remediation, crop and livestock production, biofuel cells and biomolecular diagnostics. The thermodynamics and kinetics of molecular recognition in enzymes, antibodies, DNA hybridization, bio-conjugation/bio-immobilization and bioseparations are studied. Attention is also given to the rudiments of engineered biomolecules in cell signaling, cell growth kinetics, biochemical pathway engineering and bioreactor engineering.
Timeline
History
During World War II, the need for large quantities of penicillin of acceptable quality brought together chemical engineers and microbiologists to focus on penicillin production. This created the right conditions to start a chain of reactions that led to the creation of the field of biomolecular engineering. Biomolecular engineering was first defined in 1992 by the U.S. National Institutes of Health as research "at the interface of chemical engineering and biology with an emphasis at the molecular level". Although first defined as research, biomolecular engineering has since become an academic discipline and a field of engineering practice. Herceptin, a humanized mAb for breast cancer treatment, became the first drug designed by a biomolecula
|
https://en.wikipedia.org/wiki/Datasource
|
DataSource is a name given to the connection set up to a database from a server. The name is commonly used when creating a query to the database. The data source name (DSN) need not be the same as the filename for the database. For example, a database file named friends.mdb could be set up with a DSN of school. Then DSN school would be used to refer to the database when performing a query.
Sun's version of DataSource
A DataSource object is a factory for connections to the physical data source that it represents. As an alternative to the DriverManager facility, a DataSource object is the preferred means of getting a connection. An object that implements the DataSource interface will typically be registered with a naming service based on the Java Naming and Directory Interface (JNDI) API.
The DataSource interface is implemented by a driver vendor. There are three types of implementations:
Basic implementation — produces a standard Connection object
Connection pooling implementation — produces a Connection object that will automatically participate in connection pooling. This implementation works with a middle-tier connection pooling manager.
Distributed transaction implementation — produces a Connection object that may be used for distributed transactions and almost always participates in connection pooling. This implementation works with a middle-tier transaction manager and almost always with a connection pooling manager.
A DataSource object has properties that can be modified when necessary. For example, if the data source is moved to a different server, the property for the server can be changed. The benefit is that because the data source's properties can be changed, any code accessing that data source does not need to be changed.
A driver that is accessed via a DataSource object does not register itself with the DriverManager. Rather, a DataSource object is retrieved through a lookup operation and then used to create a Connection object. With a basic implement
|
https://en.wikipedia.org/wiki/Mechanical%20calculator
|
A mechanical calculator, or calculating machine, is a mechanical device used to perform the basic operations of arithmetic automatically, or (historically) a simulation such as an analog computer or a slide rule. Most mechanical calculators were comparable in size to small desktop computers and have been rendered obsolete by the advent of the electronic calculator and the digital computer.
Surviving notes from Wilhelm Schickard in 1623 reveal that he designed and had built the earliest of the modern attempts at mechanizing calculation. His machine was composed of two sets of technologies: first an abacus made of Napier's bones (first described six years earlier, in 1617) to simplify multiplications and divisions, and, for the mechanical part, a dialed pedometer to perform additions and subtractions. A study of the surviving notes shows a machine that would have jammed after a few entries on the same dial, and that it could be damaged if a carry had to be propagated over a few digits (like adding 1 to 999). Schickard abandoned his project in 1624 and never mentioned it again until his death 11 years later in 1635.
Two decades after Schickard's supposedly failed attempt, in 1642, Blaise Pascal decisively solved these particular problems with his invention of the mechanical calculator. Co-opted into his father's labour as tax collector in Rouen, Pascal designed the calculator to help in the large amount of tedious arithmetic required; it was called Pascal's Calculator or Pascaline.
In 1672, Gottfried Leibniz started designing an entirely new machine called the Stepped Reckoner. It used a stepped drum, built by and named after him, the Leibniz wheel. It was the first two-motion calculator, the first to use cursors (creating a memory of the first operand), and the first to have a movable carriage. Leibniz built two Stepped Reckoners, one in 1694 and one in 1706. The Leibniz wheel was used in many calculating machines for 200 years, and into the 1970s with the Curta h
|
https://en.wikipedia.org/wiki/POKEY
|
POKEY, an acronym for Pot Keyboard Integrated Circuit, is a digital I/O chip designed by Doug Neubauer at Atari, Inc. for the Atari 8-bit family of home computers. It was first released with the Atari 400 and Atari 800 in 1979 and is included in all later models and the Atari 5200 console. POKEY combines functions for reading paddle controllers (potentiometers) and computer keyboards as well as sound generation and a source for pseudorandom numbers. It produces four voices of distinctive square wave audio, either as clear tones or modified with distortion settings. Neubauer also developed the Atari 8-bit killer application Star Raiders which makes use of POKEY features.
POKEY chips are used for audio in many arcade video games of the 1980s including Centipede, Missile Command, Asteroids Deluxe, and Gauntlet. Some of Atari's arcade systems use multi-core versions with 2 or 4 POKEYs in a single package for more audio channels. The Atari 7800 console allows a game cartridge to contain a POKEY, providing better sound than the system's audio chip. Only two licensed games make use of this: the ports of Ballblazer and Commando.
The LSI chip has 40 pins and is identified as C012294. The USPTO granted U.S. Patent 4,314,236 to Atari on February 2, 1982 for an "Apparatus for producing a plurality of audio sound effects". The inventors listed are Steven T. Mayer and Ronald E. Milner.
No longer manufactured, POKEY is emulated in software by arcade and Atari 8-bit emulators and also via the Atari SAP music format and associated player.
Features
Audio
4 semi-independent audio channels
Channels may be configured as one of:
Four 8-bit channels
Two 16-bit channels
One 16-bit channel and two 8-bit channels
Per-channel volume, frequency, and waveform (square wave with variable duty cycle or pseudorandom noise)
15 kHz or 64 kHz frequency divider.
Two channels may be driven at the CPU clock frequency.
High-pass filter
Keyboard scan (up to 64 keys) + 2 modifier bits (Shift
|
https://en.wikipedia.org/wiki/Look-and-say%20sequence
|
In mathematics, the look-and-say sequence is the sequence of integers beginning as follows:
1, 11, 21, 1211, 111221, 312211, 13112221, 1113213211, 31131211131221, ... .
To generate a member of the sequence from the previous member, read off the digits of the previous member, counting the number of digits in groups of the same digit. For example:
1 is read off as "one 1" or 11.
11 is read off as "two 1s" or 21.
21 is read off as "one 2, one 1" or 1211.
1211 is read off as "one 1, one 2, two 1s" or 111221.
111221 is read off as "three 1s, two 2s, one 1" or 312211.
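One possible implementation of this read-off rule groups maximal runs of identical digits and emits the run length followed by the digit:

    # Generate the look-and-say sequence by run-length describing each term.
    from itertools import groupby

    def next_term(term):
        return "".join(str(len(list(run))) + digit for digit, run in groupby(term))

    def look_and_say(seed="1", count=8):
        terms = [seed]
        for _ in range(count - 1):
            terms.append(next_term(terms[-1]))
        return terms

    print(look_and_say())     # ['1', '11', '21', '1211', '111221', '312211', ...]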
The look-and-say sequence was analyzed by John Conway
after he was introduced to it by one of his students at a party.
The idea of the look-and-say sequence is similar to that of run-length encoding.
If started with any digit d from 0 to 9 then d will remain indefinitely as the last digit of the sequence. For any d other than 1, the sequence starts as follows:
d, 1d, 111d, 311d, 13211d, 111312211d, 31131122211d, …
Ilan Vardi has called this sequence, starting with d = 3, the Conway sequence.
Basic properties
Growth
The sequence grows indefinitely. In fact, any variant defined by starting with a different integer seed number will (eventually) also grow indefinitely, except for the degenerate sequence: 22, 22, 22, 22, ...
Digits presence limitation
No digits other than 1, 2, and 3 appear in the sequence, unless the seed number contains such a digit or a run of more than three of the same digit.
Cosmological decay
Conway's cosmological theorem asserts that every sequence eventually splits ("decays") into a sequence of "atomic elements", which are finite subsequences that never again interact with their neighbors. There are 92 elements containing the digits 1, 2, and 3 only, which John Conway named after the 92 naturally-occurring chemical elements up to uranium, calling the sequence audioactive. There are also two "transuranic" elements (Np and Pu) for each digit other t
|
https://en.wikipedia.org/wiki/Zero%20crossing
|
A zero-crossing is a point where the sign of a mathematical function changes (e.g. from positive to negative), represented by an intercept of the axis (zero value) in the graph of the function. It is a commonly used term in electronics, mathematics, acoustics, and image processing.
In electronics
In alternating current, the zero-crossing is the instantaneous point at which there is no voltage present. In a sine wave or other simple waveform, this normally occurs twice during each cycle. A zero-crossing detector is a device for detecting the point where the voltage crosses zero in either direction.
The zero-crossing is important for systems that send digital data over AC circuits, such as modems, X10 home automation control systems, and Digital Command Control type systems for Lionel and other AC model trains.
Counting zero-crossings is also a method used in speech processing to estimate the fundamental frequency of speech.
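A rough sketch of that idea: count sign changes in a block of samples and convert the crossing rate into a frequency estimate, since a simple tone crosses zero twice per cycle. The sampling rate and the 440 Hz test tone below are arbitrary illustrative choices.

    # Estimate the frequency of a simple waveform by counting zero-crossings.
    import math

    def estimate_frequency(samples, sample_rate):
        crossings = sum(
            1 for a, b in zip(samples, samples[1:])
            if a == 0 or (a < 0) != (b < 0)        # sign change between neighbours
        )
        duration = len(samples) / sample_rate
        return crossings / (2.0 * duration)        # two crossings per cycle

    rate = 8000
    tone = [math.sin(2 * math.pi * 440 * n / rate) for n in range(rate)]
    print(estimate_frequency(tone, rate))          # approximately 440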
In a system where an amplifier with digitally controlled gain is applied to an input signal, artifacts in the non-zero output signal occur when the gain of the amplifier is abruptly switched between its discrete gain settings. At audio frequencies, such as in modern consumer electronics like digital audio players, these effects are clearly audible, resulting in a 'zipping' sound when rapidly ramping the gain or a soft 'click' when a single gain change is made. Artifacts are disconcerting and clearly not desirable. If changes are made only at zero-crossings of the input signal, then no matter how the amplifier gain setting changes, the output also remains at zero, thereby minimizing the change. (The instantaneous change in gain will still produce distortion, but it will not produce a click.)
If electrical power is to be switched, no electrical interference is generated if switched at an instant when there is no current—a zero crossing. Early light dimmers and similar devices generated interference; later versions were designed to switch at the zero crossing.
In
|
https://en.wikipedia.org/wiki/PLL%20multibit
|
A PLL multibit or multibit PLL is a phase-locked loop (PLL) which achieves improved performance compared to a unibit PLL by using more bits. Unibit PLLs use only the most significant bit (MSB) of each counter's output bus to measure the phase, while multibit PLLs use more bits. PLLs are an essential component in telecommunications.
Multibit PLLs achieve improved efficiency and performance: better utilization of the frequency spectrum, to serve more users at a higher quality of service (QoS), reduced RF transmit power, and reduced power consumption in cellular phones and other wireless devices.
Concepts
A phase-locked loop is an electronic component or system comprising a closed loop for controlling the phase of an oscillator while comparing it with the phase of an input or reference signal. An indirect frequency synthesizer uses a PLL. In an all-digital PLL, a voltage-controlled oscillator (VCO) is controlled using a digital, rather than analog, control signal. The phase detector gives a signal proportional to the phase difference between two signals; in a PLL, one signal is the reference, and the other is the output of the controlled oscillator (or a divider driven by the oscillator).
In a unibit phase-locked loop, the phase is measured using only one bit of the reference and output counters, the most significant bit (MSB). In a multibit phase-locked loop, the phase is measured using more than one bit of the reference and output counters, usually including the most significant bit.
Unibit PLL
In unibit PLLs, the output frequency is defined by the input frequency and the modulo count of the two counters. In each counter, only the most significant bit (MSB) is used. The other output lines of the counters are ignored; this is wasted information.
PLL structure and performance
A PLL includes a phase detector, filter and oscillator connected in a closed loop, so the oscillator frequency follows (equals) the input frequency. Although the average output frequency equ
|
https://en.wikipedia.org/wiki/Abstraction%20%28computer%20science%29
|
In software engineering and computer science, abstraction is the process of generalizing concrete details, such as attributes, away from the study of objects and systems to focus attention on details of greater importance. Abstraction is a fundamental concept in computer science and software engineering, especially within the object-oriented programming paradigm. Examples of this include:
the usage of abstract data types to separate usage from working representations of data within programs (see the sketch after this list);
the concept of functions or subroutines which represent a specific way of implementing control flow;
the process of reorganizing common behavior from groups of non-abstract classes into abstract classes using inheritance and sub-classes, as seen in object-oriented programming languages.
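For instance, a stack exposed only through its operations is a minimal abstract data type: callers never touch the backing representation, so it can be changed without affecting them. The sketch below is purely illustrative and not tied to any particular library.

    # A stack as an abstract data type: the interface is push/pop/peek/is_empty,
    # while the backing list is an implementation detail hidden from callers.

    class Stack:
        def __init__(self):
            self._items = []              # working representation, not part of the interface

        def push(self, value):
            self._items.append(value)

        def pop(self):
            return self._items.pop()

        def peek(self):
            return self._items[-1]

        def is_empty(self):
            return not self._items

    s = Stack()
    s.push(1)
    s.push(2)
    print(s.pop(), s.peek())              # prints: 2 1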
Rationale
Computing mostly operates independently of the concrete world. The hardware implements a model of computation that is interchangeable with others. The software is structured in architectures to enable humans to create enormous systems by concentrating on a few issues at a time. These architectures are made of specific choices of abstractions. Greenspun's Tenth Rule is an aphorism on how such an architecture is both inevitable and complex.
A central form of abstraction in computing is language abstraction: new artificial languages are developed to express specific aspects of a system. Modeling languages help in planning. Computer languages can be processed with a computer. An example of this abstraction process is the generational development of programming languages from the machine language to the assembly language and the high-level language. Each stage can be used as a stepping stone for the next stage. The language abstraction continues for example in scripting languages and domain-specific programming languages.
Within a programming language, some features let the programmer create new abstractions. These include subroutines, modules, polymorphism, and software componen
|
https://en.wikipedia.org/wiki/Forensic%20biology
|
Forensic biology is the use of biological principles and techniques in the context of law enforcement investigations.
Forensic biology mainly focuses on DNA sequencing of biological matter found at crime scenes. This assists investigators in identifying potential suspects or unidentified bodies.
Forensic biology has many sub-branches, such as forensic anthropology, forensic entomology, forensic odontology, forensic pathology, and forensic toxicology.
Disciplines
History
The first known accounts of forensic procedures still used today date back to the 7th century, with the use of fingerprints as a means of identification.
By the 7th century, forensic procedures were used, among other purposes, to establish the guilt of criminals.
Nowadays, the practice of autopsies and forensic investigations has seen a significant surge in both public interest and technological advancements. One of the early pioneers in employing these methods, which would later evolve into the field of forensics, was Alphonse Bertillon, who is also known as the "father of criminal identification". In 1879, he introduced a scientific approach to personal identification by developing the science of anthropometry. This method involved a series of body measurements for distinguishing one human individual from another.
Karl Landsteiner later made further significant discoveries in forensics. In 1901, he found out that blood could be categorized into different groups: A, B, AB, and O, and thus blood typing was introduced to the world of crime-solving. This development led to further studies and eventually, a whole new spectrum of criminology was added in the fields of medicine and forensics.
Dr Leone Lattes, a professor at the Institute of Forensic Medicine in Turin, Italy, made significant contributions to forensics as well. In 1915, he discovered a method to determine the blood group of dried bloodstains, which marked a significant advancement from prior techn
|
https://en.wikipedia.org/wiki/The%20Equidistribution%20of%20Lattice%20Shapes%20of%20Rings%20of%20Integers%20of%20Cubic%2C%20Quartic%2C%20and%20Quintic%20Number%20Fields
|
The Equidistribution of Lattice Shapes of Rings of Integers of Cubic, Quartic, and Quintic Number Fields: An Artist's Rendering is a mathematics book by Piper Harron (also known as Piper H), based on her Princeton University doctoral thesis of the same title. It has been described as "feminist", "unique", "honest", "generous", and "refreshing".
Thesis and reception
Harron was advised by Fields Medalist Manjul Bhargava, and her thesis deals with the properties of number fields, specifically the shape of their rings of integers. Harron and Bhargava showed that, viewed as a lattice in real vector space, the ring of integers of a random number field does not have any special symmetries. Rather than simply presenting the proof, Harron intended for the thesis and book to explain both the mathematics and the process (and struggle) that was required to reach this result.
The writing is accessible and informal, and the book features sections targeting three different audiences: laypeople, people with general mathematical knowledge, and experts in number theory. Harron intentionally departs from the typical academic format as she is writing for a community of mathematicians who "do not feel that they are encouraged to be themselves". Unusually for a mathematics thesis, Harron intersperses her rigorous analysis and proofs with cartoons, poetry, pop-culture references, and humorous diagrams. Science writer Evelyn Lamb, in Scientific American, expresses admiration for Harron for explaining the process behind the mathematics in a way that is accessible to non-mathematicians, especially "because as a woman of color, she could pay a higher price for doing it." Mathematician Philip Ording calls her approach to communicating mathematical abstractions "generous".
Her thesis went viral in late 2015, especially within the mathematical community, in part because of the prologue which begins by stating that "respected research math is dominated by men of a certain attitude". Harron had
|
https://en.wikipedia.org/wiki/List%20of%20American%20Physical%20Society%20prizes%20and%20awards
|
The American Physical Society gives out a number of awards for research excellence and conduct; topics include outstanding leadership, computational physics, lasers, mathematics, and more.
Prizes
David Adler Lectureship Award in the Field of Materials Physics
The David Adler Lectureship Award in the Field of Materials Physics is a prize that has been awarded annually by the American Physical Society since 1988. The recipient is chosen for being "an outstanding contributor to the field of materials physics, who is noted for the quality of his/her research, review articles and lecturing." The prize is named after physicist David Adler with contributions to the endowment by friends of David Adler and Energy Conversion Devices, Inc. The winner receives a $5,000 honorarium.
Will Allis Prize for the Study of Ionized Gases
The Will Allis Prize for the Study of Ionized Gases is awarded biennially "for outstanding contributions to understanding the physics of partially ionized plasmas and gases" in honour of Will Allis. The $10,000 prize was founded in 1989 by contributions from AT&T, General Electric, GTE, International Business Machines, and Xerox Corporations.
Early Career Award for Soft Matter Research
This award recognizes outstanding and sustained contributions by an early-career researcher to the soft matter field.
LeRoy Apker Award
The LeRoy Apker Award was established in 1978 to recognize outstanding achievements in physics by undergraduate students. Two awards are presented each year, one to a student from a Ph.D. granting institution, and one to a student from a non-Ph.D. granting institution.
APS Medal for Exceptional Achievement in Research
The APS Medal for Exceptional Achievement in Research was established in 2016 to recognize contributions of the highest level that advance our knowledge and understanding of the physical universe. The medal carries with it a prize of $50,000 and is the largest APS prize to recognize the achievement of researchers from acro
|
https://en.wikipedia.org/wiki/High%20Precision%20Event%20Timer
|
The High Precision Event Timer (HPET) is a hardware timer available in modern x86-compatible personal computers. Compared to older types of timers available in the x86 architecture, HPET allows more efficient processing of highly timing-sensitive applications, such as multimedia playback and OS task switching. It was developed jointly by Intel and Microsoft and has been incorporated in PC chipsets since 2005. Formerly referred to by Intel as a Multimedia Timer, the term HPET was selected to avoid confusion with the software multimedia timers introduced in the MultiMedia Extensions to Windows 3.0.
Older operating systems that do not support a hardware HPET device can only use older timing facilities, such as the programmable interval timer (PIT) or the real-time clock (RTC). Windows XP, when fitted with the latest hardware abstraction layer (HAL), can also use the processor's Time Stamp Counter (TSC), or ACPI Power Management Timer (ACPI PMTIMER), together with the RTC to provide operating system features that would, in later Windows versions, be provided by the HPET hardware. Confusingly, such Windows XP systems quote "HPET" connectivity in the device driver manager even though the Intel HPET device is not being used.
Features
An HPET chip consists of a 64-bit up-counter (main counter) counting at a frequency of at least 10 MHz, and a set of (at least three, up to 256) comparators. These comparators are 32- or 64-bit-wide. The HPET is programmed via a memory mapped I/O window that is discoverable via ACPI. The HPET circuit in modern PCs is integrated into the southbridge chip.
Each comparator can generate an interrupt when the least significant bits are equal to the corresponding bits of the 64-bit main counter value. The comparators can be put into one-shot mode or periodic mode, with at least one comparator supporting periodic mode and all of them supporting one-shot mode. In one-shot mode the comparator fires an interrupt once when the main counter reaches the
|
https://en.wikipedia.org/wiki/Supersymmetry
|
Supersymmetry is a theoretical framework in physics that suggests the existence of a symmetry between particles with integer spin (bosons) and particles with half-integer spin (fermions). It proposes that for every known particle, there exists a partner particle with different spin properties. This symmetry has not been observed in nature. If confirmed, it could help explain certain phenomena, such as the nature of dark matter and the hierarchy problem in particle physics.
A supersymmetric theory is a theory in which the equations for force and the equations for matter are identical. In theoretical and mathematical physics, any theory with this property has the principle of supersymmetry (SUSY). Dozens of supersymmetric theories exist. In theory, supersymmetry is a type of spacetime symmetry between two basic classes of particles: bosons, which have an integer-valued spin and follow Bose–Einstein statistics, and fermions, which have a half-integer-valued spin and follow Fermi–Dirac statistics.
In supersymmetry, each particle from the class of fermions would have an associated particle in the class of bosons, and vice versa, known as a superpartner. The spin of a particle's superpartner is different by a half-integer. For example, if the electron exists in a supersymmetric theory, then there would be a particle called a selectron (superpartner electron), a bosonic partner of the electron. In the simplest supersymmetry theories, with perfectly "unbroken" supersymmetry, each pair of superpartners would share the same mass and internal quantum numbers besides spin. More complex supersymmetry theories have a spontaneously broken symmetry, allowing superpartners to differ in mass.
Supersymmetry has various applications to different areas of physics, such as quantum mechanics, statistical mechanics, quantum field theory, condensed matter physics, nuclear physics, optics, stochastic dynamics, astrophysics, quantum gravity, and cosmology. Supersymmetry has also been appli
|
https://en.wikipedia.org/wiki/Proportionality%20%28mathematics%29
|
In mathematics, two sequences of numbers, often experimental data, are proportional or directly proportional if their corresponding elements have a constant ratio. The ratio is called coefficient of proportionality (or proportionality constant) and its reciprocal is known as constant of normalization (or normalizing constant). Two sequences are inversely proportional if corresponding elements have a constant product, also called the coefficient of proportionality.
This definition is commonly extended to related varying quantities, which are often called variables. This meaning of variable is not the common meaning of the term in mathematics (see variable (mathematics)); these two different concepts share the same name for historical reasons.
Two functions $f(x)$ and $g(x)$ are proportional if their ratio $\frac{f(x)}{g(x)}$ is a constant function.
If several pairs of variables share the same direct proportionality constant, the equation expressing the equality of these ratios is called a proportion, e.g., $\frac{a}{b} = \frac{x}{y} = \cdots = k$ (for details see Ratio).
Proportionality is closely related to linearity.
Direct proportionality
Given an independent variable x and a dependent variable y, y is directly proportional to x if there is a non-zero constant k such that $y = kx$.
The relation is often denoted using the symbols "∝" (not to be confused with the Greek letter alpha) or "~":
$y \propto x$ (or $y \sim x$)
For $x \neq 0$ the proportionality constant can be expressed as the ratio $k = \frac{y}{x}$.
It is also called the constant of variation or constant of proportionality.
A direct proportionality can also be viewed as a linear equation in two variables with a y-intercept of 0 and a slope of k. This corresponds to linear growth.
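As a brief illustrative aside (not part of the article), checking whether two data sequences are directly proportional amounts to testing that corresponding elements have a (nearly) constant ratio; the sample values and tolerance below are arbitrary.
# Two sequences are directly proportional when corresponding elements
# share a constant ratio k (the coefficient of proportionality).
x = [1.0, 2.0, 3.0, 4.0]
y = [2.5, 5.0, 7.5, 10.0]
ratios = [yi / xi for xi, yi in zip(x, y)]
k = ratios[0]
is_proportional = all(abs(r - k) < 1e-9 for r in ratios)
print(is_proportional, k)   # True 2.5, i.e. y = 2.5 * x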
Examples
If an object travels at a constant speed, then the distance traveled is directly proportional to the time spent traveling, with the speed being the constant of proportionality.
The circumference of a circle is directly proportional to its diameter, with the constant of proportionality equal to π.
On a map of a sufficiently small
|
https://en.wikipedia.org/wiki/List%20of%20continuity-related%20mathematical%20topics
|
In mathematics, the terms continuity, continuous, and continuum are used in a variety of related ways.
Continuity of functions and measures
Continuous function
Absolutely continuous function
Absolute continuity of a measure with respect to another measure
Continuous probability distribution: Sometimes this term is used to mean a probability distribution whose cumulative distribution function (c.d.f.) is (simply) continuous. Sometimes it has a less inclusive meaning: a distribution whose c.d.f. is absolutely continuous with respect to Lebesgue measure. This less inclusive sense is equivalent to the condition that every set whose Lebesgue measure is 0 has probability 0.
Geometric continuity
Parametric continuity
Continuum
Continuum (set theory), the real line or the corresponding cardinal number
Linear continuum, any ordered set that shares certain properties of the real line
Continuum (topology), a nonempty compact connected metric space (sometimes a Hausdorff space)
Continuum hypothesis, a conjecture of Georg Cantor that there is no cardinal number between that of countably infinite sets and the cardinality of the set of all real numbers. The latter cardinality is equal to the cardinality of the set of all subsets of a countably infinite set.
Cardinality of the continuum, a cardinal number that represents the size of the set of real numbers
See also
Continuous variable
Mathematical analysis
Mathematics-related lists
|
https://en.wikipedia.org/wiki/Index%20notation
|
In mathematics and computer programming, index notation is used to specify the elements of an array of numbers. The formalism of how indices are used varies according to the subject. In particular, there are different methods for referring to the elements of a list, a vector, or a matrix, depending on whether one is writing a formal mathematical paper for publication, or when one is writing a computer program.
In mathematics
It is frequently helpful in mathematics to refer to the elements of an array using subscripts. The subscripts can be integers or variables. The array takes the form of tensors in general, since these can be treated as multi-dimensional arrays. Special (and more familiar) cases are vectors (1d arrays) and matrices (2d arrays).
The following is only an introduction to the concept: index notation is used in more detail in mathematics (particularly in the representation and manipulation of tensor operations). See the main article for further details.
One-dimensional arrays (vectors)
A vector can be treated as an array of numbers by writing it as a row vector or column vector (whichever is used depends on convenience or context): $\mathbf{a} = \begin{pmatrix} a_1 & a_2 & \cdots & a_n \end{pmatrix}$ or $\mathbf{a} = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}$
Index notation allows indication of the elements of the array by simply writing $a_i$, where the index $i$ is known to run from 1 to $n$, because the vector has $n$ dimensions.
For example, given the vector:
then some entries are
.
The notation can be applied to vectors in mathematics and physics. The following vector equation $\mathbf{a} + \mathbf{b} = \mathbf{c}$
can also be written in terms of the elements of the vector (aka components), that is $a_i + b_i = c_i$,
where the indices take a given range of values. This expression represents a set of equations, one for each index. If the vectors each have n elements, meaning i = 1,2,…n, then the equations are explicitly $a_1 + b_1 = c_1, \; a_2 + b_2 = c_2, \; \ldots, \; a_n + b_n = c_n$.
Hence, index notation serves as an efficient shorthand for representing the general structure of an equation while remaining applicable to the individual components.
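As a small programmatic aside (not part of the article), the componentwise reading of a vector equation such as $\mathbf{a} + \mathbf{b} = \mathbf{c}$ maps directly onto array code; note that the mathematical convention indexes from 1 while Python indexes from 0.
# The vector equation a + b = c, read per component as a_i + b_i = c_i,
# expands into one scalar equation for each index i (here n = 3).
a = [1.0, 2.0, 3.0]
b = [10.0, 20.0, 30.0]
c = [a[i] + b[i] for i in range(len(a))]   # c_i = a_i + b_i, with i = 0..n-1 in Python
print(c)   # [11.0, 22.0, 33.0]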
Two-dimensional arrays
More than one index is used to describe arrays of number
|
https://en.wikipedia.org/wiki/NCR%2053C9x
|
The NCR 53C9x is a family of application-specific integrated circuits (ASIC) produced by the former NCR Corporation and others for implementing the SCSI (small computer standard interface) bus protocol in hardware and relieving the host system of the work required to sequence the SCSI bus. The 53C9x was a low-cost solution and was therefore widely adopted by OEMs in various motherboard and peripheral device designs. The original 53C90 lacked direct memory access (DMA) capability, an omission that was addressed in the 53C90A and subsequent versions.
The 53C90(A) and later 53C94 supported the ANSI X3.131-1986 SCSI-1 protocol, implementing the eight-bit parallel SCSI bus and eight-bit host data bus transfers. The 53CF94 and 53CF96 added SCSI-2 support and implemented larger transfer sizes per SCSI transaction. Additionally, the 53CF96 could be interfaced to a single-ended bus or a high voltage differential (HVD) bus, the latter of which supported longer bus cables. All members of the 53C94/96 type support both eight- and 16-bit host bus transfers via programmed input/output (PIO) and DMA.
QLogic FAS216 and Emulex ESP100 chips are drop-in replacements for the NCR 53C94. The 53C90A and 53C(F)94/96 were also produced under license by Advanced Micro Devices (AMD).
A list of systems which included the 53C9x controller includes:
53C94
Sun Microsystems SPARCstations and the SPARCclassic
DEC 3000 AXP
DECstations and the PMAZ-A TURBOchannel card
VAXstation model 60, 4000-m90
MIPS Magnum
Power Macintosh G3; often used as a secondary SCSI controller with MESH (Macintosh Enhanced SCSI Hardware) as the primary
MacroSystem's Evolution family for Amiga (FAS216)
53C96
Macintosh Quadra 650
Macintosh LC475/Quadra 605/Performa 475
Macintosh Quadra 900 and 950
See also
NCR 5380
|
https://en.wikipedia.org/wiki/Network%20behavior%20anomaly%20detection
|
Network behavior anomaly detection (NBAD) is a security technique that provides network security threat detection. It is a complementary technology to systems that detect security threats based on packet signatures.
NBAD is the continuous monitoring of a network for unusual events or trends. NBAD is an integral part of network behavior analysis (NBA), which offers security in addition to that provided by traditional anti-threat applications such as firewalls, intrusion detection systems, antivirus software and spyware-detection software.
Description
Most security monitoring systems utilize a signature-based approach to detect threats. They generally monitor packets on the network and look for patterns in the packets which match their database of signatures representing pre-identified known security threats. NBAD-based systems are particularly helpful in detecting security threat vectors in two instances where signature-based systems cannot: (i) new zero-day attacks, and (ii) when the threat traffic is encrypted such as the command and control channel for certain Botnets.
An NBAD program tracks critical network characteristics in real time and generates an alarm if a strange event or trend is detected that could indicate the presence of a threat. Large-scale examples of such characteristics include traffic volume, bandwidth use and protocol use.
NBAD solutions can also monitor the behavior of individual network subscribers. In order for NBAD to be optimally effective, a baseline of normal network or user behavior must be established over a period of time. Once certain parameters have been defined as normal, any departure from one or more of them is flagged as anomalous.
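A minimal sketch of the baseline-then-flag idea described above, applied to a single characteristic (traffic volume per interval); the window, threshold and sample values are illustrative assumptions, not taken from any particular NBAD product.
import statistics
# Baseline of "normal" traffic volumes (bytes per minute) observed over time.
baseline = [5200, 4900, 5100, 5300, 5000, 4800, 5150, 5050]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
def is_anomalous(observation: float, k: float = 3.0) -> bool:
    # Flag any departure of more than k standard deviations from the baseline.
    return abs(observation - mean) > k * stdev
print(is_anomalous(5100))    # False: within the normal range
print(is_anomalous(25000))   # True: unusual spike worth alerting on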
NBAD technology/techniques are applied in a number of network and security monitoring domains including: (i) Log analysis (ii) Packet inspection systems (iii) Flow monitoring systems and (iv) Route analytics.
NBAD has also been described as outlier detection, novelty detection, deviation detecti
|
https://en.wikipedia.org/wiki/Biomagnetics
|
Biomagnetics is a field of biotechnology. It has actively been researched since at least 2004. Although the majority of structures found in living organisms are diamagnetic, the magnetic field itself, as well as magnetic nanoparticles, microstructures and paramagnetic molecules can influence specific physiological functions of organisms under certain conditions. The effect of magnetic fields on biosystems is a topic of research that falls under the biomagnetic umbrella, as well as the construction of magnetic structures or systems that are either biocompatible, biodegradable or biomimetic. Magnetic nanoparticles and magnetic microparticles are known to interact with certain prokaryotes and certain eukaryotes.
Magnetic nanoparticles under the influence of magnetic and electromagnetic fields were shown to modulate redox reactions for the inhibition or the promotion of animal tumor growth. The mechanism underlying nanomagnetic modulation involves the convergence of magnetochemical and magneto-mechanical reactions.
History
In 2014, biotechnicians at Monash University noticed that "the efficiency of delivery of DNA vaccines is often relatively low compared to protein vaccines" and on this basis suggested the use of superparamagnetic iron oxide nanoparticles (SPIONs) to deliver genetic materials via magnetofection because it increases the efficiency of drug delivery.
As of 2021, interactions have been studied between low cost iron oxide nanoparticles (IONPs) and the main groups of biomolecules: proteins, lipids, nucleic acids and carbohydrates. There have been suggestions of magnetically-targeted drug delivery systems, in particular for the cationic peptide lasioglossin.
Around May 2021 rumours abounded that certain mRNA biotech delivery systems were magnetically active. Prompted by the state-owned broadcaster France24, Julien Bobroff, who specialises in magnetism and teaches at the University of Paris-Saclay, debunked the claims of Covid-19 conspiracy theorists using
|
https://en.wikipedia.org/wiki/Stochastic
|
Stochastic (; ) refers to the property of being well-described by a random probability distribution. Although stochasticity and randomness are distinct in that the former refers to a modeling approach and the latter refers to phenomena themselves, these two terms are often used synonymously. Furthermore, in probability theory, the formal concept of a stochastic process is also referred to as a random process.
Stochasticity is used in many different fields, including the natural sciences such as biology, chemistry, ecology, neuroscience, and physics, as well as technology and engineering fields such as image processing, signal processing, information theory, computer science, cryptography, and telecommunications. It is also used in finance, due to seemingly random changes in financial markets as well as in medicine, linguistics, music, media, colour theory, botany, manufacturing, and geomorphology.
Etymology
The word stochastic in English was originally used as an adjective with the definition "pertaining to conjecturing", and stemming from a Greek word meaning "to aim at a mark, guess", and the Oxford English Dictionary gives the year 1662 as its earliest occurrence. In his work on probability Ars Conjectandi, originally published in Latin in 1713, Jakob Bernoulli used the phrase "Ars Conjectandi sive Stochastice", which has been translated to "the art of conjecturing or stochastics". This phrase was used, with reference to Bernoulli, by Ladislaus Bortkiewicz, who in 1917 wrote in German the word Stochastik with a sense meaning random. The term stochastic process first appeared in English in a 1934 paper by Joseph Doob. For the term and a specific mathematical definition, Doob cited another 1934 paper, where the term stochastischer Prozeß was used in German by Aleksandr Khinchin, though the German term had been used earlier in 1931 by Andrey Kolmogorov.
Mathematics
In the early 1930s, Aleksandr Khinchin gave the first mathematical definition of a stochas
|
https://en.wikipedia.org/wiki/SAMV%20%28algorithm%29
|
SAMV (iterative sparse asymptotic minimum variance) is a parameter-free superresolution algorithm for the linear inverse problem in spectral estimation, direction-of-arrival (DOA) estimation and tomographic reconstruction with applications in signal processing, medical imaging and remote sensing. The name was coined in 2013 to emphasize its basis on the asymptotically minimum variance (AMV) criterion. It is a powerful tool for the recovery of both the amplitude and frequency characteristics of multiple highly correlated sources in challenging environments (e.g., limited number of snapshots and low signal-to-noise ratio). Applications include synthetic-aperture radar, computed tomography scan, and magnetic resonance imaging (MRI).
Definition
The formulation of the SAMV algorithm is given as an inverse problem in the context of DOA estimation. Suppose an $M$-element uniform linear array (ULA) receives $K$ narrowband signals emitted from sources located at $\theta_1, \ldots, \theta_K$, respectively. The sensors in the ULA accumulate $N$ snapshots over a specific time. The $M \times 1$ dimensional snapshot vectors are
$\mathbf{y}(n) = \mathbf{A}\mathbf{x}(n) + \mathbf{e}(n), \quad n = 1, \ldots, N,$
where $\mathbf{A} = [\mathbf{a}(\theta_1), \ldots, \mathbf{a}(\theta_K)]$ is the steering matrix, $\mathbf{x}(n)$ contains the source waveforms, and $\mathbf{e}(n)$ is the noise term. Assume that $\operatorname{E}\!\left[\mathbf{x}(n)\mathbf{x}^H(\bar{n})\right] = \operatorname{diag}(p_1, \ldots, p_K)\,\delta_{n,\bar{n}}$, where $\delta_{n,\bar{n}}$ is the Dirac delta and it equals 1 only if $n = \bar{n}$ and 0 otherwise. Also assume that $\mathbf{e}(n)$ and $\mathbf{x}(n)$ are independent, and that $\operatorname{E}\!\left[\mathbf{e}(n)\mathbf{e}^H(\bar{n})\right] = \sigma\mathbf{I}\,\delta_{n,\bar{n}}$, where $\mathbf{I}$ is the identity matrix. Let $\mathbf{p} = [p_1, \ldots, p_K, \sigma]^T$ be a vector containing the unknown signal powers and noise variance.
The covariance matrix of $\mathbf{y}(n)$ that contains all information about $\mathbf{p}$ is
$\mathbf{R} = \mathbf{A}\operatorname{diag}(p_1, \ldots, p_K)\mathbf{A}^H + \sigma\mathbf{I}.$
This covariance matrix can be traditionally estimated by the sample covariance matrix $\mathbf{R}_N = \mathbf{Y}\mathbf{Y}^H/N$, where $\mathbf{Y} = [\mathbf{y}(1), \ldots, \mathbf{y}(N)]$. After applying the vectorization operator to the matrix $\mathbf{R}$, the obtained vector is linearly related to the unknown parameter $\mathbf{p}$ as
$\mathbf{r}(\mathbf{p}) = \operatorname{vec}(\mathbf{R}) = \mathbf{S}\mathbf{p}$,
where $\mathbf{S} = [\mathbf{S}_1, \bar{\mathbf{a}}_{K+1}]$, $\mathbf{S}_1 = [\bar{\mathbf{a}}_1, \ldots, \bar{\mathbf{a}}_K]$, $\bar{\mathbf{a}}_k = \mathbf{a}_k^* \otimes \mathbf{a}_k$, $\bar{\mathbf{a}}_{K+1} = \operatorname{vec}(\mathbf{I})$, and let $\mathbf{r}_N = \operatorname{vec}(\mathbf{R}_N)$,
where
$\otimes$ is the Kronecker product.
SAMV algorithm
To estimate the parameter $\mathbf{p}$ from the statistic $\mathbf{r}_N$, a series of iterative SAMV approaches is developed based on the asymptotically minimum variance criterion. From the AMV criterion, the covariance matrix of an arbitrary consistent estimator o
|
https://en.wikipedia.org/wiki/Minimum-Pairs%20Protocol
|
The minimum-pairs (or MP) is an active measurement protocol to estimate in real-time the smaller of the forward and reverse one-way network delays (OWDs). It is designed to work in hostile environments, where a set of three network nodes can estimate an upper-bound OWD between themselves and a fourth untrusted node. All four nodes must cooperate, though honest cooperation from the fourth node is not required. The objective is to conduct such estimates without involving the untrusted nodes in clock synchronization, and in a manner more accurate than simply half the round-trip time (RTT). The MP protocol can be used in delay-sensitive applications (such as placing content delivery network replicas) or for secure Internet geolocation.
Methodology
The MP protocol requires the three trusted network nodes to synchronize their clocks, and securely have access to their public keys, which could be achieved through a closed public key infrastructure (PKI) system. The untrusted node need not follow suit because it is not assumed to cooperate honestly. To estimate an upper bound to the smaller of the forward and reverse OWD between node A and the untrusted node X (see figure for notation), X first establishes an application-layer connection to all three nodes. This could be done transparently over the browser using, e.g., WebSockets. The three nodes then take turns in exchanging digitally-signed timestamps.
Assuming node A begins, it sends a signed timestamp to X. Node X forwards that message to the other two nodes. When the message is received, its receiving time is recorded. The receiving node then verifies the signature, and calculates the time it took the message to traverse the network from its originator to the recipient passing by the untrusted node. This is done by subtracting the timestamp in the message from the receiving time. Node B then repeats the process, followed by node C. After all three nodes have taken turns, they end-up with six delay estimates corresp
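A minimal sketch of the timestamp-exchange step described above: each trusted node computes the delay from the originator to itself via the untrusted node by subtracting the signed timestamp from its own (synchronized) receive time. Signature creation and verification are elided, and all names and numbers here are illustrative.
import time
def originate(node_id: str) -> dict:
    # In the real protocol this message is digitally signed by the originator.
    return {"origin": node_id, "timestamp": time.time()}
def on_receive(message: dict, receive_time: float) -> float:
    # Verify the signature here in a real deployment, then compute the
    # originator -> X -> recipient delay from the synchronized clocks.
    return receive_time - message["timestamp"]
msg = originate("A")
delay_via_x = on_receive(msg, time.time() + 0.042)   # simulated 42 ms forwarding path via X
print(round(delay_via_x, 3))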
|
https://en.wikipedia.org/wiki/Contributors%20to%20the%20mathematical%20background%20for%20general%20relativity
|
This is a list of contributors to the mathematical background for general relativity. For ease of readability, the contributions (in brackets) are unlinked but can be found in the contributors' article.
B
Luigi Bianchi (Bianchi identities, Bianchi groups, differential geometry)
C
Élie Cartan (curvature computation, early extensions of GTR, Cartan geometries)
Elwin Bruno Christoffel (connections, tensor calculus, Riemannian geometry)
Clarissa-Marie Claudel (Geometry of photon surfaces)
D
Tevian Dray (The Geometry of General Relativity)
E
Luther P. Eisenhart (semi-Riemannian geometries)
Frank B. Estabrook (Wahlquist-Estabrook approach to solving PDEs; see also parent list)
Leonhard Euler (Euler-Lagrange equation, from which the geodesic equation is obtained)
G
Carl Friedrich Gauss (curvature, theory of surfaces, intrinsic vs. extrinsic)
K
Martin Kruskal (inverse scattering transform; see also parent list)
L
Joseph Louis Lagrange (Lagrangian mechanics, Euler-Lagrange equation)
Tullio Levi-Civita (tensor calculus, Riemannian geometry; see also parent list)
André Lichnerowicz (tensor calculus, transformation groups)
M
Alexander Macfarlane (space analysis and Algebra of Physics)
Jerrold E. Marsden (linear stability)
N
Isaac Newton (Newton's identities for characteristic of Einstein tensor)
R
Gregorio Ricci-Curbastro (Ricci tensor, differential geometry)
Georg Bernhard Riemann (Riemannian geometry, Riemann curvature tensor)
S
Richard Schoen (Yamabe problem; see also parent list)
Corrado Segre (Segre classification)
W
Hugo D. Wahlquist (Wahlquist-Estabrook algorithm; see also parent list)
Hermann Weyl (Weyl tensor, gauge theories; see also parent list)
Eugene P. Wigner (stabilizers in Lorentz group)
See also
Contributors to differential geometry
Contributors to general relativity
Physics-related lists
|
https://en.wikipedia.org/wiki/Transport%20of%20structure
|
In mathematics, particularly in universal algebra and category theory, transport of structure refers to the process whereby a mathematical object acquires a new structure and its canonical definitions, as a result of being isomorphic to (or otherwise identified with) another object with a pre-existing structure. Definitions by transport of structure are regarded as canonical.
Since mathematical structures are often defined in reference to an underlying space, many examples of transport of structure involve spaces and mappings between them. For example, if $V$ and $W$ are vector spaces with $\langle \cdot, \cdot \rangle$ being an inner product on $V$, such that there is an isomorphism $\varphi$ from $W$ to $V$, then one can define an inner product $[\cdot, \cdot]$ on $W$ by the following rule:
$[w_1, w_2] = \langle \varphi(w_1), \varphi(w_2) \rangle.$
Although the equation makes sense even when $\varphi$ is not an isomorphism, it only defines an inner product on $W$ when $\varphi$ is, since otherwise it will cause $[\cdot, \cdot]$ to be degenerate. The idea is that $\varphi$ allows one to consider $V$ and $W$ as "the same" vector space, and by following this analogy, one can then transport an inner product from one space to the other.
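A minimal sketch of the vector-space example above in code, with a deliberately toy "new" space of labelled pairs and a hypothetical bijection phi; the point is only that the transported product is computed by pushing arguments through phi.
# Transporting a dot product from R^2 (where it is already defined) to an
# isomorphic space W of labelled pairs, via the bijection phi: W -> R^2.
def phi(w):
    return (w["first"], w["second"])
def dot_V(v1, v2):
    return v1[0] * v2[0] + v1[1] * v2[1]
def dot_W(w1, w2):
    # Transported inner product: <w1, w2>_W := <phi(w1), phi(w2)>_V
    return dot_V(phi(w1), phi(w2))
w1 = {"first": 1.0, "second": 2.0}
w2 = {"first": 3.0, "second": -1.0}
print(dot_W(w1, w2))   # 1.0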
A more elaborate example comes from differential topology, in which the notion of smooth manifold is involved: if $M$ is such a manifold, and if $N$ is any topological space which is homeomorphic to $M$, then one can consider $N$ as a smooth manifold as well. That is, given a homeomorphism $\varphi \colon N \to M$, one can define coordinate charts on $N$ by "pulling back" coordinate charts on $M$ through $\varphi$. Recall that a coordinate chart on $M$ is an open set $U$ together with an injective map
$c \colon U \to \mathbb{R}^n$
for some natural number $n$; to get such a chart on $N$, one uses the following rules:
$U' = \varphi^{-1}(U)$ and $c' = c \circ \varphi$.
Furthermore, it is required that the charts cover $M$ (the fact that the transported charts cover $N$ follows immediately from the fact that $\varphi$ is a bijection). Since $M$ is a smooth manifold, if U and V, with their maps $c \colon U \to \mathbb{R}^n$ and $d \colon V \to \mathbb{R}^n$, are two charts on $M$, then the composition, the "transition map"
$d \circ c^{-1} \colon c(U \cap V) \to d(U \cap V)$
(a self-map of $\mathbb{R}^n$)
is smooth. To verify this for the transported charts on $N$, notice that
$d' \circ (c')^{-1} = (d \circ \varphi) \circ (c \circ \varphi)^{-1} = d \circ (\varphi \circ \varphi^{-1}) \circ c^{-1} = d \circ c^{-1}$,
and there
|
https://en.wikipedia.org/wiki/Directory-based%20cache%20coherence
|
In computer engineering, directory-based cache coherence is a type of cache coherence mechanism, where directories are used to manage caches in place of bus snooping. Bus snooping methods scale poorly due to their use of broadcasting, whereas directory-based schemes can be designed to target both the performance and the scalability of larger systems.
Full bit vector format
In the full bit vector format, for each possible cache line in memory, a bit is used to track whether every individual processor has that line stored in its cache. The full bit vector format is the simplest structure to implement, but the least scalable. The SGI Origin 2000 uses a combination of full bit vector and coarse bit vector depending on the number of processors.
Each directory entry must store 1 bit per processor per cache line, along with bits for tracking the state of the directory entry. This leads to the total size required being (number of processors) × (number of cache lines) bits, giving a storage overhead ratio of (number of processors)/(cache block size in bytes × 8).
It can be observed that directory overhead scales linearly with the number of processors. While this may be fine for a small number of processors, when implemented in large systems the size requirements for the directory becomes excessive. For example, with a block size of 32 bytes and 1024 processors, the storage overhead ratio becomes 1024/(32×8) = 400%.
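As a small worked check of the overhead formula above (illustrative only, not a fragment of any real coherence controller):
def full_bit_vector_overhead(num_processors: int, block_size_bytes: int) -> float:
    # Storage overhead ratio: one presence bit per processor for every
    # cache block of block_size_bytes (i.e. block_size_bytes * 8 data bits).
    return num_processors / (block_size_bytes * 8)
print(full_bit_vector_overhead(1024, 32))   # 4.0, i.e. the 400% from the example above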
Coarse bit vector format
The coarse bit vector format has a similar structure to the full bit vector format, though rather than tracking one bit per processor for every cache line, the directory groups several processors into nodes, storing whether a cache line is stored in a node rather than a line. This improves size requirements at the expense of bus traffic saving (processors per node)×(total lines) bits of space. Thus the ratio overhead is the same, just replacing number of processors with number of processor groups. When a bus request is made for a cache line that one processor in the group has, th
|
https://en.wikipedia.org/wiki/Gelfond%27s%20constant
|
In mathematics, Gelfond's constant, named after Aleksandr Gelfond, is $e^\pi$, that is, $e$ raised to the power $\pi$. Like both $e$ and $\pi$, this constant is a transcendental number. This was first established by Gelfond and may now be considered as an application of the Gelfond–Schneider theorem, noting that
$e^\pi = \left(e^{i\pi}\right)^{-i} = (-1)^{-i},$
where $i$ is the imaginary unit. Since $-i$ is algebraic but not rational, $e^\pi$ is transcendental. The constant was mentioned in Hilbert's seventh problem. A related constant is $2^{\sqrt{2}}$, known as the Gelfond–Schneider constant. The related value $\pi + e^\pi$ is also irrational.
Numerical value
The decimal expansion of Gelfond's constant begins $e^\pi = 23.14069\ldots$
Construction
If one defines and
for , then the sequence
converges rapidly to .
Continued fraction expansion
This is based on the digits for the simple continued fraction:
As given by the integer sequence A058287.
Geometric property
The volume of the n-dimensional ball (or n-ball) is given by
$V_n = \frac{\pi^{n/2}}{\Gamma\!\left(\frac{n}{2} + 1\right)} R^n,$
where $R$ is its radius, and $\Gamma$ is the gamma function. Any even-dimensional ball has volume
$V_{2n} = \frac{\pi^n}{n!} R^{2n},$
and, summing up all the unit-ball ($R = 1$) volumes of even dimension gives
$\sum_{n=0}^{\infty} V_{2n}(R=1) = \sum_{n=0}^{\infty} \frac{\pi^n}{n!} = e^\pi.$
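A quick numerical illustration of the identity above (a sketch; the 60-term truncation is an arbitrary choice that is more than enough for double precision):
import math
# Sum of unit-ball volumes over even dimensions 2n: V_2n(1) = pi**n / n!,
# which converges to e**pi, Gelfond's constant.
total = sum(math.pi**n / math.factorial(n) for n in range(60))
print(total)               # 23.140692632779...
print(math.exp(math.pi))   # 23.140692632779...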
Similar or related constants
Ramanujan's constant
This is the constant $e^{\pi\sqrt{163}}$, known as Ramanujan's constant. It is an application of Heegner numbers, where 163 is the Heegner number in question.
The number $e^{\pi\sqrt{163}}$ is very close to an integer:
...
This number was discovered in 1859 by the mathematician Charles Hermite.
In a 1975 April Fool article in Scientific American magazine, "Mathematical Games" columnist Martin Gardner made the hoax claim that the number was in fact an integer, and that the Indian mathematical genius Srinivasa Ramanujan had predicted it—hence its name.
The coincidental closeness, to within 0.000 000 000 000 75, of the number $640320^3 + 744$ is explained by complex multiplication and the q-expansion of the j-invariant, specifically:
$j\!\left(\frac{1+\sqrt{-163}}{2}\right) = -640320^3,$
and
$e^{\pi\sqrt{163}} = 640320^3 + 744 - \epsilon,$ where $\epsilon \approx \frac{196884}{e^{\pi\sqrt{163}}}$ is the error term,
which explains why $e^{\pi\sqrt{163}}$ is 0.000 000 000 000 75 below $640320^3 + 744$.
(For more detail on this proof, consult the article on Heegner numbers.)
The number
The decimal
|
https://en.wikipedia.org/wiki/Thermal%20energy
|
The term "thermal energy" is used loosely in various contexts in physics and engineering, generally related to the kinetic energy of vibrating and colliding atoms in a substance. It can refer to several different well-defined physical concepts. These include the internal energy or enthalpy of a body of matter and radiation; heat, defined as a type of energy transfer (as is thermodynamic work); and the characteristic energy of a degree of freedom, , in a system that is described in terms of its microscopic particulate constituents (where denotes temperature and denotes the Boltzmann constant).
Relation to heat and internal energy
In thermodynamics, heat is energy transferred to or from a thermodynamic system by mechanisms other than thermodynamic work or transfer of matter, such as conduction, radiation, and friction. Heat refers to a quantity transferred between systems, not to a property of any one system, or "contained" within it. On the other hand, internal energy and enthalpy are properties of a single system. Heat and work depend on the way in which an energy transfer occurred, whereas internal energy is a property of the state of a system and can thus be understood without knowing how the energy got there.
Macroscopic thermal energy
The internal energy of a body can change in a process in which chemical potential energy is converted into non-chemical energy. In such a process, the thermodynamic system can change its internal energy by doing work on its surroundings, or by gaining or losing energy as heat. It is not quite lucid to merely say that "the converted chemical potential energy has simply become internal energy". It is, however, convenient and more lucid to say that "the chemical potential energy has been converted into thermal energy". Such thermal energy may be viewed as a contributor to internal energy or to enthalpy, thinking of the contribution as a process without thinking that the contributed energy has become an identifiable component o
|
https://en.wikipedia.org/wiki/List%20of%20transforms
|
This is a list of transforms in mathematics.
Integral transforms
Abel transform
Bateman transform
Fourier transform
Short-time Fourier transform
Gabor transform
Hankel transform
Hartley transform
Hermite transform
Hilbert transform
Hilbert–Schmidt integral operator
Jacobi transform
Laguerre transform
Laplace transform
Inverse Laplace transform
Two-sided Laplace transform
Inverse two-sided Laplace transform
Laplace–Carson transform
Laplace–Stieltjes transform
Legendre transform
Linear canonical transform
Mellin transform
Inverse Mellin transform
Poisson–Mellin–Newton cycle
N-transform
Radon transform
Stieltjes transformation
Sumudu transform
Wavelet transform (integral)
Weierstrass transform
Hussein Jassim Transform
Discrete transforms
Binomial transform
Discrete Fourier transform, DFT
Fast Fourier transform, a popular implementation of the DFT
Discrete cosine transform
Modified discrete cosine transform
Discrete Hartley transform
Discrete sine transform
Discrete wavelet transform
Hadamard transform (or, Walsh–Hadamard transform)
Fast wavelet transform
Hankel transform, the determinant of the Hankel matrix
Discrete Chebyshev transform
Equivalent, up to a diagonal scaling, to a discrete cosine transform
Finite Legendre transform
Spherical Harmonic transform
Irrational base discrete weighted transform
Number-theoretic transform
Stirling transform
Discrete-time transforms
These transforms have a continuous frequency domain:
Discrete-time Fourier transform
Z-transform
Data-dependent transforms
Karhunen–Loève transform
Other transforms
Affine transformation (computer graphics)
Bäcklund transform
Bilinear transform
Box–Muller transform
Burrows–Wheeler transform (data compression)
Chirplet transform
Distance transform
Fractal transform
Gelfand transform
Hadamard transform
Hough transform (digital image processing)
Inverse scattering transform
Legendre transformation
Möbius transformation
Perspective transform (computer graphics)
Sequence transform
Watershed transform (
|
https://en.wikipedia.org/wiki/Current%E2%80%93voltage%20characteristic
|
A current–voltage characteristic or I–V curve (current–voltage curve) is a relationship, typically represented as a chart or graph, between the electric current through a circuit, device, or material, and the corresponding voltage, or potential difference, across it.
In electronics
In electronics, the relationship between the direct current (DC) through an electronic device and the DC voltage across its terminals is called a current–voltage characteristic of the device. Electronic engineers use these charts to determine basic parameters of a device and to model its behavior in an electrical circuit. These characteristics are also known as I–V curves, referring to the standard symbols for current and voltage.
In electronic components with more than two terminals, such as vacuum tubes and transistors, the current–voltage relationship at one pair of terminals may depend on the current or voltage on a third terminal. This is usually displayed on a more complex current–voltage graph with multiple curves, each one representing the current–voltage relationship at a different value of current or voltage on the third terminal.
For example the diagram at right shows a family of I–V curves for a MOSFET as a function of drain voltage with overvoltage (VGS − Vth) as a parameter.
The simplest I–V curve is that of a resistor, which according to Ohm's law exhibits a linear relationship between the applied voltage and the resulting electric current; the current is proportional to the voltage, so the I–V curve is a straight line through the origin with positive slope. The reciprocal of the slope is equal to the resistance.
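As an illustrative aside (not taken from the article), the linear resistor curve can be contrasted with a non-linear device by tabulating both from their textbook equations; the diode saturation current, ideality factor and thermal voltage below are assumed example values.
import numpy as np
V = np.linspace(-0.2, 0.7, 10)             # applied voltages in volts
# Resistor: Ohm's law, a straight line through the origin with slope 1/R.
R = 100.0                                   # ohms
I_resistor = V / R
# Diode: Shockley equation I = Is*(exp(V/(n*Vt)) - 1), a strongly non-linear curve.
Is, n, Vt = 1e-12, 1.0, 0.02585             # example saturation current, ideality factor, thermal voltage (~300 K)
I_diode = Is * (np.exp(V / (n * Vt)) - 1.0)
print(np.column_stack((V, I_resistor, I_diode)))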
The I–V curve of an electrical component can be measured with an instrument called a curve tracer. The transconductance and Early voltage of a transistor are examples of parameters traditionally measured from the device's I–V curve.
Types of I–V curves
The shape of an electrical component's characteristic curve reveals much about its operating properti
|
https://en.wikipedia.org/wiki/Superellipsoid
|
In mathematics, a superellipsoid (or super-ellipsoid) is a solid whose horizontal sections are superellipses (Lamé curves) with the same squareness parameter $\epsilon_2$, and whose vertical sections through the center are superellipses with the squareness parameter $\epsilon_1$. It is a generalization of an ellipsoid, which is a special case when $\epsilon_1 = \epsilon_2 = 1$.
Superellipsoids as computer graphics primitives were popularized by Alan H. Barr (who used the name "superquadrics" to refer to both superellipsoids and supertoroids). In modern computer vision and robotics literatures, superquadrics and superellipsoids are used interchangeably, since superellipsoids are the most representative and widely utilized shape among all the superquadrics.
Superellipsoids have a rich shape vocabulary, including cuboids, cylinders, ellipsoids, octahedra and their intermediates. They have become an important geometric primitive widely used in computer vision, robotics, and physical simulation. The main advantage of describing objects and environments with superellipsoids is the conciseness and expressiveness of the representation. Furthermore, a closed-form expression of the Minkowski sum between two superellipsoids is available. This makes it a desirable geometric primitive for robot grasping, collision detection, and motion planning. Useful tools and algorithms for superquadric visualization, sampling, and recovery have been open-sourced.
Special cases
A handful of notable mathematical figures can arise as special cases of superellipsoids given the correct set of values, which are depicted in the above graphic:
Cylinder
Sphere
Steinmetz solid
Bicone
Regular octahedron
Cube, as a limiting case where the exponents tend to infinity
Piet Hein's supereggs are also special cases of superellipsoids.
Formulas
Basic (normalized) superellipsoid
The basic superellipsoid is defined by the implicit function
$f(x, y, z) = \left( |x|^{2/\epsilon_2} + |y|^{2/\epsilon_2} \right)^{\epsilon_2/\epsilon_1} + |z|^{2/\epsilon_1},$
with the solid given by $f(x, y, z) \le 1$ and its surface by $f(x, y, z) = 1$.
The parameters $\epsilon_1$ and $\epsilon_2$ are positive real numbers that control the squareness of the shape.
The surface of the superellipsoid is de
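A minimal sketch of the inside–outside test implied by the implicit function above, assuming the normalized form reconstructed in this section:
def superellipsoid_f(x, y, z, eps1, eps2):
    # Implicit function of the basic (normalized) superellipsoid; a point is
    # inside the solid when the returned value is <= 1.
    xy = abs(x) ** (2.0 / eps2) + abs(y) ** (2.0 / eps2)
    return xy ** (eps2 / eps1) + abs(z) ** (2.0 / eps1)
print(superellipsoid_f(0.5, 0.5, 0.5, 1.0, 1.0) <= 1.0)   # True: inside the unit sphere (eps1 = eps2 = 1)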
|
https://en.wikipedia.org/wiki/Antenna%20effect
|
The antenna effect, more formally plasma induced gate oxide damage, is an effect that can potentially cause yield and reliability problems during the manufacture of MOS integrated circuits. Factories (fabs) normally supply antenna rules, which are rules that must be obeyed to avoid this problem. A violation of such rules is called an antenna violation. The word antenna is something of a misnomer in this context—the problem is really the collection of charge, not the normal meaning of antenna, which is a device for converting electromagnetic fields to/from electrical currents. Occasionally the phrase antenna effect is used in this context, but this is less common since there are many effects, and the phrase does not make clear which is meant.
Figure 1(a) shows a side view of a typical net in an integrated circuit. Each net will include at least one driver, which must contain a source or drain diffusion (in newer technology implantation is used), and at least one receiver, which will consist of a gate electrode over a thin gate dielectric (see Figure 2 for a detailed view of a MOS transistor). Since the gate dielectric is so thin, only a few molecules thick, a big worry is breakdown of this layer. This can happen if the net somehow acquires a voltage somewhat higher than the normal operating voltage of the chip. (Historically, the gate dielectric has been silicon dioxide, so most of the literature refers to gate oxide damage or gate oxide breakdown. As of 2007, some manufacturers are replacing this oxide with various high-κ dielectric materials which may or may not be oxides, but the effect is still the same.)
Once the chip is fabricated, this cannot happen, since every net has at least some source/drain implant connected to it. The source/drain implant forms a diode, which breaks down at a lower voltage than the oxide (either forward diode conduction, or reverse breakdown), and does so non-destructively. This protects the gate oxide.
However, during th
|
https://en.wikipedia.org/wiki/Keepalive
|
A keepalive (KA) is a message sent by one device to another to check that the link between the two is operating, or to prevent the link from being broken.
Description
Once a TCP connection has been established, it is defined to be valid until one side closes it; in principle, the connection then remains open indefinitely. In practice, however, many firewall and NAT systems will close a connection if there has been no activity for some period of time. A keepalive signal can be used to persuade such intermediate hosts not to close the connection due to inactivity. It is also possible that one host is no longer listening (e.g. after an application or system crash); in that case the connection is effectively dead, but no FIN was ever sent. A keepalive packet can then be used to probe the connection and check whether it is still intact.
A keepalive signal is often sent at predefined intervals, and plays an important role on the Internet. After a signal is sent, if no reply is received, the link is assumed to be down and future data will be routed via another path until the link is up again. A keepalive signal can also be used to indicate to Internet infrastructure that the connection should be preserved. Without a keepalive signal, intermediate NAT-enabled routers can drop the connection after timeout.
Since the only purpose is to find links that do not work or to indicate connections that should be preserved, keepalive messages tend to be short and not take much bandwidth. However, their precise format and usage terms depend on the communication protocol.
TCP keepalive
Transmission Control Protocol (TCP) keepalives are an optional feature, and if included must default to off. The keepalive packet contains no data. In an Ethernet network, this results in frames of minimum size (64 bytes). There are three parameters related to keepalive:
Keepalive time is the duration between two keepalive transmissions in
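A hedged sketch of enabling TCP keepalive from an application, using Python's standard socket module; the TCP_KEEP* option names are Linux-specific and the values chosen are purely illustrative.
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)      # turn keepalive on (it defaults to off)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)    # keepalive time: idle seconds before the first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)   # keepalive interval: seconds between unanswered probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)      # keepalive probes: unanswered probes before the link is declared dead
sock.connect(("example.com", 80))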
|
https://en.wikipedia.org/wiki/Gradient%20pattern%20analysis
|
Gradient pattern analysis (GPA) is a geometric computing method for characterizing geometrical bilateral symmetry breaking of an ensemble of symmetric vectors regularly distributed in a square lattice. Usually, the lattice of vectors represent the first-order gradient of a scalar field, here an M x M square amplitude matrix. An important property of the gradient representation is the following: A given M x M matrix where all amplitudes are different results in an M x M gradient lattice containing asymmetric vectors. As each vector can be characterized by its norm and phase, variations in the amplitudes can modify the respective gradient pattern.
The original concept of GPA was introduced by Rosa, Sharma and Valdivia in 1999. Usually GPA is applied for spatio-temporal pattern analysis in physics and environmental sciences operating on time-series and digital images.
Calculation
By connecting all vectors using a Delaunay triangulation criterion it is possible to characterize gradient asymmetries computing the so-called gradient asymmetry coefficient, that has been defined as:
,
where is the total number of asymmetric vectors, is the number of Delaunay connections among them and the property
is valid for any gradient square lattice.
As the asymmetry coefficient is very sensitive to small changes in the phase and modulus of each gradient vector, it can distinguish complex variability patterns (bilateral asymmetry) even when they are very similar but consist of a very fine structural difference. Note that, unlike most of the statistical tools, the GPA does not rely on the statistical properties of the data but
depends solely on the local symmetry properties of the correspondent gradient pattern.
For a complex extended pattern (matrix of amplitudes of a spatio-temporal pattern) composed by locally asymmetric fluctuations, is nonzero, defining different classes of irregular fluctuation patterns (1/f noise, chaotic, reactive-diffusive, etc.).
Besides o
|
https://en.wikipedia.org/wiki/Notation%20for%20differentiation
|
In differential calculus, there is no single uniform notation for differentiation. Instead, various notations for the derivative of a function or variable have been proposed by various mathematicians. The usefulness of each notation varies with the context, and it is sometimes advantageous to use more than one notation in a given context. The most common notations for differentiation (and its opposite operation, the antidifferentiation or indefinite integration) are listed below.
Leibniz's notation
The original notation employed by Gottfried Leibniz is used throughout mathematics. It is particularly common when the equation $y = f(x)$ is regarded as a functional relationship between dependent and independent variables $y$ and $x$. Leibniz's notation makes this relationship explicit by writing the derivative as
$\frac{dy}{dx}.$
Furthermore, the derivative of $f$ at $x$ is therefore written
$\frac{df}{dx}(x)$, $\frac{d f(x)}{dx}$, or $\frac{d}{dx} f(x).$
Higher derivatives are written as
$\frac{d^2 y}{dx^2}, \quad \frac{d^3 y}{dx^3}, \quad \ldots, \quad \frac{d^n y}{dx^n}.$
This is a suggestive notational device that comes from formal manipulations of symbols, as in,
$\frac{d^2 y}{dx^2} = \frac{d\!\left(\frac{dy}{dx}\right)}{dx}.$
The value of the derivative of $y$ at a point $x = a$ may be expressed in two ways using Leibniz's notation:
$\left.\frac{dy}{dx}\right|_{x=a}$ or $\frac{dy}{dx}(a)$.
Leibniz's notation allows one to specify the variable for differentiation (in the denominator). This is especially helpful when considering partial derivatives. It also makes the chain rule easy to remember and recognize:
$\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}.$
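As a brief programmatic aside (not part of the notation's definition), a computer algebra system such as SymPy evaluates the derivatives written above; the function chosen here is an arbitrary example.
import sympy as sp
x = sp.symbols('x')
y = sp.sin(x**2)
dydx = sp.diff(y, x)        # dy/dx
d2ydx2 = sp.diff(y, x, 2)   # d^2y/dx^2
print(dydx)                 # 2*x*cos(x**2), consistent with the chain rule dy/du * du/dx for u = x**2
print(d2ydx2)               # -4*x**2*sin(x**2) + 2*cos(x**2)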
Leibniz's notation for differentiation does not require assigning a meaning to symbols such as $dx$ or $dy$ (known as differentials) on their own, and some authors do not attempt to assign these symbols meaning. Leibniz treated these symbols as infinitesimals. Later authors have assigned them other meanings, such as infinitesimals in non-standard analysis, or exterior derivatives. Commonly, $dx$ is left undefined or equated with $\Delta x$, while $dy$ is assigned a meaning in terms of $dx$, via the equation
$dy = \frac{dy}{dx} \cdot dx,$
which may also be written, e.g.,
$df = f'(x)\,dx$
(see below). Such equations give rise to the terminology found in some texts wherein the derivative is referred to as the "differential coefficie
|
https://en.wikipedia.org/wiki/Sinc%20function
|
In mathematics, physics and engineering, the sinc function, denoted by $\operatorname{sinc}(x)$, has two forms, normalized and unnormalized.
In mathematics, the historical unnormalized sinc function is defined for $x \neq 0$ by
$\operatorname{sinc}(x) = \frac{\sin x}{x}.$
Alternatively, the unnormalized sinc function is often called the sampling function, indicated as Sa(x).
In digital signal processing and information theory, the normalized sinc function is commonly defined for $x \neq 0$ by
$\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}.$
In either case, the value at $x = 0$ is defined to be the limiting value
$\operatorname{sinc}(0) := \lim_{x \to 0} \frac{\sin(ax)}{ax} = 1$ for all real $a \neq 0$ (the limit can be proven using the squeeze theorem).
The normalization causes the definite integral of the function over the real numbers to equal 1 (whereas the same integral of the unnormalized sinc function has a value of $\pi$). As a further useful property, the zeros of the normalized sinc function are the nonzero integer values of $x$.
The normalized sinc function is the Fourier transform of the rectangular function with no scaling. It is used in the concept of reconstructing a continuous bandlimited signal from uniformly spaced samples of that signal.
The only difference between the two definitions is in the scaling of the independent variable (the $x$ axis) by a factor of $\pi$. In both cases, the value of the function at the removable singularity at zero is understood to be the limit value 1. The sinc function is then analytic everywhere and hence an entire function.
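For a quick numerical check of the definitions above (an aside, not part of the article): NumPy's built-in sinc implements the normalized form, and the unnormalized form follows by rescaling the argument.
import numpy as np
# np.sinc is the normalized sinc: np.sinc(x) = sin(pi*x)/(pi*x), with sinc(0) = 1.
x = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
print(np.sinc(x))                              # [1.0, 0.63662, 0.0, 0.0, 0.0] -> zeros at the nonzero integers
unnormalized = lambda t: np.sinc(t / np.pi)    # sin(t)/t, obtained by rescaling the argument
print(unnormalized(np.array([np.pi, 1.0])))    # [0.0, 0.84147], since sin(pi)/pi = 0 and sin(1)/1 ≈ 0.8415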
The function has also been called the cardinal sine or sine cardinal function. The term sinc was introduced by Philip M. Woodward in his 1952 article "Information theory and inverse probability in telecommunication", in which he said that the function "occurs so often in Fourier analysis and its applications that it does seem to merit some notation of its own", and his 1953 book Probability and Information Theory, with Applications to Radar.
The function itself was first mathematically derived in this form by Lord Rayleigh in his expression (Rayleigh's Formula) for the zeroth-order spherical Bessel function of the first
|
https://en.wikipedia.org/wiki/List%20of%20types%20of%20numbers
|
Numbers can be classified according to how they are represented or according to the properties that they have.
Main types
Natural numbers (ℕ): The counting numbers {1, 2, 3, ...} are commonly called natural numbers; however, other definitions include 0, so that the non-negative integers {0, 1, 2, 3, ...} are also called natural numbers. Natural numbers including 0 are also sometimes called whole numbers.
Integers (ℤ): Positive and negative counting numbers, as well as zero: {..., −3, −2, −1, 0, 1, 2, 3, ...}.
Rational numbers (ℚ): Numbers that can be expressed as a ratio of an integer to a non-zero integer. All integers are rational, but there are rational numbers that are not integers, such as 1/2.
Real numbers (ℝ): Numbers that correspond to points along a line. They can be positive, negative, or zero. All rational numbers are real, but the converse is not true.
Irrational numbers: Real numbers that are not rational.
Imaginary numbers: Numbers that equal the product of a real number and the square root of −1. The number 0 is both real and purely imaginary.
Complex numbers (ℂ): Includes real numbers, imaginary numbers, and sums and differences of real and imaginary numbers.
Hypercomplex numbers include various number-system extensions: quaternions (ℍ), octonions (𝕆), and other less common variants.
p-adic numbers: Various number systems constructed using limits of rational numbers, according to notions of "limit" different from the one used to construct the real numbers.
Number representations
Decimal: The standard Hindu–Arabic numeral system using base ten.
Binary: The base-two numeral system used by computers, with digits 0 and 1.
Ternary: The base-three numeral system with 0, 1, and 2 as digits.
Quaternary: The base-four numeral system with 0, 1, 2, and 3 as digits.
Hexadecimal: Base 16, widely used by computer system designers and programmers, as it provides a more human-friendly representation of binary-coded values.
Octal: Base 8, occasionally used b
|
https://en.wikipedia.org/wiki/Embedded%20HTTP%20server
|
An embedded HTTP server is an HTTP server used in an embedded system.
The HTTP server is usually implemented as a software component of an application (embedded) system that controls and/or monitors a machine with mechanical and/or electrical parts.
The HTTP server implements the HTTP protocol in order to allow communications with one or more local or remote users using a browser. The aim is to let users interact with information provided by the embedded system (user interface, data monitoring, data logging, data configuration, etc.) via the network, without using the traditional peripherals required for local user interfaces (display, keyboard, etc.).
In some cases the functionalities provided via HTTP server allow also program-to-program communications, e.g. to retrieve data logged about the monitored machine, etc.
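A minimal sketch of such a server using Python's standard http.server module; the /status path, the port and the read_temperature_c() helper are hypothetical, standing in for whatever data the embedded system actually exposes.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
def read_temperature_c():
    # Hypothetical sensor read; a real embedded system would query hardware here.
    return 21.5
class MonitorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"temperature_c": read_temperature_c()}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)
if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MonitorHandler).serve_forever()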
Usages
Examples of usage within an embedded application might be (e.g.):
to provide a thin client interface for a traditional application;
to provide indexing, reporting, and debugging tools during the development stage;
to implement a protocol for the distribution and acquisition of information to be displayed in the regular interface — possibly a web service, and possibly using XML as the data format;
to develop a web application.
Advantages
There are a few advantages to using HTTP to perform the above:
HTTP is a well studied cross-platform protocol and there are mature implementations freely available;
HTTP is seldom blocked by firewalls and intranet routers;
HTTP clients (e.g. web browsers) are readily available with all modern computers;
there is a growing tendency of using embedded HTTP servers in applications that parallels the rising trends of home-networking and ubiquitous computing.
Typical requirements
Natural limitations of the platforms where an embedded HTTP server runs contribute to the list of the non-functional requirements of the embedded, or more precise, embeddable HTTP server. Some of these requirements are the followin
|
https://en.wikipedia.org/wiki/Reliable%20Data%20Transfer
|
Reliable Data Transfer is a topic in computer networking concerning the transfer of data across unreliable channels. Unreliability is one of the drawbacks of packet switched networks such as the modern internet, as packet loss can occur for a variety of reasons, and delivery of packets is not guaranteed to happen in the order that the packets were sent. Therefore, in order to create long-term data streams over the internet, techniques have been developed to provide reliability, which are generally implemented in the Transport layer of the internet protocol suite.
In instructional materials, the topic is often presented in the form of theoretical example protocols which are themselves referred to as "RDT", in order to introduce students to the problems and solutions encountered in Transport layer protocols such as the Transmission Control Protocol. These sources often describe a pseudo-API and include Finite-state machine diagrams to illustrate how such a protocol might be implemented, as well as a version history. These details are generally consistent between sources, yet are often left uncited, so the origin of this theoretical RDT protocol is unclear.
Example Versions
Sources that describe an example RDT protocol often provide a "version history" to illustrate the development of modern Transport layer techniques, generally resembling the below:
Reliable Data Transfer 1.0
With Reliable Data Transfer 1.0, the data can only be transferred via a reliable data channel. It is the simplest of the Reliable Data Transfer protocols in terms of algorithmic processing.
Reliable Data Transfer 2.0
Reliable Data Transfer 2.0 supports reliable data transfer over unreliable data channels. It uses a checksum to detect bit errors. The receiver sends an acknowledgement (ACK) message if the data arrives intact; if the checksum does not match, it sends a negative acknowledgement (NAK) message and the data is retransmitted.
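A toy sketch of the RDT 2.0 idea as described above (checksum, ACK/NAK, retransmission); the channel simulation, error rate and sum-based checksum are illustrative choices only, not part of any standardized protocol.
import random
def checksum(data: bytes) -> int:
    return sum(data) % 256
def make_packet(data: bytes) -> dict:
    return {"data": data, "checksum": checksum(data)}
def unreliable_channel(packet: dict, error_rate: float = 0.3) -> dict:
    # Occasionally flip the bits of the first payload byte to model corruption.
    if packet["data"] and random.random() < error_rate:
        corrupted = bytes([packet["data"][0] ^ 0xFF]) + packet["data"][1:]
        return {"data": corrupted, "checksum": packet["checksum"]}
    return packet
def receive(packet: dict) -> str:
    return "ACK" if checksum(packet["data"]) == packet["checksum"] else "NAK"
def send(data: bytes) -> int:
    attempts = 0
    while True:
        attempts += 1
        if receive(unreliable_channel(make_packet(data))) == "ACK":
            return attempts   # number of transmissions needed
print(send(b"hello"))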
Reliable Data Transfer 2.1
Reliable Data Transfer 2.1 also suppor
|
https://en.wikipedia.org/wiki/Overlap%E2%80%93add%20method
|
In signal processing, the overlap–add method is an efficient way to evaluate the discrete convolution of a very long signal $x[n]$ with a finite impulse response (FIR) filter $h[n]$:
$y[n] = x[n] * h[n] \triangleq \sum_{m=1}^{M} h[m] \cdot x[n-m],$
where $h[m] = 0$ for $m$ outside the region $[1, M]$.
This article uses common abstract notations, such as or in which it is understood that the functions should be thought of in their totality, rather than at specific instants (see Convolution#Notation).
The concept is to divide the problem into multiple convolutions of h[n] with short segments of $x[n]$:
$x_k[n] \triangleq \begin{cases} x[n + kL], & n = 1, 2, \ldots, L \\ 0, & \text{otherwise}, \end{cases}$
where L is an arbitrary segment length. Then:
$x[n] = \sum_{k} x_k[n - kL],$
and y[n] can be written as a sum of short convolutions:
$y[n] = \left(\sum_{k} x_k[n - kL]\right) * h[n] = \sum_{k} \left( x_k[n - kL] * h[n] \right),$
where the linear convolution $x_k[n] * h[n]$ is zero outside the region $[1, L + M - 1]$. And for any parameter $N \ge L + M - 1$ it is equivalent to the N-point circular convolution of $x_k[n]$ with $h[n]$ in the region $[1, N]$. The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem:
$y_k[n] = \operatorname{IDFT}_N\!\big(\operatorname{DFT}_N(x_k[n]) \cdot \operatorname{DFT}_N(h[n])\big),$
where:
DFTN and IDFTN refer to the Discrete Fourier transform and its inverse, evaluated over N discrete points, and
N is customarily chosen such that it is an integer power of 2, and the transforms are implemented with the FFT algorithm, for efficiency.
Pseudocode
The following is a pseudocode of the algorithm:
(Overlap-add algorithm for linear convolution)
h = FIR_filter
M = length(h)
Nx = length(x)
N = 8 × 2^ceiling( log2(M) ) (8 times the smallest power of two bigger than filter length M. See next section for a slightly better choice.)
step_size = N - (M-1) (L in the text above)
H = DFT(h, N)
position = 0
y(1 : Nx + M-1) = 0
while position + step_size ≤ Nx do
y(position+(1:N)) = y(position+(1:N)) + IDFT(DFT(x(position+(1:step_size)), N) × H)
position = position + step_size
end
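For readers who prefer runnable code, the following NumPy translation of the pseudocode above also handles the final partial segment; the FFT-size heuristic mirrors the one in the pseudocode and is only a reasonable default, not part of the method's definition.
import numpy as np
def overlap_add(x, h, N=None):
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    M = len(h)
    if N is None:
        N = 8 * 2 ** int(np.ceil(np.log2(M)))   # same heuristic as the pseudocode
    L = N - (M - 1)                             # segment length (step_size above)
    H = np.fft.rfft(h, N)
    y = np.zeros(len(x) + M - 1)
    for pos in range(0, len(x), L):
        yk = np.fft.irfft(np.fft.rfft(x[pos:pos + L], N) * H, N)   # N-point circular convolution
        end = min(pos + N, len(y))
        y[pos:end] += yk[:end - pos]            # overlap and add
    return y
# Sanity check against direct linear convolution.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = rng.standard_normal(31)
assert np.allclose(overlap_add(x, h), np.convolve(x, h))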
Efficiency considerations
When the DFT and IDFT are implemented by the FFT algorithm, the pseudocode above requires about complex multiplications for the FFT, product of arrays, and IFFT. Each iteration produces output samples, so the number of compl
|
https://en.wikipedia.org/wiki/E-SCREEN
|
E-SCREEN is a cell proliferation assay based on the enhanced proliferation of human breast cancer cells (MCF-7) in the presence of estrogen active substances. The E-SCREEN test is a tool to easily and rapidly assess estrogenic activity of suspected xenoestrogens (singly or in combination). This bioassay measures estrogen-induced increase of the number of human breast cancer cell, which is biologically equivalent to the increase of mitotic activity in tissues of the genital tract. It was originally developed by Soto et al. and was included in the first version of the OECD Conceptual Framework for Testing and Assessment of Endocrine Disrupters published in 2012. However, due to failed validation, it was not included in the updated version of the framework published in 2018.
The E-SCREEN test
The E-SCREEN cell proliferation assay is performed with the human MCF-7 breast cancer cell line, an established estrogenic cell line that endogenously expresses ERα.
Human MCF-7 cells are cultivated in Dulbecco’s modified Eagle’s medium (DMEM) with fetal bovine serum (FBS) and phenol red as buffer tracer (culture medium), at 37 °C, in an atmosphere of 5% CO₂ and 95% air under saturating humidity. To accomplish the E-SCREEN assay the cells are trypsinized and plated in well culture plates. Cells are allowed to attach for 24 h, and the seeding medium is then removed and replaced with the experimental culture medium (phenol red-free DMEM with charcoal-dextran-treated, steroid-free fetal bovine serum).
For assaying suspected estrogen active substances, a range of concentrations of the test compound is added to the experimental medium. In each experiment, the cells are exposed to a dilution series of 17β-estradiol (0.1 pM–1000 pM) for providing a positive control (standard dose-response curve), and treated only with hormone-free medium as a negative control. The bioassay ends on day 6 (late exponential phase) by removing the media from the wells and fixing the cells with trichloroace
|
https://en.wikipedia.org/wiki/Operating%20system%20Wi-Fi%20support
|
Operating system Wi-Fi support is the support in the operating system for Wi-Fi and usually consists of two pieces: driver level support, and configuration and management support.
Driver support is usually provided by manufacturers of the chipset hardware or by end manufacturers. Drivers are also available for Unix-like systems such as Linux, sometimes through open source projects.
Configuration and management support consists of software to enumerate, join, and check the status of available Wi-Fi networks. This also includes support for various encryption methods. These systems are often provided by the operating system backed by a standard driver model. In most cases, drivers emulate an Ethernet device and use the configuration and management utilities built into the operating system. In cases where built-in configuration and management support is non-existent or inadequate, hardware manufacturers may include their own software to handle the respective tasks.
Microsoft Windows
Microsoft Windows has comprehensive driver-level support for Wi-Fi, the quality of which depends on the hardware manufacturer. Hardware manufacturers almost always ship Windows drivers with their products. Windows ships with very few Wi-Fi drivers and depends on the original equipment manufacturers (OEMs) and device manufacturers to make sure users get drivers. Configuration and management depend on the version of Windows.
Earlier versions of Windows, such as 98, ME and 2000 do not have built-in configuration and management support and must depend on software provided by the manufacturer
Microsoft Windows XP has built-in configuration and management support. The original shipping version of Windows XP included rudimentary support which was dramatically improved in Service Pack 2. Support for WPA2 and some other security protocols require updates from Microsoft. Many hardware manufacturers include their own software and require the user to disable Windows’ built-in Wi-Fi support.
Windows Vista, Win
|
https://en.wikipedia.org/wiki/Food%20rheology
|
Food rheology is the study of the rheological properties of food, that is, the consistency and flow of food under tightly specified conditions. The consistency, degree of fluidity, and other mechanical properties are important in understanding how long food can be stored, how stable it will remain, and in determining food texture. The acceptability of food products to the consumer is often determined by food texture, such as how spreadable and creamy a food product is. Food rheology is important in quality control during food manufacture and processing. Food rheology terms have been noted since ancient times. In ancient Egypt, bakers judged the consistency of dough by rolling it in their hands.
Overview
There is a large body of literature on food rheology because the study of food rheology entails unique factors beyond an understanding of the basic rheological dynamics of the flow and deformation of matter. Food can be classified according to its rheological state, such as a solid, gel, liquid, or emulsion, with associated rheological behaviors, and its rheological properties can be measured. These properties affect the design of food processing plants, as well as shelf life and other important factors, including sensory properties that appeal to consumers. Because foods are structurally complex, often a mixture of fluids and solids with varying properties within a single mass, the study of food rheology is more complicated than study in fields such as the rheology of polymers. However, food rheology is something we experience every day through our perception of food texture (see below), and the basic concepts of food rheology apply well to polymer physics, oil flow, and so on. For this reason, examples from food rheology are didactically useful for explaining the dynamics of other, less familiar materials. Ketchup is commonly used as an example of a Bingham fluid, and its flow behavior can be compared to that of a polymer melt.
Psychorheology
Psychorheology is the
|
https://en.wikipedia.org/wiki/Chamber%20of%20Computer%20Engineers%20of%20Turkey
|
Chamber of Computer Engineers of Turkey (abbreviated BMO) was founded on 2 June 2012.
Formerly, the computer engineers in Turkey were members of the Chamber of Electrical Engineers of Turkey, but on 9 March 2011 they decided to form their own chamber. The regulatory board announced that about 6,500 new computer engineers (including graduates of related undergraduate programmes) graduate from universities each year. At the general assembly of the Union of Chambers of Turkish Engineers and Architects (UCTEA) on 2 June 2012, the request was approved, and the chamber became the 24th member of the union.
|
https://en.wikipedia.org/wiki/Spurious%20tone
|
In electronics (radio in particular), a spurious tone (also known as an interfering tone, a continuous tone or a spur) denotes a tone in an electronic circuit which interferes with a signal and is often masked underneath that signal. Spurious tones are any tones other than a fundamental tone or its harmonics. They also include tones generated within the back-to-back connected transmit and receive terminal or channel units, when the fundamental is applied to the transmit terminal or channel-unit input.
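One way to make this definition concrete is to inspect a signal's spectrum and flag any strong component that is neither the fundamental nor one of its harmonics. The sketch below is a simplified illustration rather than a standard measurement procedure; the windowing choice, threshold and frequency tolerance are assumptions.

import numpy as np

def find_spurs(signal, fs, fundamental_hz, threshold_db=-60.0, tol_hz=2.0):
    """Return (frequency, level-in-dB) pairs that are neither the fundamental nor its harmonics."""
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mag_db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)  # dB relative to the strongest bin
    spurs = []
    for f, level in zip(freqs, mag_db):
        if f == 0.0 or level < threshold_db:
            continue
        n = round(f / fundamental_hz)
        if n >= 1 and abs(f - n * fundamental_hz) <= tol_hz:
            continue  # fundamental or harmonic: not counted as a spur
        spurs.append((f, level))
    return spurs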
|
https://en.wikipedia.org/wiki/Tasmanian%20coniferous%20shrubbery
|
The vegetation in Tasmania's alpine environments is predominantly woody and shrub-like. One vegetation type is coniferous shrubbery, characterised by the gymnosperm species Microcachrys tetragona, Pherosphaera hookeriana, Podocarpus lawrencei, and Diselma archeri. The distribution of these species is linked to abiotic factors including edaphic conditions and fire frequency, and climate change increasingly threatens species survival. Conservation and management of coniferous shrubbery are necessary considering that the paleoendemic species Microcachrys, Pherosphaera and Diselma have persisted in western Tasmanian environments for millions of years.
Distribution
These coniferous shrub species are restricted to subalpine and alpine heathlands in western Tasmania, with the exception of Podocarpus lawrencei, which also occurs on the Australian mainland. The alpine environments where these conifers occur have high levels of conifer endemism and provide an ecologically important habitat for coniferous shrub species.
Coniferous shrub species can be observed in Mount Field National Park in Tasmania's south west along the Tarn Shelf. All species can be observed in rocky environments with shallow soil above .
Ecology
Both the alpine environment and the harsh maritime climate impose the pressures and limitations of wind exposure and ice abrasion, which favour the woody, shrub-like habit of coniferous shrubbery. The lack of protective snow cover on Tasmanian mountains means that vegetation must be mechanically resistant to these elements, which makes these mountains an ecologically important habitat for coniferous shrub species. This contrasts with the alps of mainland Australia or New Zealand, where prolonged snow lie leads to the development of a grassland-herbland vegetation community.
Low productivity of the environment is indicated through the slow growth habit of the conifers, and the effects of fire are detrimental to the species. As well as this, physiological drought intolerance in conifers could in
|
https://en.wikipedia.org/wiki/List%20of%20algebraic%20coding%20theory%20topics
|
This is a list of algebraic coding theory topics.
Algebraic coding theory
|
https://en.wikipedia.org/wiki/List%20of%20unusual%20units%20of%20measurement
|
An unusual unit of measurement is a unit of measurement that does not form part of a coherent system of measurement, especially because its exact quantity may not be well known or because it may be an inconvenient multiple or fraction of a base unit.
Many of the unusual units of measurements listed here are colloquial measurements, units devised to compare a measurement to common and familiar objects.
Length
Hammer unit
Valve's Source game engine uses the Hammer unit as its base unit of length. The unit is named after Source's official map-creation software, Hammer. The exact definition varies from game to game, but a Hammer unit is usually defined as a sixteenth of a foot (16 Hammer units = 1 foot). This means that 1 Hammer unit is equal to exactly 19.05 mm (0.75 inches).
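Because the unit is defined as a fixed fraction of a foot, conversion to metric is simple arithmetic. The helper below is a hypothetical sketch using the usual 16-units-per-foot convention noted above; as stated, individual games may define the unit differently.

FOOT_IN_METRES = 0.3048            # exact, by definition of the international foot
HAMMER_UNITS_PER_FOOT = 16         # the usual convention; varies between games

def hammer_to_metres(units):
    """Convert Source-engine Hammer units to metres."""
    return units * FOOT_IN_METRES / HAMMER_UNITS_PER_FOOT

print(hammer_to_metres(16))  # 0.3048 m, i.e. one foot
print(hammer_to_metres(1))   # 0.01905 m (19.05 mm)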
Rack unit
One rack unit (U) is 1.75 inches (44.45 mm) and is used to measure rack-mountable audiovisual, computing and industrial equipment. Rack units are typically denoted without a space between the number of units and the 'U'. Thus, a 4U server enclosure (case) is 7 inches (177.8 mm) high, or more practically, built to occupy a vertical space seven inches high, with sufficient clearance to allow movement of adjacent hardware.
Hand
The hand is a non-SI unit of length equal to exactly 4 inches (101.6 mm). It is normally used to measure the height of horses in some English-speaking countries, including Australia, Canada, Ireland, the United Kingdom, and the United States. It is customary when measuring in hands to use a point to indicate inches (quarter-hands) and not tenths of a hand. For example, 15.1 hands normally means 15 hands, 1 inch (5 ft 1 in), rather than 15 and one-tenth hands.
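Because the point in hands notation marks whole inches rather than tenths, converting a written height such as 15.1 requires a small parsing step. The helper below is an illustrative sketch, not an established library function.

def hands_to_inches(notation):
    """Convert hands notation (e.g. '15.1' = 15 hands 1 inch) to inches."""
    hands_part, _, inch_part = notation.partition(".")
    hands = int(hands_part)
    inches = int(inch_part) if inch_part else 0
    if not 0 <= inches <= 3:
        raise ValueError("the figure after the point is whole inches and must be 0-3")
    return hands * 4 + inches

print(hands_to_inches("15.1"))  # 61 inches, i.e. 5 ft 1 in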
Light-nanosecond
The light-nanosecond is defined as exactly 29.9792458 cm. It was popularized in information technology as a unit of distance by Grace Hopper as the distance which a photon could travel in one billionth of a second (roughly 30 cm or one foot): "The speed of light is one foot per nanosecond."
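The quoted figure follows directly from the defined speed of light, as the short check below shows.

SPEED_OF_LIGHT_M_PER_S = 299_792_458   # exact, by definition of the metre

light_nanosecond_cm = SPEED_OF_LIGHT_M_PER_S * 1e-9 * 100  # metres per nanosecond, converted to cm
print(light_nanosecond_cm)  # 29.9792458 cm, roughly one foot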
Metric feet
A metric foot, defined as 300 mm (30 cm), has been used occasionally in the UK but has never b
|
https://en.wikipedia.org/wiki/List%20of%20algebras
|
This is a list of possibly nonassociative algebras. An algebra is a module on which a multiplication of module elements is also defined, with this multiplication compatible with multiplication by scalars from the base ring.
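Spelled out, the compatibility condition amounts to the multiplication being bilinear over the base ring; for an algebra A over a commutative ring R this reads:

\[
(x+y)z = xz + yz, \qquad x(y+z) = xy + xz, \qquad r(xy) = (rx)y = x(ry)
\qquad \text{for all } x, y, z \in A,\ r \in R.
\]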
*-algebra
Akivis algebra
Algebra for a monad
Albert algebra
Alternative algebra
Azumaya algebra
Banach algebra
Birman–Wenzl algebra
Boolean algebra
Borcherds algebra
Brauer algebra
C*-algebra
Central simple algebra
Clifford algebra
Cluster algebra
Dendriform algebra
Differential graded algebra
Differential graded Lie algebra
Exterior algebra
F-algebra
Filtered algebra
Flexible algebra
Freudenthal algebra
Genetic algebra
Geometric algebra
Gerstenhaber algebra
Graded algebra
Griess algebra
Group algebra
Group algebra of a locally compact group
Hall algebra
Hecke algebra of a locally compact group
Heyting algebra
Hopf algebra
Hurwitz algebra
Hypercomplex algebra
Incidence algebra
Iwahori–Hecke algebra
Jordan algebra
Kac–Moody algebra
Kleene algebra
Leibniz algebra
Lie algebra
Lie superalgebra
Malcev algebra
Matrix algebra
Non-associative algebra
Octonion algebra
Pre-Lie algebra
Poisson algebra
Process algebra
Quadratic algebra
Quaternion algebra
Rees algebra
Relation algebra
Relational algebra
Schur algebra
Semisimple algebra
Separable algebra
Shuffle algebra
Sigma-algebra
Simple algebra
Structurable algebra
Supercommutative algebra
Symmetric algebra
Tensor algebra
Universal enveloping algebra
Vertex operator algebra
von Neumann algebra
Weyl algebra
Zinbiel algebra
This is a list of fields of algebra.
Linear algebra
Homological algebra
Universal algebra
Algebras
|
https://en.wikipedia.org/wiki/EKV%20MOSFET%20model
|
The EKV MOSFET model is a mathematical model of metal-oxide-semiconductor field-effect transistors (MOSFETs) intended for circuit simulation and analog circuit design. It was developed by Christian C. Enz, François Krummenacher and Eric A. Vittoz (hence the initials EKV) around 1995, based in part on work they had done in the 1980s. Unlike simpler models such as the quadratic model, the EKV model is accurate even when the MOSFET is operating in the subthreshold region (e.g., when Vbulk = Vsource, the device is in subthreshold when Vgate-source < Vthreshold). In addition, it models many of the specialized effects seen in submicrometre CMOS IC design.
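The property highlighted above, a single expression that remains accurate from weak (subthreshold) through strong inversion, is usually captured by an interpolation function. The sketch below is a simplified long-channel illustration under assumed parameter values (vt0, n, i_spec and ut here are placeholders, not fitted values); it is not the full published model, which adds many further effects.

import math

def ekv_drain_current(vg, vs, vd, vt0=0.5, n=1.3, i_spec=1e-6, ut=0.0258):
    """Simplified long-channel EKV-style drain current; all voltages referred to the bulk.

    I_D = i_spec * (F(vs) - F(vd)),  with  F(v) = ln(1 + exp((vp - v) / (2*ut)))**2
    and the pinch-off voltage approximated as vp = (vg - vt0) / n.
    """
    vp = (vg - vt0) / n

    def f(v):
        return math.log(1.0 + math.exp((vp - v) / (2.0 * ut))) ** 2

    return i_spec * (f(vs) - f(vd))

# Forward saturation examples: weak inversion vs. strong inversion
print(ekv_drain_current(vg=0.45, vs=0.0, vd=1.0))
print(ekv_drain_current(vg=1.00, vs=0.0, vd=1.0))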
See also
Transistor models
MOSFET
Ngspice
SPICE
|