source | text
---|---|
https://en.wikipedia.org/wiki/Octacube%20%28sculpture%29
|
The Octacube is a large, stainless steel sculpture displayed in the mathematics department of Pennsylvania State University in State College, PA. The sculpture represents a mathematical object called the 24-cell or "octacube". Because a real 24-cell is four-dimensional, the artwork is actually a projection into the three-dimensional world.
Octacube has very high intrinsic symmetry, which matches features in chemistry (molecular symmetry) and physics (quantum field theory).
The sculpture was designed by Adrian Ocneanu, a mathematics professor at Pennsylvania State University. The university's machine shop spent over a year completing the intricate metal-work. Octacube was funded by an alumna in memory of her husband, Kermit Anderson, who died in the September 11 attacks.
Artwork
The Octacube's metal skeleton measures about in all three dimensions. It is a complex arrangement of unpainted, tri-cornered flanges. The base is a high granite block, with some engraving.
The artwork was designed by Adrian Ocneanu, a Penn State mathematics professor. He supplied the specifications for the sculpture's 96 triangular pieces of stainless steel and for their assembly. Fabrication was done by Penn State's machine shop, led by Jerry Anderson. The work took over a year, involving bending and welding as well as cutting. Discussing the construction, Ocneanu said: It's very hard to make 12 steel sheets meet perfectly—and conformally—at each of the 23 vertices, with no trace of welding left. The people who built it are really world-class experts and perfectionists—artists in steel.
Because of the reflective metal at different angles, the appearance is pleasantly strange. In some cases, the mirror-like surfaces create an illusion of transparency by showing reflections from unexpected sides of the structure. The sculpture's mathematician creator commented: When I saw the actual sculpture, I had quite a shock. I never imagined the play of light on the surfaces. There are subtle optical effects t
|
https://en.wikipedia.org/wiki/Svenska%20Spindlar
|
The book Svenska Spindlar, or Aranei Svecici (Swedish and Latin, respectively, for "Swedish spiders"), is one of the major works of the Swedish arachnologist and entomologist Carl Alexander Clerck and was first published in Stockholm in the year 1757. It was the first comprehensive book on the spiders of Sweden and one of the first regional monographs of a group of animals worldwide. The full title of the work translates as "Swedish spiders into their main genera separated, and as sixty and a few particular species described and with illuminated figures illustrated"; the book included 162 pages of text (eight pages were unpaginated) and six colour plates. It was published in Swedish, with a Latin translation printed in a slightly smaller font below the Swedish text.
Clerck described in detail 67 species of Swedish spiders, and for the first time in a zoological work consistently applied binomial nomenclature as proposed by Carl Linnaeus. Linnaeus had originally invented this system for botanical names in his 1753 work Species Plantarum, and presented it again in 1758 in the 10th edition of Systema Naturae for more than 4,000 animal species. Svenska Spindlar is the only pre-Linnaean source to be recognised as a taxonomic authority for such names.
Presentation of the spiders
Clerck explained in the last (9th of the 2nd part) chapter of his work that in contrast to previous authors he used the term "spider" in the strict sense, for animals possessing eight eyes and separated prosoma and opisthosoma, and that his concept of this group of animals did not include Opiliones (because they had two eyes and a broadly joined prosoma and opisthosoma) and other groups of arachnids.
For all spiders Clerck used a single generic name (Araneus), to which was added a specific name consisting of only one word. Each species was presented in the Swedish text with its Latin scientific name, followed by detailed information containing the exact dates when he had found the animals, and a detailed description of eyes,
|
https://en.wikipedia.org/wiki/Remote%20infrastructure%20management
|
Remote infrastructure management (RIM) is the remote management of information technology (IT) infrastructure. This can include the management of computer hardware and software, such as workstations (desktops, laptops, notebooks, etc.), servers, network devices, storage devices, IT security devices, etc. of a company.
Major sub-services included in RIM are:
Service desk / Help desk
Proactive monitoring of server and network devices
Workstation management
Server management
Storage management
Application support
IT security management and database management
See also
Remote monitoring and management
Network monitoring
Network performance management
Systems management
Comparison of network monitoring systems
|
https://en.wikipedia.org/wiki/Adaptive%20beamformer
|
An adaptive beamformer is a system that performs adaptive spatial signal processing with an array of transmitters or receivers. The signals are combined in a manner which increases the signal strength to/from a chosen direction. Signals to/from other directions are combined in a benign or destructive manner, resulting in degradation of the signal to/from the undesired direction. This technique is used in both radio frequency and acoustic arrays, and provides for directional sensitivity without physically moving an array of receivers or transmitters.
Motivation/Applications
Adaptive beamforming was initially developed in the 1960s for the military applications of sonar and radar. There exist several modern applications for beamforming, one of the most visible applications being commercial wireless networks such as LTE. Initial applications of adaptive beamforming were largely focused in radar and electronic countermeasures to mitigate the effect of signal jamming in the military domain.
For radar uses, see Phased array radar. Although not strictly adaptive, these radar applications make use of either static or dynamic (scanning) beamforming.
Commercial wireless standards such as 3GPP Long Term Evolution (LTE (telecommunication)) and IEEE 802.16 WiMax rely on adaptive beamforming to enable essential services within each standard.
Basic Concepts
An adaptive beamforming system relies on principles of wave propagation and phase relationships. See Constructive interference, and Beamforming. Using the principles of superimposing waves, a higher or lower amplitude wave is created (e.g. by delaying and weighting the signal received). The adaptive beamforming system dynamically adapts in order to maximize or minimize a desired parameter, such as Signal-to-interference-plus-noise ratio.
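To make the delay-and-weight idea concrete, here is a minimal numerical sketch (not from the article) of one common adaptive scheme, the MVDR (Capon) beamformer, for a simulated uniform linear array; the element count, angles, and noise levels are illustrative assumptions.

```python
import numpy as np

# Hypothetical example: 8-element uniform linear array, half-wavelength spacing.
# MVDR weights adapt to the sample covariance so that the look direction is
# passed with unit gain while a strong interferer is suppressed.
rng = np.random.default_rng(0)
M, d = 8, 0.5                                   # elements, spacing in wavelengths

def steer(theta_deg):
    """Narrowband steering vector for arrival angle theta (degrees)."""
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta_deg)))

theta_sig, theta_jam, N = 10.0, -30.0, 2000
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)          # desired signal
j = 10 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))   # strong interferer
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(steer(theta_sig), s) + np.outer(steer(theta_jam), j) + noise

R = X @ X.conj().T / N                          # sample covariance matrix
a = steer(theta_sig)
w = np.linalg.solve(R, a)
w /= a.conj() @ w                               # MVDR: w = R^{-1} a / (a^H R^{-1} a)

# Beam response: ~0 dB toward the desired signal, a deep null at the jammer.
for theta in (theta_sig, theta_jam):
    print(theta, 20 * np.log10(abs(w.conj() @ steer(theta))))
```

The weights here maximize the signal-to-interference-plus-noise ratio for the chosen look direction under the stated assumptions; other criteria (for example maximum SNR, as discussed below) lead to different weight formulas.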
Adaptive Beamforming Schemes
There are several ways to approach the beamforming design, the first approach was implemented by maximizing the signal to noise ratio (SNR) by Appleb
|
https://en.wikipedia.org/wiki/List%20of%20complex%20analysis%20topics
|
Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematics that investigates functions of complex numbers. It is useful in many branches of mathematics, including number theory and applied mathematics; as well as in physics, including hydrodynamics, thermodynamics, and electrical engineering.
Overview
Complex numbers
Complex plane
Complex functions
Complex derivative
Holomorphic functions
Harmonic functions
Elementary functions
Polynomial functions
Exponential functions
Trigonometric functions
Hyperbolic functions
Logarithmic functions
Inverse trigonometric functions
Inverse hyperbolic functions
Residue theory
Isometries in the complex plane
Related fields
Number theory
Hydrodynamics
Thermodynamics
Electrical engineering
Local theory
Holomorphic function
Antiholomorphic function
Cauchy–Riemann equations
Conformal mapping
Conformal welding
Power series
Radius of convergence
Laurent series
Meromorphic function
Entire function
Pole (complex analysis)
Zero (complex analysis)
Residue (complex analysis)
Isolated singularity
Removable singularity
Essential singularity
Branch point
Principal branch
Weierstrass–Casorati theorem
Landau's constants
Holomorphic functions are analytic
Schwarzian derivative
Analytic capacity
Disk algebra
Growth and distribution of values
Ahlfors theory
Bieberbach conjecture
Borel–Carathéodory theorem
Corona theorem
Hadamard three-circle theorem
Hardy space
Hardy's theorem
Maximum modulus principle
Nevanlinna theory
Paley–Wiener theorem
Progressive function
Value distribution theory of holomorphic functions
Contour integrals
Line integral
Cauchy's integral theorem
Cauchy's integral formula
Residue theorem
Liouville's theorem (complex analysis)
Examples of contour integration
Fundamental theorem of algebra
Simply connected
Winding number
Principle of the argument
Rouché's theorem
Bromwich integral
Morera's theorem
Mellin transform
Kramers–Kronig relation, a. k. a.
|
https://en.wikipedia.org/wiki/Coherent%20turbulent%20structure
|
Turbulent flows are complex multi-scale and chaotic motions that need to be classified into more elementary components, referred to as coherent turbulent structures. Such a structure must have temporal coherence, i.e. it must persist in its form for long enough periods that the methods of time-averaged statistics can be applied. Coherent structures are typically studied on very large scales, but can be broken down into more elementary structures with coherent properties of their own; examples include hairpin vortices. Hairpins and coherent structures have been studied and noticed in data since the 1930s, and have since been cited in thousands of scientific papers and reviews.
Flow visualization experiments, using smoke and dye as tracers, have historically been used to simulate coherent structures and verify theories, but computer models are now the dominant tools widely used in the field to verify and understand the formation, evolution, and other properties of such structures. The kinematic properties of these motions include size, scale, shape, vorticity, and energy, while the dynamic properties govern the way coherent structures grow, evolve, and decay. Most coherent structures are studied only within the confined forms of simple wall turbulence, which approximates the coherence to be steady, fully developed, incompressible, and with a zero pressure gradient in the boundary layer. Although such approximations depart from reality, they contain sufficient parameters needed to understand turbulent coherent structures at a highly conceptual level.
History and Discovery
The presence of organized motions and structures in turbulent shear flows was apparent for a long time, and has been additionally implied by mixing length hypothesis even before the concept was explicitly stated in literature. There were also early correlation data found by measuring jets and turbulent wakes, particularly by Corrsin and Roshko. Hama's hydrogen bubble technique, which used flow visu
|
https://en.wikipedia.org/wiki/Rigidity%20%28mathematics%29
|
In mathematics, a rigid collection C of mathematical objects (for instance sets or functions) is one in which every c ∈ C is uniquely determined by less information about c than one would expect.
The above statement does not define a mathematical property; instead, it describes in what sense the adjective "rigid" is typically used in mathematics, by mathematicians.
Examples
Some examples include:
Harmonic functions on the unit disk are rigid in the sense that they are uniquely determined by their boundary values.
Holomorphic functions are determined by the set of all derivatives at a single point. A smooth function from the real line to the complex plane is not, in general, determined by all its derivatives at a single point, but it is if we require additionally that it be possible to extend the function to one on a neighbourhood of the real line in the complex plane. The Schwarz lemma is an example of such a rigidity theorem.
By the fundamental theorem of algebra, polynomials in C are rigid in the sense that any polynomial is completely determined by its values on any infinite set, say N, or the unit disk. By the previous example, a polynomial is also determined within the set of holomorphic functions by the finite set of its non-zero derivatives at any single point.
Linear maps L(X, Y) between vector spaces X, Y are rigid in the sense that any L ∈ L(X, Y) is completely determined by its values on any set of basis vectors of X.
Mostow's rigidity theorem, which states that the geometric structure of negatively curved manifolds is determined by their topological structure.
A well-ordered set is rigid in the sense that the only (order-preserving) automorphism on it is the identity function. Consequently, an isomorphism between two given well-ordered sets will be unique.
Cauchy's theorem on geometry of convex polytopes states that a convex polytope is uniquely determined by the geometry of its faces and combinatorial adjacency rules.
Alexandrov's uniqueness theor
|
https://en.wikipedia.org/wiki/Universal%20parabolic%20constant
|
The universal parabolic constant is a mathematical constant.
It is defined as the ratio, for any parabola, of the arc length of the parabolic segment formed by the latus rectum to the focal parameter. The focal parameter is twice the focal length. The ratio is denoted P.
In the diagram, the latus rectum is pictured in blue, the parabolic segment that it forms in red and the focal parameter in green. (The focus of the parabola is the point F and the directrix is the line L.)
The value of P is
$P = \sqrt{2} + \ln\!\left(1 + \sqrt{2}\right) \approx 2.29558\ldots$. The circle and parabola are unique among conic sections in that they have such a universal constant. The analogous ratios for ellipses and hyperbolas depend on their eccentricities. This means that all circles are similar and all parabolas are similar, whereas ellipses and hyperbolas are not.
Derivation
Take $y = \frac{x^{2}}{4f}$ as the equation of the parabola, where $f$ is the focal length. The focal parameter is $p = 2f$ and the semilatus rectum is $\ell = 2f$.
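A worked sketch of the remaining steps, filling in the arc-length integral the excerpt omits (the parabola and notation are as above):

```latex
% Arc length s of the segment cut off by the latus rectum of y = x^2/(4f),
% divided by the focal parameter p = 2f.
\begin{align*}
  s &= \int_{-2f}^{2f} \sqrt{1 + \left(\frac{x}{2f}\right)^{2}}\;dx
     = 2f\left[\sqrt{2} + \ln\!\left(1 + \sqrt{2}\right)\right],\\
  P &= \frac{s}{p} = \frac{s}{2f}
     = \sqrt{2} + \ln\!\left(1 + \sqrt{2}\right) \approx 2.29558\ldots
\end{align*}
```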
Properties
P is a transcendental number.
Proof. Suppose that P is algebraic. Then $P - \sqrt{2} = \ln\!\left(1 + \sqrt{2}\right)$ must also be algebraic. However, by the Lindemann–Weierstrass theorem, $e^{\ln(1+\sqrt{2})} = 1 + \sqrt{2}$ would then be transcendental, which is not the case. Hence P is transcendental.
Since P is transcendental, it is also irrational.
Applications
The average distance from a point randomly selected in the unit square to its center is $d_{\text{avg}} = \frac{P}{6}$.
Proof. By symmetry, $d_{\text{avg}} = 4\int_{0}^{1/2}\!\int_{0}^{1/2}\sqrt{x^{2}+y^{2}}\;dy\,dx = \frac{1}{2}\int_{0}^{1}\!\int_{0}^{1}\sqrt{u^{2}+v^{2}}\;dv\,du = \frac{\sqrt{2} + \ln\!\left(1+\sqrt{2}\right)}{6} = \frac{P}{6}$.
There is also an interesting geometrical reason why this constant appears in unit squares. The average distance between the center of a unit square and a point on the square's boundary is $\frac{P}{4}$.
If we uniformly sample every point on the perimeter of the square, take line segments (drawn from the center) corresponding to each point, add them together by joining each line segment next to the other, scaling them down, the curve obtained is a parabola.
|
https://en.wikipedia.org/wiki/List%20of%20mathematical%20properties%20of%20points
|
In mathematics, the following types of points appear:
Algebraic point
Associated point
Base point
Closed point
Divisor point
Embedded point
Extreme point
Fermat point
Fixed point
Focal point
Geometric point
Hyperbolic equilibrium point
Ideal point
Inflection point
Integral point
Isolated point
Generic point
Heegner point
Lattice hole, Lattice point
Lebesgue point
Midpoint
Napoleon points
Non-singular point
Normal point
Parshin point
Periodic point
Pinch point
Point (geometry)
Point source
Rational point
Recurrent point
Regular point, Regular singular point
Saddle point
Semistable point
Separable point
Simple point
Singular point of a curve
Singular point of an algebraic variety
Smooth point
Special point
Stable point
Torsion point
Vertex (curve)
Weierstrass point
Calculus
Critical point (aka stationary point), any value v in the domain of a differentiable function of any real or complex variable, such that the derivative of the function at v is 0 or undefined
Geometry
Antipodal point, the point diametrically opposite to another point on a sphere, such that a line drawn between them passes through the centre of the sphere and forms a true diameter
Conjugate point, any point that can almost be joined to another by a 1-parameter family of geodesics (e.g., the antipodes of a sphere, which are linkable by any meridian)
Vertex (geometry), a point that describes a corner or intersection of a geometric shape
Apex (geometry), the vertex that is in some sense the highest of the figure to which it belongs
Topology
Adherent point, a point x in topological space X such that every open set containing x contains at least one point of a subset A
Condensation point, any point p of a subset S of a topological space, such that every open neighbourhood of p contains uncountably many points of S
Limit point, a limit point of a set S in a topological space X is a point x (which is in X, but not necessarily in S) that can be approximated by points of S, since every neighbourhood o
|
https://en.wikipedia.org/wiki/Fastest%20animals
|
This is a list of the fastest animals in the world, by types of animal.
Fastest organism
The peregrine falcon is the fastest bird, and the fastest member of the animal kingdom, with a diving speed of over . The fastest land animal is the cheetah. Among the fastest animals in the sea is the black marlin, with uncertain and conflicting reports of recorded speeds.
When drawing comparisons between different classes of animals, an alternative unit is sometimes used for organisms: body length per second. On this basis the 'fastest' organism on earth, relative to its body length, is the Southern Californian mite, Paratarsotomus macropalpis, which has a speed of 322 body lengths per second. The equivalent speed for a human, running as fast as this mite, would be , or approximately Mach 1.7. The speed of the P. macropalpis is far in excess of the previous record holder, the Australian tiger beetle Cicindela eburneola, which is the fastest insect in the world relative to body size, with a recorded speed of , or 171 body lengths per second. The cheetah, the fastest land mammal, scores at only 16 body lengths per second, while Anna's hummingbird has the highest known length-specific velocity attained by any vertebrate.
Invertebrates
Fish
Due to physical constraints, fish may be incapable of exceeding swim speeds of 36 km/h (22 mph). The larger reported figures below are therefore highly questionable:
Amphibians
Reptiles
Birds
Mammals
See also
Speed records
Notes
|
https://en.wikipedia.org/wiki/Low-voltage%20detect
|
A low-voltage detect (LVD) is a microcontroller or microprocessor peripheral that generates a reset signal when the Vcc supply voltage falls below a reference voltage Vref. It is sometimes combined with power-on reset (POR), in which case it is called POR-LVD.
See also
Power-on reset
Embedded systems
|
https://en.wikipedia.org/wiki/Data%20engineering
|
Data engineering refers to the building of systems to enable the collection and usage of data. This data is usually used to enable subsequent analysis and data science; which often involves machine learning. Making the data usable usually involves substantial compute and storage, as well as data processing
History
Around the 1970s/1980s the term information engineering methodology (IEM) was created to describe database design and the use of software for data analysis and processing. These techniques were intended to be used by database administrators (DBAs) and by systems analysts based upon an understanding of the operational processing needs of organizations for the 1980s. In particular, these techniques were meant to help bridge the gap between strategic business planning and information systems. A key early contributor (often called the "father" of information engineering methodology) was the Australian Clive Finkelstein, who wrote several articles about it between 1976 and 1980, and also co-authored an influential Savant Institute report on it with James Martin. Over the next few years, Finkelstein continued work in a more business-driven direction, which was intended to address a rapidly changing business environment; Martin continued work in a more data processing-driven direction. From 1983 to 1987, Charles M. Richter, guided by Clive Finkelstein, played a significant role in revamping IEM as well as helping to design the IEM software product (user data), which helped automate IEM.
In the early 2000s, the data and data tooling was generally held by the information technology (IT) teams in most companies. Other teams then used data for their work (e.g. reporting), and there was usually little overlap in data skillset between these parts of the business.
In the early 2010s, with the rise of the internet, the massive increase in data volumes, velocity, and variety led to the term big data to describe the data itself, and data-driven tech companies like Face
|
https://en.wikipedia.org/wiki/Macroscopic%20scale
|
The macroscopic scale is the length scale on which objects or phenomena are large enough to be visible with the naked eye, without magnifying optical instruments. It is the opposite of microscopic.
Overview
When applied to physical phenomena and bodies, the macroscopic scale describes things as a person can directly perceive them, without the aid of magnifying devices. This is in contrast to observations (microscopy) or theories (microphysics, statistical physics) of objects of geometric lengths smaller than perhaps some hundreds of micrometers.
A macroscopic view of a ball is just that: a ball. A microscopic view could reveal a thick round skin seemingly composed entirely of puckered cracks and fissures (as viewed through a microscope) or, further down in scale, a collection of molecules in a roughly spherical shape (as viewed through an electron microscope). An example of a physical theory that takes a deliberately macroscopic viewpoint is thermodynamics. An example of a topic that extends from macroscopic to microscopic viewpoints is histology.
Classical and quantum mechanics are distinguished in a subtly different way, not quite by the distinction between macroscopic and microscopic. At first glance one might think of them as differing simply in the size of objects that they describe, classical objects being considered far larger in mass and geometrical size than quantal objects, for example a football versus a fine particle of dust. More refined consideration distinguishes classical and quantum mechanics on the basis that classical mechanics fails to recognize that matter and energy cannot be divided into infinitesimally small parcels, so that ultimately fine division reveals irreducibly granular features. The criterion of fineness is whether or not the interactions are described in terms of Planck's constant. Roughly speaking, classical mechanics considers particles in mathematically idealized terms even as fine as geometrical points wi
|
https://en.wikipedia.org/wiki/Carleman%20matrix
|
In mathematics, a Carleman matrix is a matrix used to convert function composition into matrix multiplication. It is often used in iteration theory to find the continuous iteration of functions which cannot be iterated by pattern recognition alone. Other uses of Carleman matrices occur in the theory of probability generating functions, and Markov chains.
Definition
The Carleman matrix of an infinitely differentiable function $f(x)$ is defined as:
$M[f]_{jk} = \frac{1}{k!}\left[\frac{d^{k}}{dx^{k}}\bigl(f(x)\bigr)^{j}\right]_{x=0},$
so as to satisfy the (Taylor series) equation:
$\bigl(f(x)\bigr)^{j} = \sum_{k=0}^{\infty} M[f]_{jk}\, x^{k}.$
For instance, the computation of $f(x)$ by
$f(x) = \sum_{k=0}^{\infty} M[f]_{1,k}\, x^{k}$
simply amounts to the dot-product of row 1 of $M[f]$ with a column vector $\left[1, x, x^{2}, x^{3}, \ldots\right]^{\tau}$.
The entries of $M[f]$ in the next row give the 2nd power of $f(x)$:
$\bigl(f(x)\bigr)^{2} = \sum_{k=0}^{\infty} M[f]_{2,k}\, x^{k},$
and also, in order to have the zeroth power of $f(x)$ in $M[f]$, we adopt the row 0 containing zeros everywhere except the first position, such that
$\bigl(f(x)\bigr)^{0} = 1 = \sum_{k=0}^{\infty} M[f]_{0,k}\, x^{k} = 1 + 0\cdot x + 0\cdot x^{2} + \cdots.$
Thus, the dot product of $M[f]$ with the column vector $\left[1, x, x^{2}, \ldots\right]^{\tau}$ yields the column vector $\left[1, f(x), f(x)^{2}, \ldots\right]^{\tau}$.
Generalization
A generalization of the Carleman matrix of a function can be defined around any point, such as:
or where . This allows the matrix power to be related as:
General Series
Another way to generalize it even further is think about a general series in the following way:
Let be a series approximation of , where is a basis of the space containing
We can define , therefore we have , now we can prove that , if we assume that is also a basis for and .
Let be such that where .
Now
Comparing the first and the last term, and from being a base for , and it follows that
Examples
If we set we have the Carleman matrix
If is an orthonormal basis for a Hilbert Space with a defined inner product , we can set and will be . If we have the analogous for Fourier Series, namely
Properties
Carleman matrices satisfy the fundamental relationship
$M[f \circ g] = M[f]\,M[g],$
which makes the Carleman matrix M a (direct) representation of $f$. Here the term $f \circ g$ denotes the composition of functions $f(g(x))$.
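As a small illustration (my own sketch, not part of the article), the following builds truncated Carleman matrices with SymPy and checks the composition identity above for two functions that fix 0; the truncation size and the choice of f and g are arbitrary.

```python
import sympy as sp

x = sp.symbols('x')
N = 6  # truncation: rows and columns 0 .. N-1

def carleman(f, N):
    """Truncated Carleman matrix: M[f][j, k] = k-th Taylor coefficient of f(x)**j at 0."""
    M = sp.zeros(N, N)
    for j in range(N):
        series = sp.series(f**j, x, 0, N).removeO()
        for k in range(N):
            M[j, k] = series.coeff(x, k)
    return M

f = sp.sin(x)        # f(0) = 0
g = x / (1 - x)      # g(0) = 0

Mf, Mg = carleman(f, N), carleman(g, N)
Mfg = carleman(f.subs(x, g), N)     # Carleman matrix of the composition f∘g

# Because f(0) = g(0) = 0, the truncated matrices compose exactly
# (all entries are exact rational numbers, so equality is exact).
assert Mf * Mg == Mfg
print(Mf)
```

For functions that do not fix 0, the truncated product only approximates the truncated matrix of the composition, which is why this example restricts itself to f(0) = g(0) = 0.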
Other properties include:
$M[f^{n}] = M[f]^{n}$, where $f^{n}$ is an iterated function and
$M[f^{-1}] = M[f]^{-1}$, where $f^{-1}$ is the inverse function (if the Carleman matrix is invertib
|
https://en.wikipedia.org/wiki/Itakura%E2%80%93Saito%20distance
|
The Itakura–Saito distance (or Itakura–Saito divergence) is a measure of the difference between an original spectrum and an approximation of that spectrum. Although it is not a perceptual measure, it is intended to reflect perceptual (dis)similarity. It was proposed by Fumitada Itakura and Shuzo Saito in the 1960s while they were with NTT.
The distance is defined as:
$D_{IS}\bigl(P(\omega), \hat{P}(\omega)\bigr) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left[\frac{P(\omega)}{\hat{P}(\omega)} - \log\frac{P(\omega)}{\hat{P}(\omega)} - 1\right]d\omega,$
where $P(\omega)$ is the original spectrum and $\hat{P}(\omega)$ its approximation.
The Itakura–Saito distance is a Bregman divergence generated by minus the logarithmic function, but is not a true metric since it is not symmetric and does not fulfil the triangle inequality.
In non-negative matrix factorization, the Itakura–Saito divergence can be used as a measure of the quality of the factorization: this implies a meaningful statistical model of the components and can be solved through an iterative method.
The Itakura–Saito distance is the Bregman divergence associated with the Gamma exponential family, where the information divergence of one distribution in the family from another element in the family is given by the Itakura–Saito divergence of the mean value of the first distribution from the mean value of the second distribution.
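A minimal sketch (not from the article) of the divergence above applied to sampled, discrete spectra; the small eps guard is an implementation convenience, not part of the definition.

```python
import numpy as np

def itakura_saito(p, p_hat, eps=1e-12):
    """Discrete Itakura-Saito divergence: sum of p/p_hat - log(p/p_hat) - 1."""
    r = (np.asarray(p) + eps) / (np.asarray(p_hat) + eps)
    return float(np.sum(r - np.log(r) - 1.0))

# Example power spectrum from a random signal.
p = np.abs(np.fft.rfft(np.random.default_rng(0).standard_normal(256)))**2
print(itakura_saito(p, p))        # 0.0 for identical spectra
print(itakura_saito(p, 2 * p))    # > 0, and differs from itakura_saito(2*p, p): not symmetric
```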
See also
Log-spectral distance
|
https://en.wikipedia.org/wiki/French%20curve
|
A French curve is a template usually made from metal, wood or plastic composed of many different segments of the Euler spiral (also known as the clothoid). It is used in manual drafting and in fashion design to draw smooth curves of varying radii. The curve is placed on the drawing material, and a pencil, knife or other implement is traced around its curves to produce the desired result. French curves were invented by the German mathematician Ludwig Burmester and are also known as a Burmester (curve) set.
Clothing design
French curves are used in fashion design and sewing alongside hip curves, straight edges and right-angle rulers. Commercial clothing patterns can be personalized for fit by using French curves to draw neckline, sleeve, bust and waist variations.
See also
|
https://en.wikipedia.org/wiki/Foodomics
|
Foodomics was defined in 2009 as "a discipline that studies the Food and Nutrition domains through the application and integration of advanced -omics technologies to improve consumer's well-being, health, and knowledge". Foodomics requires the combination of food chemistry, biological sciences, and data analysis.
The study of foodomics came under the spotlight after it was introduced at the first international conference in 2009 in Cesena, Italy. Many experts in the fields of omics and nutrition were invited to this event in order to explore new approaches and possibilities in the area of food science and technology. However, research and development in foodomics today are still limited due to the high-throughput analysis required. The American Chemical Society journal Analytical Chemistry dedicated its cover to foodomics in December 2012.
Foodomics involves four main areas of omics:
Genomics, which investigates the genome and its patterns;
Transcriptomics, which explores sets of genes and identifies differences among various conditions, organisms, and circumstances, using several techniques including microarray analysis;
Proteomics, which studies the proteins produced from the genes, covering how a protein functions in a particular place, its structure, its interactions with other proteins, etc.;
Metabolomics, which covers the chemical diversity in the cells and how it affects cell behaviour.
Advantages of foodomics
Foodomics greatly helps scientists in the areas of food science and nutrition to gain better access to data, which is used to analyze the effects of food on human health, among other things. It is believed to be another step towards a better understanding of the development and application of technology and food. Moreover, the study of foodomics leads to other omics sub-disciplines, including nutrigenomics, which is the integration of the study of nutrition, genes and omics.
Colon cancer
The foodomics approach is used to analyze and establish the links betwee
|
https://en.wikipedia.org/wiki/List%20of%20optical%20illusions
|
This is a list of visual illusions.
See also
Adaptation (eye)
Alice in Wonderland syndrome
Auditory illusion
Camouflage
Contingent perceptual aftereffect
Contour rivalry
Depth perception
Emmert's law
Entoptic phenomenon
Gestalt psychology
Infinity pool
Kinetic depth effect
Mirage
Multistable perception
Op Art
Notes
External links
Optical Illusion Examples by Great Optical Illusions
Optical Illusions & Visual Phenomena by Michael Bach
Optical Illusions Database by Mighty Optical Illusions
Optical illusions and perception paradoxes by Archimedes Lab
https://web.archive.org/web/20100419004856/http://ilusaodeotica.com/ hundreds of optical illusions
Project LITE Atlas of Visual Phenomena
Akiyoshi's illusion pages Professor Akiyoshi KITAOKA's anomalous motion illusions
Spiral Or Not? by Enrique Zeleny, Wolfram Demonstrations Project
Magical Optical Illusions by Rangki
Hunch Optical Illusions by Hunch
Optical Illusions by Ooh, My Brain!
|
https://en.wikipedia.org/wiki/Memory%20cell%20%28computing%29
|
The memory cell is the fundamental building block of computer memory. The memory cell is an electronic circuit that stores one bit of binary information and it must be set to store a logic 1 (high voltage level) and reset to store a logic 0 (low voltage level). Its value is maintained/stored until it is changed by the set/reset process. The value in the memory cell can be accessed by reading it.
Over the history of computing, different memory cell architectures have been used, including core memory and bubble memory. Today, the most common memory cell architecture is MOS memory, which consists of metal–oxide–semiconductor (MOS) memory cells. Modern random-access memory (RAM) uses MOS field-effect transistors (MOSFETs) as flip-flops, along with MOS capacitors for certain types of RAM.
The SRAM (static RAM) memory cell is a type of flip-flop circuit, typically implemented using MOSFETs. These require very low power to keep the stored value when not being accessed. A second type, DRAM (dynamic RAM), is based around MOS capacitors. Charging and discharging a capacitor can store a '1' or a '0' in the cell. However, the charge in this capacitor will slowly leak away, and must be refreshed periodically. Because of this refresh process, DRAM uses more power. However, DRAM can achieve greater storage densities.
On the other hand, most non-volatile memory (NVM) is based on floating-gate memory cell architectures. Non-volatile memory technologies including EPROM, EEPROM and flash memory use floating-gate memory cells, which are based around floating-gate MOSFET transistors.
Description
The memory cell is the fundamental building block of memory. It can be implemented using different technologies, such as bipolar, MOS, and other semiconductor devices. It can also be built from magnetic material such as ferrite cores or magnetic bubbles. Regardless of the implementation technology used, the purpose of the binary memory cell is always the same. It stores one bit of binary in
|
https://en.wikipedia.org/wiki/UDPCast
|
UDPcast is a file transfer tool that can send data simultaneously to many destinations on a LAN. This can for instance be used to install entire classrooms of PCs at once. The advantage of UDPcast over other methods (NFS, FTP, etc.) is that UDPcast uses the User Datagram Protocol's multicast abilities: it won't take longer to install 15 machines than it would to install just 2.
By default this protocol operates on the UDP port 9000. This default behaviour can be changed during the boot stage.
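The multicast mechanism UDPcast relies on can be sketched with the standard socket API (an illustration of UDP multicast in general, not UDPcast's own code); the group address is an arbitrary example, while port 9000 matches the default noted above.

```python
import socket
import struct

GROUP, PORT = "239.1.2.3", 9000   # example multicast group; default UDPcast port

def send(payload: bytes):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    s.sendto(payload, (GROUP, PORT))      # one send reaches every joined receiver

def receive():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    # Join the multicast group on all interfaces.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return s.recvfrom(65535)
```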
See also
List of disk cloning software
|
https://en.wikipedia.org/wiki/ATM
|
ATM or atm often refers to:
Atmosphere (unit) or atm, a unit of atmospheric pressure
Automated teller machine, a cash dispenser or cash machine
ATM or atm may also refer to:
Computing
ATM (computer), a ZX Spectrum clone developed in Moscow in 1991
Adobe Type Manager, a computer program for managing fonts
Accelerated Turing machine, or Zeno machine, a model of computation used in theoretical computer science
Alternating Turing machine, a model of computation used in theoretical computer science
Asynchronous Transfer Mode, a telecommunications protocol used in networking
ATM adaptation layer
ATM Adaptation Layer 5
Media
Amateur Telescope Making, a series of books by Albert Graham Ingalls
ATM (2012 film), an American film
ATM: Er Rak Error, a 2012 Thai film
Azhagiya Tamil Magan, a 2007 Indian film
"ATM" (song), a 2018 song by J. Cole from KOD
People and organizations
Abiding Truth Ministries, anti-LGBT organization in Springfield, Massachusetts, US
Association of Teachers of Mathematics, UK
Acrylic Tank Manufacturing, US aquarium manufacturer, televised in Tanked
ATM FA, a football club in Malaysia
A. T. M. Wilson (1906–1978), British psychiatrist
African Transformation Movement, South African political party founded in 2018
The a2 Milk Company (NZX ticker symbol ATM)
Science
Apollo Telescope Mount, a solar observatory
ATM serine/threonine kinase, a serine/threonine kinase activated by DNA damage
The Airborne Topographic Mapper, a laser altimeter among the instruments used by NASA's Operation IceBridge
Transportation
Active traffic management, a motorway scheme on the M42 in England
Air traffic management, a concept in aviation
Altamira Airport, in Brazil (IATA code ATM)
Azienda Trasporti Milanesi, the municipal public transport company of Milan
Airlines of Tasmania (ICAO code ATM)
Catalonia, Spain
Autoritat del Transport Metropolità (ATM Àrea de Barcelona), in the Barcelona metropolitan area
Autoritat Territorial de la Mobil
|
https://en.wikipedia.org/wiki/End-to-end%20encryption
|
End-to-end encryption (E2EE) is a private communication system in which only communicating users can participate. As such, no one, including the communication system provider, telecom providers, Internet providers or malicious actors, can access the cryptographic keys needed to converse.
End-to-end encryption is intended to prevent data being read or secretly modified, other than by the true sender and recipient(s). The messages are encrypted by the sender but the third party does not have a means to decrypt them, and stores them encrypted. The recipients retrieve the encrypted data and decrypt it themselves.
Because no third parties can decipher the data being communicated or stored, for example, companies that provide end-to-end encryption are unable to hand over texts of their customers' messages to the authorities.
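As a minimal sketch of this flow (an illustration, not the scheme of any particular messaging service), the example below uses the PyNaCl library's public-key boxes; key distribution and authentication are deliberately left out.

```python
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# The sender needs only the recipient's *public* key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")   # this is all a server would store

# Only the recipient, holding the matching private key, can decrypt.
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))            # b'meet at noon'
```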
In 2022, the UK's Information Commissioner's Office, the government body responsible for enforcing online data standards, stated that opposition to E2EE was misinformed and the debate too unbalanced, with too little focus on benefits, since E2EE "helped keep children safe online" and law enforcement access to stored data on servers was "not the only way" to find abusers.
E2EE and privacy
In many messaging systems, including email and many chat networks, messages pass through intermediaries and are stored by a third party, from which they are retrieved by the recipient. Even if the messages are encrypted, they are only encrypted 'in transit', and are thus accessible by the service provider, regardless of whether server-side disk encryption is used. Server-side disk encryption simply prevents unauthorized users from viewing this information. It does not prevent the company itself from viewing the information, as they have the key and can simply decrypt this data.
This allows the third party to provide search and other features, or to scan for illegal and unacceptable content, but also means they can be read and misused by anyone who has acces
|
https://en.wikipedia.org/wiki/List%20of%20representation%20theory%20topics
|
This is a list of representation theory topics, by Wikipedia page. See also list of harmonic analysis topics, which is more directed towards the mathematical analysis aspects of representation theory.
See also: Glossary of representation theory
General representation theory
Linear representation
Unitary representation
Trivial representation
Irreducible representation
Semisimple
Complex representation
Real representation
Quaternionic representation
Pseudo-real representation
Symplectic representation
Schur's lemma
Restricted representation
Representation theory of groups
Group representation
Group ring
Maschke's theorem
Regular representation
Character (mathematics)
Character theory
Class function
Representation theory of finite groups
Modular representation theory
Frobenius reciprocity
Restricted representation
Induced representation
Peter–Weyl theorem
Young tableau
Spherical harmonic
Hecke operator
Representation theory of the symmetric group
Representation theory of diffeomorphism groups
Permutation representation
Affine representation
Projective representation
Central extension
Representation theory of Lie groups and Lie algebras
Representation of a Lie group
Lie algebra representation, Representation of a Lie superalgebra
Universal enveloping algebra
Casimir element
Infinitesimal character
Harish-Chandra homomorphism
Fundamental representation
Antifundamental representation
Bifundamental representation
Adjoint representation
Weight (representation theory)
Cartan's theorem
Spinor
Wigner's classification, Representation theory of the Poincaré group
Wigner–Eckart theorem
Stone–von Neumann theorem
Orbit method
Kirillov character formula
Weyl character formula
Discrete series representation
Principal series representation
Borel–Weil–Bott theorem
Weyl's character formula
Representation theory of algebras
Algebra representation
Representation theory of Hopf algebras
Representation theory
|
https://en.wikipedia.org/wiki/PowerHUB
|
PowerHUB refers to the name of a series of Integrated Circuits (ICs) developed by ST-Ericsson, a 50/50 joint venture of Ericsson and STMicroelectronics established on February 3, 2009.
These ICs are designed for the energy management and the battery charging of mobile devices.
The first member of the PowerHUB family, the PM2300, was announced by ST-Ericsson on February 9, 2011.
On February 28, 2012, ST-Ericsson introduced a new IC in this family, the PM2020, which supports the wireless charging technology standardized by the Wireless Power Consortium (WPC).
|
https://en.wikipedia.org/wiki/Turing%20pattern
|
The Turing pattern is a concept introduced by English mathematician Alan Turing in a 1952 paper titled "The Chemical Basis of Morphogenesis", which describes how patterns in nature, such as stripes and spots, can arise naturally and autonomously from a homogeneous, uniform state. The pattern arises due to Turing instability, which in turn arises from the interplay between differential diffusion (i.e., different values of diffusion coefficients) of chemical species and chemical reaction. The instability mechanism is counterintuitive because a pure diffusion process would be expected to have a stabilizing influence on the system.
Overview
In his paper, Turing examined the behaviour of a system in which two diffusible substances interact with each other, and found that such a system is able to generate a spatially periodic pattern even from a random or almost uniform initial condition. Prior to the discovery of this instability mechanism arising due to unequal diffusion coefficients of the two substances, diffusional effects were always presumed to have stabilizing influences on the system.
Turing hypothesized that the resulting wavelike patterns are the chemical basis of morphogenesis. Turing patterning is often found in combination with other patterns: vertebrate limb development is one of the many phenotypes exhibiting Turing patterning overlapped with a complementary pattern (in this case a French flag model).
Before Turing, Yakov Zeldovich in 1944 discovered this instability mechanism in connection with the cellular structures observed in lean hydrogen flames. Zeldovich explained the cellular structure as a consequence of hydrogen's diffusion coefficient being larger than the thermal diffusion coefficient. In combustion literature, Turing instability is referred to as diffusive–thermal instability.
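A minimal numerical sketch of a two-substance reaction–diffusion system forming Turing-type patterns; the Gray–Scott model and its parameter values are illustrative assumptions on my part, not taken from the article.

```python
import numpy as np

# Gray-Scott model: substrate U is consumed by activator V (U + 2V -> 3V),
# U is fed at rate F, V is removed at rate F + k; V diffuses slower than U.
Du, Dv, F, k = 0.16, 0.08, 0.060, 0.062
n, steps = 128, 5000

U = np.ones((n, n))
V = np.zeros((n, n))
r = 8                                       # seed a small central perturbation
U[n//2-r:n//2+r, n//2-r:n//2+r] = 0.50
V[n//2-r:n//2+r, n//2-r:n//2+r] = 0.25
U += 0.02 * np.random.default_rng(0).random((n, n))

def lap(Z):
    """Five-point Laplacian with periodic boundaries (dx = 1)."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(steps):                      # explicit Euler steps, dt = 1
    uvv = U * V * V
    U += Du * lap(U) - uvv + F * (1 - U)
    V += Dv * lap(V) + uvv - (F + k) * V

print(V.min(), V.max())                     # V now holds a spotted/striped pattern
```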
Concept
The original theory, a reaction–diffusion theory of morphogenesis, has served as an important model in theoretical biology. Reaction–diffusion systems hav
|
https://en.wikipedia.org/wiki/Liouville%20number
|
In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exists a pair of integers $(p, q)$ with $q > 1$ such that
$0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^{n}}.$
Liouville numbers are "almost rational", and can thus be approximated "quite closely" by sequences of rational numbers. Precisely, these are transcendental numbers that can be more closely approximated by rational numbers than any algebraic irrational number can be. In 1844, Joseph Liouville showed that all Liouville numbers are transcendental, thus establishing the existence of transcendental numbers for the first time.
It is known that π and e are not Liouville numbers.
The existence of Liouville numbers (Liouville's constant)
Liouville numbers can be shown to exist by an explicit construction.
For any integer $b \ge 2$ and any sequence of integers $(a_1, a_2, \ldots)$ such that $a_k \in \{0, 1, 2, \ldots, b-1\}$ for all $k$ and $a_k \ne 0$ for infinitely many $k$, define the number
$x = \sum_{k=1}^{\infty} \frac{a_k}{b^{k!}}.$
In the special case when $b = 10$ and $a_k = 1$ for all $k$, the resulting number $x$ is called Liouville's constant:
L = 0.11000100000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001...
It follows from the definition of $x$ that its base-$b$ representation is
$x = (0.a_1 a_2 000 a_3 \ldots)_b,$
where the $n$th term $a_n$ is in the $n!$th place.
Since this base-$b$ representation is non-repeating, it follows that $x$ is not a rational number. Therefore, for any rational number $p/q$, $|x - p/q| > 0$.
Now, for any integer $n \ge 1$, $q_n$ and $p_n$ can be defined as follows:
$q_n = b^{n!}, \qquad p_n = q_n \sum_{k=1}^{n} \frac{a_k}{b^{k!}}.$
Then,
$0 < \left|x - \frac{p_n}{q_n}\right| = \sum_{k=n+1}^{\infty} \frac{a_k}{b^{k!}} \le \sum_{k=n+1}^{\infty} \frac{b-1}{b^{k!}} < \frac{1}{q_n^{\,n}}.$
Therefore, any such $x$ is a Liouville number.
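A quick numerical sanity check of this bound for Liouville's constant (my own sketch, not part of the proof), using exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def partial_sum(n):
    """p_n / q_n = sum_{k=1..n} 10**(-k!) for Liouville's constant (b = 10, a_k = 1)."""
    return sum(Fraction(1, 10**factorial(k)) for k in range(1, n + 1))

# A stand-in for L itself: far more terms than any n tested below.
L = partial_sum(7)

for n in range(1, 5):
    q_n = 10**factorial(n)
    error = L - partial_sum(n)             # the remaining tail of the series
    print(n, error < Fraction(1, q_n**n))  # True: |L - p_n/q_n| < 1/q_n^n
```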
Notes on the proof
The inequality $\sum_{k=n+1}^{\infty} \frac{a_k}{b^{k!}} \le \sum_{k=n+1}^{\infty} \frac{b-1}{b^{k!}}$ follows since $a_k \in \{0, 1, 2, \ldots, b-1\}$ for all $k$, so at most $a_k = b-1$. The largest possible sum would occur if the sequence of integers $(a_1, a_2, \ldots)$ were $(b-1, b-1, \ldots)$, i.e. $a_k = b-1$ for all $k$. The series $\sum_{k=n+1}^{\infty} \frac{a_k}{b^{k!}}$ will thus be less than or equal to this largest possible sum.
The strong inequality follows from the motivation to eliminate the series by way of reducing it to a series for which a formula is known. In the proof so far, the purpose for introducing the inequality in #1 comes from intuition that (the geometric s
|
https://en.wikipedia.org/wiki/Berkeley%20IRAM%20project
|
The Berkeley IRAM project was a 1996–2004 research project in the Computer Science Division of the University of California, Berkeley which explored computer architecture enabled by the wide bandwidth between memory and processor made possible when both are designed on the same integrated circuit (chip). Since it was envisioned that such a chip would consist primarily of random-access memory (RAM), with a smaller part needed for the central processing unit (CPU), the research team used the term "Intelligent RAM" (or IRAM) to describe a chip with this architecture. Like the J–Machine project at MIT, the primary objective of the research was to avoid the Von Neumann bottleneck which occurs when the connection between memory and CPU is a relatively narrow memory bus between separate integrated circuits.
Theory
With strong competitive pressures, the technology employed for each component of a computer system—principally CPU, memory, and offline storage—is typically selected to minimize the cost needed to attain a given level of performance. Though both microprocessor and memory are implemented as integrated circuits, the prevailing technology used for each differs; microprocessor technology optimizes speed and memory technology optimizes density. For this reason, the integration of memory and processor in the same chip has (for the most part) been limited to static random-access memory (SRAM), which may be implemented using circuit technology optimized for logic performance, rather than the denser and lower-cost dynamic random-access memory (DRAM), which is not. Microprocessor access to off-chip memory costs time and power, however, significantly limiting processor performance. For this reason computer architecture employing a hierarchy of memory systems has developed, in which static memory is integrated with the microprocessor for temporary, easily accessible storage (or cache) of data which is also retained off-chip in DRAM. Since the on-chip cache memory is redun
|
https://en.wikipedia.org/wiki/Sulemana%20Abdul%20Samed
|
Sulemana Abdul Samed, also known as Awuche (meaning 'Let's Go' in the Hausa language), is the tallest man in Ghana. He was born in 1994 in the Northern Region of Ghana.
Abdul Samed was diagnosed with the endocrine disorder acromegaly, which is caused by an excess of growth hormone in the body. An investigation by a BBC reporter found that Samed was only 7 feet 4 inches (223 cm) tall, suggesting that the hospital at which he had been measured had made a "mistake", as other sources had reported a larger height.
He has undergone treatment for his condition. Despite his unusual height, Abdul Samed has lived a relatively normal life, attending school and being employed as a farmer and a mechanic. He has stated that he hopes to marry and have children.
Abdul Samed has received media attention for his height, which he has used to raise awareness about acromegaly and the challenges faced by people who have the condition.
|
https://en.wikipedia.org/wiki/Ouchterlony%20double%20immunodiffusion
|
Ouchterlony double immunodiffusion (also known as passive double immunodiffusion) is an immunological technique used in the detection, identification and quantification of antibodies and antigens, such as immunoglobulins and extractable nuclear antigens. The technique is named after Örjan Ouchterlony, the Swedish physician who developed the test in 1948 to evaluate the production of diphtheria toxins from isolated bacteria.
Procedure
A gel plate is cut to form a series of holes ("wells") in an agar or agarose gel. A sample extract of interest (for example human cells harvested from tonsil tissue) is placed in one well, sera or purified antibodies are placed in another well and the plate left for 48 hours to develop. During this time the antigens in the sample extract and the antibodies each diffuse out of their respective wells. Where the two diffusion fronts meet, if any of the antibodies recognize any of the antigens, they will bind to the antigens and form an immune complex. The immune complex precipitates in the gel to give a thin white line (precipitin line), which is a visual signature of antigen recognition.
The method can be conducted in parallel with multiple wells filled with different antigen mixtures and multiple wells with different antibodies or mixtures of antibodies, and antigen-antibody reactivity can be seen by observing between which wells the precipitate is observed. When more than one well is used there are many possible outcomes based on the reactivity of the antigen and antibody selected. The zone of equivalence lines may give a full identity (i.e. a continuous line), partial identity (i.e. a continuous line with a spur at one end), or a non-identity (i.e. the two lines cross completely).
The sensitivity of the assay can be increased by using a stain such as Coomassie brilliant blue; this is done by repeated staining and destaining of the assay until the precipitin lines are at maximum visibility.
Theory
Precipitation occurs with most antige
|
https://en.wikipedia.org/wiki/Melanoidin
|
Melanoidins are brown, high molecular weight heterogeneous polymers that are formed when sugars and amino acids combine (through the Maillard reaction) at high temperatures and low water activity. They were discovered by Schmiedeberg in 1897.
Melanoidins are commonly present in foods that have undergone some form of non-enzymatic browning, such as barley malts (Vienna and Munich), bread crust, bakery products and coffee. They are also present in the wastewater of sugar refineries, necessitating treatment in order to avoid contamination around the outflow of these refineries.
Dietary melanoidins themselves produce various effects in the organism: they decrease Phase I liver enzyme activity and promote glycation in vivo, which may contribute to diabetes, reduced vascular compliance and Alzheimer's disease. Some of the melanoidins are metabolized by the intestinal microflora.
Coffee is one of the main sources of melanoidins in the human diet, yet coffee consumption is associated with some health benefits and antiglycative action.
|
https://en.wikipedia.org/wiki/Softwire%20%28protocol%29
|
In computer networking, a softwire protocol is a type of tunneling protocol that creates a virtual "wire" that transparently encapsulates another protocol as if it was an anonymous point-to-point low-level link. Softwires are used for various purposes, one of which is to carry IPv4 traffic over IPv6 and vice versa, in order to support IPv6 transition mechanisms.
|
https://en.wikipedia.org/wiki/Developmental%20systems%20theory
|
Developmental systems theory (DST) is an overarching theoretical perspective on biological development, heredity, and evolution. It emphasizes the shared contributions of genes, environment, and epigenetic factors on developmental processes. DST, unlike conventional scientific theories, is not directly used to help make predictions for testing experimental results; instead, it is seen as a collection of philosophical, psychological, and scientific models of development and evolution. As a whole, these models argue the inadequacy of the modern evolutionary synthesis on the roles of genes and natural selection as the principal explanation of living structures. Developmental systems theory embraces a large range of positions that expand biological explanations of organismal development and hold modern evolutionary theory as a misconception of the nature of living processes.
Overview
All versions of developmental systems theory espouse the view that:
All biological processes (including both evolution and development) operate by continually assembling new structures.
Each such structure transcends the structures from which it arose and has its own systematic characteristics, information, functions and laws.
Conversely, each such structure is ultimately irreducible to any lower (or higher) level of structure, and can be described and explained only on its own terms.
Furthermore, the major processes through which life as a whole operates, including evolution, heredity and the development of particular organisms, can only be accounted for by incorporating many more layers of structure and process than the conventional concepts of ‘gene’ and ‘environment’ normally allow for.
In other words, although it does not claim that all structures are equal, developmental systems theory is fundamentally opposed to reductionism of all kinds. In short, developmental systems theory intends to formulate a perspective which does not presume the causal (or ontological) priority of any p
|
https://en.wikipedia.org/wiki/Franz%E2%80%93Keldysh%20effect
|
The Franz–Keldysh effect is a change in optical absorption by a semiconductor when an electric field is applied. The effect is named after the German physicist Walter Franz and Russian physicist Leonid Keldysh.
Karl W. Böer first observed the shift of the optical absorption edge with electric fields during the discovery of high-field domains and named this the Franz effect. A few months later, when the English translation of the Keldysh paper became available, he corrected this to the Franz–Keldysh effect.
As originally conceived, the Franz–Keldysh effect is the result of wavefunctions "leaking" into the band gap. When an electric field is applied, the electron and hole wavefunctions become Airy functions rather than plane waves. The Airy function includes a "tail" which extends into the classically forbidden band gap. According to Fermi's golden rule, the more overlap there is between the wavefunctions of a free electron and a hole, the stronger the optical absorption will be. The Airy tails slightly overlap even if the electron and hole are at slightly different potentials (slightly different physical locations along the field). The absorption spectrum now includes a tail at energies below the band gap and some oscillations above it. This explanation does, however, omit the effects of excitons, which may dominate optical properties near the band gap.
The Franz–Keldysh effect occurs in uniform, bulk semiconductors, unlike the quantum-confined Stark effect, which requires a quantum well. Both are used for electro-absorption modulators. The Franz–Keldysh effect usually requires hundreds of volts, limiting its usefulness with conventional electronics – although this is not the case for commercially available Franz–Keldysh-effect electro-absorption modulators that use a waveguide geometry to guide the optical carrier.
Effect on modulation spectroscopy
The absorption coefficient is related to the dielectric constant (especially the complex part $\varepsilon_2$). From Maxwell's
|
https://en.wikipedia.org/wiki/Quantum%20pseudo-telepathy
|
Quantum pseudo-telepathy is the fact that in certain Bayesian games with asymmetric information, players who have access to a shared physical system in an entangled quantum state, and who are able to execute strategies that are contingent upon measurements performed on the entangled physical system, are able to achieve higher expected payoffs in equilibrium than can be achieved in any mixed-strategy Nash equilibrium of the same game by players without access to the entangled quantum system.
In their 1999 paper, Gilles Brassard, Richard Cleve and Alain Tapp demonstrated that quantum pseudo-telepathy allows players in some games to achieve outcomes that would otherwise only be possible if participants were allowed to communicate during the game.
This phenomenon came to be referred to as quantum pseudo-telepathy, with the prefix pseudo referring to the fact that quantum pseudo-telepathy does not involve the exchange of information between any parties. Instead, quantum pseudo-telepathy removes the need for parties to exchange information in some circumstances.
By removing the need to engage in communication to achieve mutually advantageous outcomes in some circumstances, quantum pseudo-telepathy could be useful if some participants in a game were separated by many light years, meaning that communication between them would take many years. This would be an example of a macroscopic implication of quantum non-locality.
Quantum pseudo-telepathy is generally used as a thought experiment to demonstrate the non-local characteristics of quantum mechanics. However, quantum pseudo-telepathy is a real-world phenomenon which can be verified experimentally. It is thus an especially striking example of an experimental confirmation of Bell inequality violations.
Games of asymmetric information
A Bayesian game is a game in which both players have imperfect information regarding the value of certain parameters. In a Bayesian game it is sometimes the case that for at least some pla
|
https://en.wikipedia.org/wiki/System%20appreciation
|
System appreciation is an activity often included in the maintenance phase of software engineering projects. Key deliverables from this phase include documentation that describes what the system does in terms of its functional features, and how it achieves those features in terms of its architecture and design. Software architecture recovery is often the first step within System appreciation.
|
https://en.wikipedia.org/wiki/Multitaper
|
In signal processing, multitaper is a spectral density estimation technique developed by David J. Thomson. It can estimate the power spectrum SX of a stationary ergodic finite-variance random process X, given a finite contiguous realization of X as data.
Motivation
The multitaper method overcomes some of the limitations of non-parametric Fourier analysis. When applying the Fourier transform to extract spectral information from a signal, we assume that each Fourier coefficient is a reliable representation of the amplitude and relative phase of the corresponding component frequency. This assumption, however, is not generally valid for empirical data. For instance, a single trial represents only one noisy realization of the underlying process of interest. A comparable situation arises in statistics when estimating measures of central tendency i.e., it is bad practice to estimate qualities of a population using individuals or very small samples. Likewise, a single sample of a process does not necessarily provide a reliable estimate of its spectral properties. Moreover, the naive power spectral density obtained from the signal's raw Fourier transform is a biased estimate of the true spectral content.
These problems are often overcome by averaging over many realizations of the same event after applying a taper to each trial. However, this method is unreliable with small data sets and undesirable when one does not wish to attenuate signal components that vary across trials. Furthermore, even when many trials are available the untapered periodogram is generally biased (with the exception of white noise) and the bias depends upon the length of each realization, not the number of realizations recorded. Applying a single taper reduces bias but at the cost of increased estimator variance due to attenuation of activity at the start and end of each recorded segment of the signal. The multitaper method partially obviates these problems by obtaining multiple independent e
|
https://en.wikipedia.org/wiki/Manganese%20in%20biology
|
Manganese is an essential biological element in all organisms. It is used in many enzymes and proteins. It is essential in plants.
Biochemistry
The classes of enzymes that have manganese cofactors include oxidoreductases, transferases, hydrolases, lyases, isomerases and ligases. Other enzymes containing manganese are arginase and Mn-containing superoxide dismutase (Mn-SOD). The reverse transcriptases of many retroviruses (though not lentiviruses such as HIV) also contain manganese. Manganese-containing polypeptides include diphtheria toxin, lectins and integrins.
Biological role in humans
Manganese is an essential human dietary element. It is present as a coenzyme in several biological processes, which include macronutrient metabolism, bone formation, and free radical defense systems. It is a critical component in dozens of proteins and enzymes. The human body contains about 12 mg of manganese, mostly in the bones. The soft tissue remainder is concentrated in the liver and kidneys. In the human brain, the manganese is bound to manganese metalloproteins, most notably glutamine synthetase in astrocytes.
Nutrition
Dietary recommendations
The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for minerals in 2001. For manganese there was not sufficient information to set EARs and RDAs, so needs are described as estimates for Adequate Intakes (AIs). As for safety, the IOM sets Tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of manganese the adult UL is set at 11 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). Manganese deficiency is rare.
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL defined the s
|
https://en.wikipedia.org/wiki/Swiss%20Network%20Operators%20Group
|
The Swiss Network Operators Group (SwiNOG) is a Swiss counterpart to NANOG. Like NANOG, SwiNOG operates a mailing list for operators of Swiss data networks, including ISPs.
Events
Twice a year the community gathers in Bern, the capital of Switzerland, for an event combining technical presentations with direct interaction between people in the community. These talks are usually very technical and cover topics related to the work of network operators, such as out-of-band management, though more high-level presentations, for example on SDN and NFV, are also given. Some months before each event, the SwiNOG Core Team sends out a call for papers (CfP).
Steven Glogger also organizes the monthly SwiNOG Beer Events in the city of Zurich; more than 100 of these have taken place so far. They are social gatherings where people talk about technology, their employers and sometimes their customers, but mainly serve to exchange information with each other in person.
History
See also
Internet network operators' group
|
https://en.wikipedia.org/wiki/System%20basis%20chip
|
A system basis chip (SBC) is an integrated circuit that includes various functions of automotive electronic control units (ECU) on a single die.
It typically combines standard digital functionality, such as communication bus interfaces, with analog or power functionality, often denoted as smart power. SBCs are therefore based on special smart power technology platforms.
The embedded functions may include:
Voltage regulators
Supervision functions
Reset generators
Watchdog functions
Bus interfaces, like Local Interconnect Network (LIN), CAN bus or others
Wake-up logic
Power switches
The complexity of SBCs ranges from rather simple hardwired devices to configurable, state-machine-controlled devices (e.g. configured through a serial peripheral interface).
Various major automotive semiconductor manufacturers offer SBCs.
|
https://en.wikipedia.org/wiki/Stationary%20process
|
In mathematics and statistics, a stationary process (or a strict/strictly stationary process or strong/strongly stationary process) is a stochastic process whose unconditional joint probability distribution does not change when shifted in time. Consequently, parameters such as mean and variance also do not change over time. Informally, a line drawn through the middle of a stationary process should be flat; there may be 'seasonal' cycles around that line, but overall the process trends neither up nor down.
Since stationarity is an assumption underlying many statistical procedures used in time series analysis, non-stationary data are often transformed to become stationary. The most common cause of violation of stationarity is a trend in the mean, which can be due either to the presence of a unit root or of a deterministic trend. In the former case of a unit root, stochastic shocks have permanent effects, and the process is not mean-reverting. In the latter case of a deterministic trend, the process is called a trend-stationary process, and stochastic shocks have only transitory effects after which the variable tends toward a deterministically evolving (non-constant) mean.
A trend stationary process is not strictly stationary, but can easily be transformed into a stationary process by removing the underlying trend, which is solely a function of time. Similarly, processes with one or more unit roots can be made stationary through differencing. An important type of non-stationary process that does not include a trend-like behavior is a cyclostationary process, which is a stochastic process that varies cyclically with time.
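A small illustration of these two points (a Python sketch; the random-walk example, its length and the window count are chosen purely for demonstration): a unit-root process is non-stationary, but its first difference is stationary.

import numpy as np

rng = np.random.default_rng(1)
shocks = rng.standard_normal(10_000)
random_walk = np.cumsum(shocks)        # unit-root process: shocks have permanent effects
differenced = np.diff(random_walk)     # differencing recovers the white-noise shocks

def window_means(x, k=4):
    # Mean of x over k equal consecutive windows.
    return [float(np.mean(chunk)) for chunk in np.array_split(x, k)]

print("random walk window means:", window_means(random_walk))   # wander far apart
print("differenced window means:", window_means(differenced))   # all close to 0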
For many applications strict-sense stationarity is too restrictive. Other forms of stationarity such as wide-sense stationarity or N-th-order stationarity are then employed. The definitions for different kinds of stationarity are not consistent among different authors (see Other terminology).
Strict-sense stationarity
Definition
Formally, let {Xt} be a
|
https://en.wikipedia.org/wiki/Network%20diagram%20software
|
A number of tools exist to generate computer network diagrams. Broadly, there are four types of tools that help create network maps and diagrams:
Hybrid tools
Network Mapping tools
Network Monitoring tools
Drawing tools
Network mapping and drawing software helps IT systems managers understand the hardware and software services on a network and how they are interconnected. Network maps and diagrams are a component of network documentation. They are required artifacts for managing IT systems' uptime, performance and security risks, and for planning network changes and upgrades.
Hybrid tools
These tools have capabilities in common with drawing tools and network monitoring tools. They are more specialized than general drawing tools and provide network engineers and IT systems administrators with a higher level of automation and the ability to develop more detailed network topologies and diagrams. Typical capabilities include, but are not limited to, the following (a toy drawing example is sketched after the list):
Displaying port / interface information on connections between devices on the maps
Visualizing VLANs / subnets
Visualizing virtual servers and storage
Visualizing flow of network traffic across devices and networks
Displaying WAN and LAN maps by location
Importing network configuration files to generate topologies automatically
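As a toy illustration of the drawing side of such capabilities (a Python sketch using the networkx and matplotlib libraries; the device names and the port label are invented for the example, and in a hybrid tool this data would normally come from automated discovery rather than being typed in):

import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
G.add_edges_from([
    ("wan-router", "core-switch"),
    ("core-switch", "access-switch-1"),
    ("core-switch", "access-switch-2"),
    ("access-switch-1", "server-a"),
    ("access-switch-2", "server-b"),
])
# Attach port information to a link, as a hybrid tool might display it.
G.edges["wan-router", "core-switch"]["ports"] = "Gi0/1 <-> Gi1/0/48"

nx.draw(G, with_labels=True, node_size=1200, font_size=8)
plt.savefig("topology.png")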
Network mapping tools
These tools are specifically designed to generate automated network topology maps. These visual maps are automatically generated by scanning the network using network discovery protocols. Some of these tools integrate into documentation and monitoring tools. Typical capabilities include, but are not limited to:
Automatically scanning the network using SNMP, SSH, WMI, etc.
Scanning Windows and Unix servers
Scanning virtual hosts
Scanning routing protocols
Performing scheduled scans
Tracking changes to the network
Notifying users of changes to the network
Network monitoring tools
Some network monitoring tools generate visual maps by automatically scanning the network using net
|
https://en.wikipedia.org/wiki/Non-contact%20force
|
A non-contact force is a force which acts on an object without coming physically in contact with it. The most familiar non-contact force is gravity, which confers weight. In contrast, a contact force is a force which acts on an object coming physically in contact with it.
All four known fundamental interactions are non-contact forces:
Gravity, the force of attraction that exists among all bodies that have mass. The gravitational force exerted on each body by the other is proportional to the product of their masses divided by the square of the distance between them (see the worked example after this list).
Electromagnetism is the force that causes the interaction between electrically charged particles; the areas in which this happens are called electromagnetic fields. Examples of this force include: electricity, magnetism, radio waves, microwaves, infrared, visible light, X-rays and gamma rays. Electromagnetism mediates all chemical, biological, electrical and electronic processes.
Strong nuclear force: Unlike gravity and electromagnetism, the strong nuclear force is a short-range force that acts between fundamental particles within a nucleus. It is charge independent and acts equally between a proton and a proton, a neutron and a neutron, and a proton and a neutron. The strong nuclear force is the strongest force in nature; however, its range is small (acting only over distances of the order of 10^−15 m). The strong nuclear force mediates both nuclear fission and fusion reactions.
Weak nuclear force: The weak nuclear force mediates the β decay of a neutron, in which the neutron decays into a proton and in the process emits a β particle and an uncharged particle called a neutrino. As a result of mediating the β decay process, the weak nuclear force plays a key role in supernovas. Both the strong and weak forces form an important part of quantum mechanics.
The Casimir effect could also be thought of as a non-contact force.
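A short worked example of the inverse-square law described above for gravity (a Python sketch; the masses and separation are arbitrary illustrative values):

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    # Magnitude of the mutual attraction between two point masses, in newtons.
    return G * m1 * m2 / r**2

# Two 1000 kg bodies 10 m apart attract each other without any physical contact.
print(gravitational_force(1000.0, 1000.0, 10.0))   # about 6.7e-7 N: tiny but nonzero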
See also
Tension
Body force
Surface
|
https://en.wikipedia.org/wiki/Jim%20Al-Khalili
|
Jameel Sadik "Jim" Al-Khalili (; born 20 September 1962) is an Iraqi-British theoretical physicist, author and broadcaster. He is professor of theoretical physics and chair in the public engagement in science at the University of Surrey. He is a regular broadcaster and presenter of science programmes on BBC radio and television, and a frequent commentator about science in other British media.
In 2014, Al-Khalili was named as a RISE (Recognising Inspirational Scientists and Engineers) leader by the UK's Engineering and Physical Sciences Research Council (EPSRC). He was President of Humanists UK between January 2013 and January 2016.
Early life and education
Al-Khalili was born in Baghdad in 1962. His father was an Iraqi Air Force engineer, and his English mother was a librarian. Al-Khalili settled permanently in the United Kingdom in 1979. After completing (and retaking) his A-levels over three years until 1982, he studied physics at the University of Surrey and graduated with a Bachelor of Science degree in 1986. He stayed on at Surrey to pursue a Doctor of Philosophy degree in nuclear reaction theory, which he obtained in 1989, rather than accepting a job offer from the National Physical Laboratory.
Career and research
In 1989, Al-Khalili was awarded a Science and Engineering Research Council (SERC) postdoctoral fellowship at University College London, after which he returned to Surrey in 1991, first as a research assistant, then as a lecturer. In 1994, Al-Khalili was awarded an Engineering and Physical Sciences Research Council (EPSRC) Advanced Research Fellowship for five years, during which time he established himself as a leading expert on mathematical models of exotic atomic nuclei. He has published widely in his field.
Al-Khalili is a professor of physics at the University of Surrey, where he also holds a chair in the Public Engagement in Science. He has been a trustee (2006–2012) and vice president (2008–2011) of the British Science Association. He a
|
https://en.wikipedia.org/wiki/Full%20custom
|
In integrated circuit design, full-custom is a design methodology in which the layout of each individual transistor on the integrated circuit (IC), and the interconnections between them, are specified. Alternatives to full-custom design include various forms of semi-custom design, such as the repetition of small transistor subcircuits; one such methodology is the use of standard cell libraries (which are themselves designed full-custom).
Full-custom design potentially maximizes the performance of the chip, and minimizes its area, but is extremely labor-intensive to implement. Full-custom design is limited to ICs that are to be fabricated in extremely high volumes, notably certain microprocessors and a small number of application-specific integrated circuits (ASICs).
As of 2008 the main factor affecting the design and production of ASICs was the high cost of mask sets (the number of which depends on the number of IC layers) and the requisite EDA design tools. The mask sets are required in order to transfer the ASIC designs onto the wafer.
See also
Electronics design flow
|
https://en.wikipedia.org/wiki/Index%20of%20logic%20articles
|
A
A System of Logic --
A priori and a posteriori --
Abacus logic --
Abduction (logic) --
Abductive validation --
Academia Analitica --
Accuracy and precision --
Ad captandum --
Ad hoc hypothesis --
Ad hominem --
Affine logic --
Affirming the antecedent --
Affirming the consequent --
Algebraic logic --
Ambiguity --
Analysis --
Analysis (journal) --
Analytic reasoning --
Analytic–synthetic distinction --
Anangeon --
Anecdotal evidence --
Antecedent (logic) --
Antepredicament --
Anti-psychologism --
Antinomy --
Apophasis --
Appeal to probability --
Appeal to ridicule --
Archive for Mathematical Logic --
Arché --
Argument --
Argument by example --
Argument form --
Argument from authority --
Argument map --
Argumentation theory --
Argumentum ad baculum --
Argumentum e contrario --
Ariadne's thread (logic) --
Aristotelian logic --
Aristotle --
Association for Informal Logic and Critical Thinking --
Association for Logic, Language and Information --
Association for Symbolic Logic --
Attacking Faulty Reasoning --
Australasian Association for Logic --
Axiom --
Axiom independence --
Axiom of reducibility --
Axiomatic system --
Axiomatization --
B
Backward chaining --
Barcan formula --
Begging the question --
Begriffsschrift --
Belief --
Belief bias --
Belief revision --
Benson Mates --
Bertrand Russell Society --
Biconditional elimination --
Biconditional introduction --
Bivalence and related laws --
Blue and Brown Books --
Boole's syllogistic --
Boolean algebra (logic) --
Boolean algebra (structure) --
Boolean network --
C
Canonical form --
Canonical form (Boolean algebra) --
Cartesian circle --
Case-based reasoning --
Categorical logic --
Categories (Aristotle) --
Categories (Peirce) --
Category mistake --
Catuṣkoṭi --
Circular definition --
Circular reasoning --
Circular reference --
Circular reporting --
Circumscription (logic) --
Circumscription (taxonomy) --
Classical logic --
Clocked logic --
Cognitive bias --
Cointerpretability --
Colorless green ideas sleep fu
|
https://en.wikipedia.org/wiki/Biochemical%20cascade
|
A biochemical cascade, also known as a signaling cascade or signaling pathway, is a series of chemical reactions that occur within a biological cell when initiated by a stimulus. This stimulus, known as a first messenger, acts on a receptor that is transduced to the cell interior through second messengers which amplify the signal and transfer it to effector molecules, causing the cell to respond to the initial stimulus. Most biochemical cascades are series of events, in which one event triggers the next, in a linear fashion. At each step of the signaling cascade, various controlling factors are involved to regulate cellular actions, in order to respond effectively to cues about their changing internal and external environments.
An example would be the coagulation cascade of secondary hemostasis which leads to fibrin formation, and thus, the initiation of blood coagulation. Another example, sonic hedgehog signaling pathway, is one of the key regulators of embryonic development and is present in all bilaterians. Signaling proteins give cells information to make the embryo develop properly. When the pathway malfunctions, it can result in diseases like basal cell carcinoma. Recent studies point to the role of hedgehog signaling in regulating adult stem cells involved in maintenance and regeneration of adult tissues. The pathway has also been implicated in the development of some cancers. Drugs that specifically target hedgehog signaling to fight diseases are being actively developed by a number of pharmaceutical companies.
Introduction
Signaling cascades
Cells require full and functional cellular machinery to live. When they belong to complex multicellular organisms, they need to communicate among themselves and work together to sustain the organism. These communications between cells trigger intracellular signaling cascades, termed signal transduction pathways, that regulate specific cellular functions. Each signal transduction occurs with a p
|
https://en.wikipedia.org/wiki/Biasing
|
In electronics, biasing is the setting of DC (direct current) operating conditions (current and voltage) of an electronic component that processes time-varying signals. Many electronic devices, such as diodes, transistors and vacuum tubes, whose function is processing time-varying (AC) signals, also require a steady (DC) current or voltage at their terminals to operate correctly. This current or voltage is called bias. The AC signal applied to them is superposed on this DC bias current or voltage.
The operating point of a device, also known as bias point, quiescent point, or Q-point, is the DC voltage or current at a specified terminal of an active device (a transistor or vacuum tube) with no input signal applied. A bias circuit is a portion of the device's circuit that supplies this steady current or voltage.
Overview
In electronics, 'biasing' usually refers to a fixed DC voltage or current applied to a terminal of an electronic component such as a diode, transistor or vacuum tube in a circuit in which AC signals are also present, in order to establish proper operating conditions for the component. For example, a bias voltage is applied to a transistor in an electronic amplifier to allow the transistor to operate in a particular region of its transconductance curve. For vacuum tubes, a grid bias voltage is often applied to the grid electrodes for the same reason.
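A back-of-the-envelope sketch of finding such an operating point (in Python; the supply voltage, resistor values and the 0.7 V base-emitter drop are textbook-style assumptions for a voltage-divider-biased NPN stage, not values from this article):

Vcc = 12.0             # supply voltage, volts (assumed)
R1, R2 = 39e3, 10e3    # divider resistors, ohms (assumed)
RC, RE = 3.3e3, 1.0e3  # collector and emitter resistors, ohms (assumed)
Vbe = 0.7              # typical silicon base-emitter drop, volts

Vb = Vcc * R2 / (R1 + R2)      # base voltage, ignoring base current
Ve = Vb - Vbe                  # emitter voltage
Ic = Ve / RE                   # collector current, assuming large current gain
Vce = Vcc - Ic * (RC + RE)     # collector-emitter voltage at the Q-point
print(f"Q-point: Ic = {Ic * 1e3:.2f} mA, Vce = {Vce:.2f} V")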
In magnetic tape recording, the term bias is also used for a high-frequency signal added to the audio signal and applied to the recording head, to improve the quality of the recording on the tape. This is called tape bias.
Importance in linear circuits
Linear circuits involving transistors typically require specific DC voltages and currents for correct operation, which can be achieved using a biasing circuit. As an example of the need for careful biasing, consider a transistor amplifier. In linear amplifiers, a small input signal gives a larger output signal without any change in shape (low distortion
|
https://en.wikipedia.org/wiki/PC/TCP%20Packet%20Driver
|
PC/TCP Packet Driver is a networking API for MS-DOS, PC DOS, and later x86 DOS implementations such as DR-DOS, FreeDOS, etc. It implements the lowest levels of a TCP/IP stack, where the remainder is typically implemented either by terminate-and-stay-resident drivers or as a library linked into an application program. It was invented in 1983 at MIT's Lab for Computer Science (CSR/CSC group under Jerry Saltzer and David D. Clark), and was commercialized in 1986 by FTP Software.
A packet driver uses an x86 software interrupt number (INT) between 60h and 80h. The number used is detected at runtime; it is most commonly 60h, but may be changed to avoid application programs that use fixed interrupts for internal communications. The interrupt vector is used as a pointer (4 bytes, little endian) to the address of a possible interrupt handler. If the null-terminated ASCII text string "PKT  DRVR" (note the two spaces in the middle) is found within the first 12 bytes immediately following the entry point -- more specifically in bytes 3 through 11 -- then a driver has been located.
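A toy illustration of that signature check (a Python sketch; the byte layout and the two-space signature follow the description above, and the example buffer is invented):

# Nine signature characters occupy bytes 3-11, followed by a terminating null.
SIGNATURE = b"PKT  DRVR"

def looks_like_packet_driver(entry: bytes) -> bool:
    # 'entry' is a dump of the bytes at a candidate interrupt handler's entry point.
    return entry[3:12] == SIGNATURE and entry[12:13] == b"\x00"

# Hypothetical dump: a 3-byte jump instruction, then the signature and its null.
fake_entry = b"\xeb\x0c\x90" + SIGNATURE + b"\x00"
print(looks_like_packet_driver(fake_entry))   # True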
Packet drivers can implement many different network interfaces, including Ethernet, Token Ring, RS-232, Arcnet, and X.25.
Functions
Drivers
WinPKT is a driver that enables the use of packet drivers under Microsoft Windows, which moves applications around in memory.
W3C507 is a DLL to packet driver for the Microsoft Windows environment.
Support for Ethernet-like network interfaces over serial lines (using the 8250 UART), CSLIP, IPX, Token Ring, LocalTalk and ARCNET.
See also
Crynwr Collection - alternative free packet driver collection
Network Driver Interface Specification (NDIS) - developed by Microsoft and 3Com, free wrappers
Open Data-Link Interface (ODI) - developed by Apple and Novell
Universal Network Device Interface (UNDI) - used by Intel PXE
Uniform Driver Interface (UDI) - defunct
Preboot Execution Environment - network boot by Intel, widespread
|
https://en.wikipedia.org/wiki/Sclerobiont
|
Sclerobionts are organisms, considered collectively, that live in or on any kind of hard substrate (Taylor and Wilson, 2003). A few examples of sclerobionts include Entobia borings, Gastrochaenolites borings, Talpina borings, serpulids, encrusting oysters, encrusting foraminiferans, Stomatopora bryozoans, and “Berenicea” bryozoans.
See also
Bioerosion
|
https://en.wikipedia.org/wiki/Proof%20by%20exhaustion
|
Proof by exhaustion, also known as proof by cases, proof by case analysis, complete induction or the brute force method, is a method of mathematical proof in which the statement to be proved is split into a finite number of cases or sets of equivalent cases, and where each type of case is checked to see if the proposition in question holds. This is a method of direct proof. A proof by exhaustion typically contains two stages:
A proof that the set of cases is exhaustive; i.e., that each instance of the statement to be proved matches the conditions of (at least) one of the cases.
A proof of each of the cases.
The prevalence of digital computers has greatly increased the convenience of using the method of exhaustion (e.g., the first computer-assisted proof of four color theorem in 1976), though such approaches can also be challenged on the basis of mathematical elegance. Expert systems can be used to arrive at answers to many of the questions posed to them. In theory, the proof by exhaustion method can be used whenever the number of cases is finite. However, because most mathematical sets are infinite, this method is rarely used to derive general mathematical results.
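As a tiny example of such a computer-assisted exhaustive check (a Python sketch; the range of integers tested is an arbitrary choice), the perfect-cube fact proved by cases in the example below can be confirmed by brute force:

# Every perfect cube is congruent to 0, 1 or 8 (i.e. -1) modulo 9.
assert all((n ** 3) % 9 in {0, 1, 8} for n in range(-10_000, 10_001))
print("verified for all integers n with |n| <= 10000")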
In the Curry–Howard isomorphism, proof by exhaustion and case analysis are related to ML-style pattern matching.
Example
Proof by exhaustion can be used to prove that if an integer is a perfect cube, then it must be either a multiple of 9, 1 more than a multiple of 9, or 1 less than a multiple of 9.
Proof:
Each perfect cube is the cube of some integer n, where n is either a multiple of 3, 1 more than a multiple of 3, or 1 less than a multiple of 3. So these three cases are exhaustive:
Case 1: If n = 3p, then n3 = 27p3, which is a multiple of 9.
Case 2: If n = 3p + 1, then n3 = 27p3 + 27p2 + 9p + 1, which is 1 more than a multiple of 9. For instance, if n = 4 then n3 = 64 = 9×7 + 1.
Case 3: If n = 3p − 1, then n3 = 27p3 − 27p2 + 9p − 1, which is 1 less than a multiple of 9. For instance, if n = 5 th
|
https://en.wikipedia.org/wiki/Northbound%20interface
|
In computer networking and computer architecture, a northbound interface of a component is an interface that allows the component to communicate with a higher level component, using the latter component's southbound interface. The northbound interface conceptualizes the lower level details (e.g., data or functions) used by, or in, the component, allowing the component to interface with higher level layers.
In architectural overviews, the northbound interface is normally drawn at the top of the component it is defined in; hence the name northbound interface. A southbound interface decomposes concepts into the technical details, mostly specific to a single component of the architecture. Southbound interfaces are drawn at the bottom of an architectural overview.
Typical use
A northbound interface is typically an output-only interface (as opposed to one that accepts user input) found in carrier-grade network and telecommunications network elements. The languages or protocols commonly used include SNMP and TL1. For example, a device that is capable of sending out syslog messages but that is not configurable by the user is said to implement a northbound interface. Other examples include SMASH, IPMI, WSMAN, and SOAP.
The term is also important for software-defined networking (SDN), to facilitate communication between the physical devices, the SDN software and applications running on the network.
|
https://en.wikipedia.org/wiki/Buchdahl%27s%20theorem
|
In general relativity, Buchdahl's theorem, named after Hans Adolf Buchdahl, makes more precise the notion that there is a maximal sustainable density for ordinary gravitating matter. It gives an inequality between the mass and radius that must be satisfied for static, spherically symmetric matter configurations under certain conditions. In particular, for areal radius R, the mass M must satisfy
M ≤ 4Rc²/(9G),
where G is the gravitational constant and c is the speed of light. This inequality is often referred to as Buchdahl's bound. The bound has historically also been called Schwarzschild's limit as it was first noted by Karl Schwarzschild to exist in the special case of a constant density fluid. However, this terminology should not be confused with the Schwarzschild radius, which is notably smaller than the radius at the Buchdahl bound.
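A quick numerical sketch of the bound (in Python; the 12 km radius is an arbitrary neutron-star-like value chosen only for illustration):

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def buchdahl_mass_limit(R):
    # Maximum mass (kg) allowed by the bound M <= 4 R c^2 / (9 G).
    return 4.0 * c**2 * R / (9.0 * G)

def schwarzschild_radius(M):
    # Schwarzschild radius (m) of mass M, for comparison with R.
    return 2.0 * G * M / c**2

R = 12e3
M_max = buchdahl_mass_limit(R)
print(f"Buchdahl limit at R = 12 km: {M_max / M_sun:.2f} solar masses")
print(f"Schwarzschild radius at that mass: {schwarzschild_radius(M_max) / 1e3:.2f} km")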
Theorem
Given a static, spherically symmetric solution to the Einstein equations (without cosmological constant) with matter confined to areal radius R that behaves as a perfect fluid with a density that does not increase outwards. (An areal radius R corresponds to a sphere of surface area 4πR². In curved spacetime the proper radius of such a sphere is not necessarily R.) Assume in addition that the density and pressure cannot be negative. The mass of this solution must then satisfy
M ≤ 4Rc²/(9G).
For his proof of the theorem, Buchdahl uses the Tolman-Oppenheimer-Volkoff (TOV) equation.
Significance
The Buchdahl theorem is useful when looking for alternatives to black holes. Such attempts are often inspired by the information paradox; a way to explain (part of) the dark matter; or to criticize that observations of black holes are based on excluding known astrophysical alternatives (such as neutron stars) rather than direct evidence. However, for a viable alternative the object sometimes needs to be extremely compact and, in particular, to violate the Buchdahl inequality. This implies that one of the assumptions of Buchdahl's theorem must be invalid. A
|
https://en.wikipedia.org/wiki/List%20of%20scientific%20publications%20by%20Albert%20Einstein
|
Albert Einstein (1879–1955) was a renowned theoretical physicist of the 20th century, best known for his theories of special relativity and general relativity. He also made important contributions to statistical mechanics, especially his treatment of Brownian motion, his resolution of the paradox of specific heats, and his connection of fluctuations and dissipation. Despite his reservations about its interpretation, Einstein also made seminal contributions to quantum mechanics and, indirectly, quantum field theory, primarily through his theoretical studies of the photon.
Einstein's scientific publications are listed below in four tables: journal articles, book chapters, books and authorized translations. Each publication is indexed in the first column by its number in the Schilpp bibliography (Albert Einstein: Philosopher–Scientist, pp. 694–730) and by its article number in Einstein's Collected Papers. Complete references for these two bibliographies may be found below in the Bibliography section. The Schilpp numbers are used for cross-referencing in the Notes (the final column of each table), since they cover a greater time period of Einstein's life at present. The English translations of titles are generally taken from the published volumes of the Collected Papers. For some publications, however, such official translations are not available; unofficial translations are indicated with a § superscript. Although the tables are presented in chronological order by default, each table can be re-arranged in alphabetical order for any column by the reader clicking on the arrows at the top of that column. For illustration, to re-order a table by subject—e.g., to group together articles that pertain to "General relativity" or "Specific heats"—one need only click on the arrows in the "Classification and Notes" columns. To print out the re-sorted table, one may print it directly by using the web-browser Print option; the "Printable version" link at the left gives o
|
https://en.wikipedia.org/wiki/Computer%20architecture%20simulator
|
A computer architecture simulator is a program that simulates the execution of computer architecture.
Computer architecture simulators are used for the following purposes:
Lowering cost by evaluating hardware designs without building physical hardware systems.
Enabling access to unobtainable hardware.
Increasing the precision and volume of computer performance data.
Introducing abilities that are not normally possible on real hardware such as running code backwards when an error is detected or running in faster-than-real time.
Categories
Computer architecture simulators can be classified into many different categories depending on the context.
Scope: Microarchitecture simulators model the microprocessor and its components. Full-system simulators also model the processor, memory systems, and I/O devices.
Detail: Functional simulators, such as instruction set simulators, achieve the same function as the modeled components and can run faster when timing is not considered (a toy functional simulator is sketched after this list). Timing simulators are functional simulators that also reproduce timing. Timing simulators can be further categorized into digital cycle-accurate and analog sub-cycle simulators.
Workload: Trace-driven simulators (also called event-driven simulators) react to pre-recorded streams of instructions with some fixed input. Execution-driven simulators allow dynamic change of instructions to be executed depending on different input data.
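A toy functional (instruction-set-level) simulator in Python for a made-up accumulator machine, illustrating the "functional" category above; timing is deliberately not modeled:

def run(program, memory):
    # Functionally simulate a tiny invented accumulator ISA.
    # Instructions: ("LOAD", addr), ("ADD", addr), ("STORE", addr), ("HALT",).
    acc, pc = 0, 0
    while True:
        op, *args = program[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[args[0]]
        elif op == "ADD":
            acc += memory[args[0]]
        elif op == "STORE":
            memory[args[0]] = acc
        elif op == "HALT":
            return memory
        else:
            raise ValueError(f"unknown opcode {op}")

# Compute memory[2] = memory[0] + memory[1].
print(run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT",)], [3, 4, 0]))   # [3, 4, 7]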
Full-system simulators
A full-system simulator is execution-driven architecture simulation at such a level of detail that complete software stacks from real systems can run on the simulator without any modification. A full-system simulator provides virtual hardware that is independent of the nature of the host computer. The full-system model typically includes processor cores, peripheral devices, memories, interconnection buses, and network connections. Emulators are full-system simulators that imitate obsolete hardware rather than hardware under development.
|
https://en.wikipedia.org/wiki/Valencia%20Koomson
|
Valencia Joyner Koomson is an associate professor in the Department of Electrical and Computer Engineering and adjunct professor in the Department of Computer Science at the Tufts University School of Engineering. Koomson is also the principal investigator for the Advanced Integrated Circuits and Systems Lab at Tufts University.
Background
Koomson was born in Washington, DC, and graduated from Benjamin Banneker Academic High School. Her parents, Otis and Vernese Joyner, moved to Washington DC during the Great Migration after living for years as sharecroppers in Wilson County, North Carolina. Her family history can be traced back to the antebellum period. Her oldest known relative is Hagar Atkinson, an enslaved African woman whose name is recorded in the will of a plantation owner in Johnston County, North Carolina.
Career
Koomson attended the Massachusetts Institute of Technology, graduating with a BS in Electrical Engineering and Computer Science in 1998 and a Masters of Engineering in 1999. Koomson subsequently earned her Master of Philosophy from the University of Cambridge in 2000, followed by her PhD in Electrical Engineering from the same institution in 2003.
Koomson was an adjunct professor at Howard University from 2004 to 2005, and during that period was a Senior Research Engineer at the University of Southern California's Information Sciences Institute (USC/ISI). She was a Visiting Professor at Rensselaer Polytechnic Institute and Boston University in 2008 and 2013, respectively. Koomson joined Tufts University in 2005 as an assistant professor, and became an associate professor in 2011. In 2020, Koomson was named an MLK Visiting Professor at MIT for the academic year 2020/2021.
Research
Koomson's research lies at the intersection of biology, medicine, and electrical engineering. Her interests are in nanoelectronic circuits, systems for wearable and implantable medical devices, semiconductors, and advanced nano-/microfluidic systems to probe int
|
https://en.wikipedia.org/wiki/Trademark%20%28computer%20security%29
|
A Trademark in computer security is a contract between code that verifies security properties of an object and code that requires that an object have certain security properties. As such it is useful in ensuring secure information flow. In object-oriented languages, trademarking is analogous to signing of data but can often be implemented without cryptography.
Operations
A trademark has two operations:
ApplyTrademark!(object)
This operation is analogous to the private key in a digital signature process, so must not be exposed to untrusted code.
It should only be applied to immutable objects, and it ensures that VerifyTrademark?, when later called on the same value, returns true.
VerifyTrademark?(object)
This operation is analogous to the public key in a digital signature process, so can be exposed to untrusted code.
Returns true if and only if ApplyTrademark! has been called with the given object.
Relationship to taint checking
Trademarking is the inverse of taint checking. Whereas taint checking is a black-listing approach that says that certain objects should not be trusted, trademarking is a white-listing approach that marks certain objects as having certain security properties.
Relationship to memoization
The apply trademark can be thought of as memoizing a verification process.
Relationship to contract verification
Sometimes a verification process does not need to be done because the fact that a value has a particular security property can be verified statically. In this case, the apply property is being used to assert that an object was produced by code that has been formally verified to only produce outputs with the particular security property.
Example
One way of applying a trademark in Java:
public class Trademark {
    /* Use a weak identity hash set instead if
       a.equals(b) && check(a) does not imply check(b). */
    private final java.util.Set<Object> trademarked =
        java.util.Collections.newSetFromMap(new java.util.WeakHashMap<Object, Boolean>());
    // ApplyTrademark!: must only be reachable from trusted code.
    public synchronized void apply(Object o) { trademarked.add(o); }
    // VerifyTrademark?: may be exposed to untrusted code.
    public synchronized boolean verify(Object o) { return trademarked.contains(o); }
}
|
https://en.wikipedia.org/wiki/Calculator%20input%20methods
|
There are various ways in which calculators interpret keystrokes. These can be categorized into two main types:
On a single-step or immediate-execution calculator, the user presses a key for each operation, calculating all the intermediate results, before the final value is shown.
On an expression or formula calculator, one types in an expression and then presses a key, such as "=" or "Enter", to evaluate the expression. There are various systems for typing in an expression, as described below.
Immediate execution
The immediate execution mode of operation (also known as single-step, algebraic entry system (AES) or chain calculation mode) is commonly employed on most general-purpose calculators. In most simple four-function calculators, such as the Windows calculator in Standard mode and those included with most early operating systems, each binary operation is executed as soon as the next operator is pressed, and therefore the order of operations in a mathematical expression is not taken into account. Scientific calculators, including the Scientific mode in the Windows calculator and most modern software calculators, have buttons for brackets and can take order of operation into account. Also, for unary operations, like √ or x2, the number is entered first, then the operator; this is largely because the display screens on these kinds of calculators are generally composed entirely of seven-segment characters and thus capable of displaying only numbers, not the functions associated with them. This mode of operation also makes it impossible to change the expression being input without clearing the display entirely.
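A small sketch contrasting the two behaviours for the expression 2 + 3 × 4 (in Python; the chain-calculator model is simplified to binary operations applied as soon as the next operator key is pressed):

import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def immediate_execution(tokens):
    # Apply each operator as soon as the next one arrives; no precedence.
    result = tokens[0]
    for op, operand in zip(tokens[1::2], tokens[2::2]):
        result = OPS[op](result, operand)
    return result

print(immediate_execution([2, "+", 3, "*", 4]))   # 20, like a simple four-function calculator
print(eval("2 + 3 * 4"))                          # 14, expression evaluation with precedence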
The first two examples have been given twice. The first version is for simple calculators, showing how it is necessary to rearrange operands in order to get the correct result. The second version is for scientific calculators, where operator precedence is observed. Different forms of operator precedence schemes exist. In the algebraic entry system with
|
https://en.wikipedia.org/wiki/Ship%20model%20basin
|
A ship model basin is a basin or tank used to carry out hydrodynamic tests with ship models, for the purpose of designing a new (full sized) ship, or refining the design of a ship to improve the ship's performance at sea. It can also refer to the organization (often a company) that owns and operates such a facility.
An engineering firm acts as a contractor to the relevant shipyards, and provides hydrodynamic model tests and numerical calculations to support the design and development of ships and offshore structures.
History
The eminent English engineer William Froude published a series of influential papers on ship designs for maximising stability in the 1860s. The Institution of Naval Architects eventually commissioned him to identify the most efficient hull shape. He validated his theoretical models with extensive empirical testing, using scale models for the different hull dimensions. He established a formula (now known as the Froude number) by which the results of small-scale tests could be used to predict the behaviour of full-sized hulls. He built a sequence of 3, 6 and 12 foot scale models and used them in towing trials to establish resistance and scaling laws. His experiments were later vindicated in full-scale trials conducted by the Admiralty and as a result the first ship model basin was built, at public expense, at his home in Torquay. Here he was able to combine mathematical expertise with practical experimentation to such good effect that his methods are still followed today.
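A short illustration of Froude scaling as used in towing-tank work (a Python sketch; the model length, tank speed and scale factor are invented example values): keeping the Froude number V/√(gL) equal for model and ship means the full-scale speed grows with the square root of the length ratio.

import math

g = 9.81                  # gravitational acceleration, m/s^2

def froude_number(speed, length):
    # Froude number V / sqrt(g * L).
    return speed / math.sqrt(g * length)

scale = 25.0              # full-size ship is 25x the model length (assumed)
model_length = 3.0        # m
model_speed = 1.2         # m/s, measured in the towing tank (assumed)

ship_speed = model_speed * math.sqrt(scale)   # equal Froude numbers for model and ship
assert abs(froude_number(model_speed, model_length)
           - froude_number(ship_speed, model_length * scale)) < 1e-9
print(f"predicted full-scale speed: {ship_speed:.2f} m/s")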
Inspired by Froude's successful work, shipbuilding company William Denny and Brothers completed the world's first commercial example of a ship model basin in 1883. The facility was used to test models of a variety of vessels and explored various propulsion methods, including propellers, paddles and vane wheels. Experiments were carried out on models of the Denny-Brown stabilisers and the Denny hovercraft to gauge their feasibility. Tank staff also carr
|
https://en.wikipedia.org/wiki/Frequency%20band
|
A frequency band is an interval in the frequency domain, delimited by a lower frequency and an upper frequency. The term may refer to a radio band (such as wireless communication standards set by the International Telecommunication Union) or an interval of some other spectrum.
The frequency range of a system is the range over which it is considered to provide satisfactory performance, such as a useful level of signal with acceptable distortion characteristics. A listing of the upper and lower frequency limits of a system is not useful without a criterion for what the range represents.
Many systems are characterized by the range of frequencies to which they respond. For example:
Musical instruments produce different ranges of notes within the hearing range.
The electromagnetic spectrum can be divided into many different ranges such as visible light, infrared or ultraviolet radiation, radio waves, X-rays and so on, and each of these ranges can in turn be divided into smaller ranges.
A radio communications signal must occupy a range of frequencies carrying most of its energy, called its bandwidth. A frequency band may represent one communication channel or be subdivided into many. Allocation of radio frequency ranges to different uses is a major function of radio spectrum allocation.
See also
|
https://en.wikipedia.org/wiki/Classification%20theorem
|
In mathematics, a classification theorem answers the classification problem "What are the objects of a given type, up to some equivalence?". It gives a non-redundant enumeration: each object is equivalent to exactly one class.
A few issues related to classification are the following.
The equivalence problem is "given two objects, determine if they are equivalent".
A complete set of invariants, together with which invariants are realizable, solves the classification problem, and is often a step in solving it.
A computable complete set of invariants (together with which invariants are realizable) solves both the classification problem and the equivalence problem.
A canonical form solves the classification problem, and is more data: it not only classifies every class, but provides a distinguished (canonical) element of each class.
There exist many classification theorems in mathematics, as described below.
Geometry
Classification of Euclidean plane isometries
Classification theorems of surfaces
Classification of two-dimensional closed manifolds
Enriques–Kodaira classification of algebraic surfaces (complex dimension two, real dimension four)
Nielsen–Thurston classification which characterizes homeomorphisms of a compact surface
Thurston's eight model geometries, and the geometrization conjecture
Berger classification
Classification of Riemannian symmetric spaces
Classification of 3-dimensional lens spaces
Classification of manifolds
Algebra
Classification of finite simple groups
Classification of Abelian groups
Classification of Finitely generated abelian group
Classification of Rank 3 permutation group
Classification of 2-transitive permutation groups
Artin–Wedderburn theorem — a classification theorem for semisimple rings
Classification of Clifford algebras
Classification of low-dimensional real Lie algebras
Classification of Simple Lie algebras and groups
Classification of simple complex Lie algebras
Classification of simple real Lie algebras
Classification of centerless simple Lie gro
|
https://en.wikipedia.org/wiki/Software%20configuration%20management
|
In software engineering, software configuration management (SCM or S/W CM) is the task of tracking and controlling changes in the software, part of the larger cross-disciplinary field of configuration management. SCM practices include revision control and the establishment of baselines. If something goes wrong, SCM can determine the "what, when, why and who" of the change. If a configuration is working well, SCM can determine how to replicate it across many hosts.
The acronym "SCM" is also expanded as source configuration management process and software change and configuration management. However, "configuration" is generally understood to cover changes typically made by a system administrator.
Purposes
The goals of SCM are generally:
Configuration identification - Identifying configurations, configuration items and baselines.
Configuration control - Implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests that are sent against any baseline.
Configuration status accounting - Recording and reporting all the necessary information on the status of the development process.
Configuration auditing - Ensuring that configurations contain all their intended parts and are sound with respect to their specifying documents, including requirements, architectural specifications and user manuals.
Build management - Managing the process and tools used for builds.
Process management - Ensuring adherence to the organization's development process.
Environment management - Managing the software and hardware that host the system.
Teamwork - Facilitate team interactions related to the process.
Defect tracking - Making sure every defect has traceability back to the source.
With the introduction of cloud computing and DevOps the purposes of SCM tools have become merged in some cases. The SCM tools themselves have become virtual appliances that can be instantiated as v
|
https://en.wikipedia.org/wiki/Basics%20of%20blue%20flower%20colouration
|
Blue flower colour has always been associated with something unusual and desired; blue roses in particular were long assumed to be an unrealisable dream. Blue colour in flower petals is caused by anthocyanins, which are members of the flavonoid class of metabolites. Three main classes of anthocyanin pigments can be distinguished: the cyanidin type (two hydroxyl groups in the B-ring), responsible for red colouration; the pelargonidin type (one hydroxyl group in the B-ring), responsible for orange colour; and the delphinidin type (three hydroxyl groups in the B-ring), responsible for violet/blue colouration of flowers and fruits. The main difference between these anthocyanin types is thus the number of hydroxyl groups in the B-ring. Nevertheless, in the monomeric state anthocyanins never show blue colour at weakly acidic or neutral pH. The mechanisms of blue colour formation are in most cases very complicated: the presence of delphinidin-type pigments is not sufficient, and the pH as well as the formation of complexes of anthocyanins with flavones and metal ions also play a great role.
Mechanisms
Self-association is correlated with anthocyanin concentration. At higher concentrations, a shift in the absorbance maximum and an increase in colour intensity can be observed. Anthocyanin molecules associate with one another, which results in a stronger and darker colour.
Co-pigmentation stabilizes and protects anthocyanins within the complexes. Co-pigments are colourless or slightly yellow. They are usually flavonoids (flavones, flavonols, flavanones, flavanols), other polyphenols, alkaloids, amino acids or organic acids. The most efficient co-pigments are flavonols such as rutin or quercetin and phenolic acids such as sinapic acid or ferulic acid. Association of a co-pigment with an anthocyanin causes a bathochromic effect, a shift of the absorption maximum to a longer wavelength, so that the colour changes from red towards blue. This phenomenon is also called the bluing effect. We can
|
https://en.wikipedia.org/wiki/Natural%20landscape
|
A natural landscape is the original landscape that exists before it is acted upon by human culture. The natural landscape and the cultural landscape are separate parts of the landscape. However, in the 21st century, landscapes that are totally untouched by human activity no longer exist, so that reference is sometimes now made to degrees of naturalness within a landscape.
In Silent Spring (1962) Rachel Carson describes a roadside verge as it used to look: "Along the roads, laurel, viburnum and alder, great ferns and wildflowers delighted the traveler’s eye through much of the year" and then how it looks now following the use of herbicides: "The roadsides, once so attractive, were now lined with browned and withered vegetation as though swept by fire". Even though the landscape before it is sprayed is biologically degraded, and may well contain alien species, the concept of what might constitute a natural landscape can still be deduced from the context.
The phrase "natural landscape" was first used in connection with landscape painting, and landscape gardening, to contrast a formal style with a more natural one, closer to nature. Alexander von Humboldt (1769 – 1859) was to further conceptualize this into the idea of a natural landscape separate from the cultural landscape. Then in 1908 geographer Otto Schlüter developed the terms original landscape (Urlandschaft) and its opposite cultural landscape (Kulturlandschaft) in an attempt to give the science of geography a subject matter that was different from the other sciences. An early use of the actual phrase "natural landscape" by a geographer can be found in Carl O. Sauer's paper "The Morphology of Landscape" (1925).
Origins of the term
The concept of a natural landscape was first developed in connection with landscape painting, though the actual term itself was first used in relation to landscape gardening. In both cases it was used to contrast a formal style with a more natural one, that is closer to nature. Chu
|
https://en.wikipedia.org/wiki/Storage%20organ
|
A storage organ is a part of a plant specifically modified for storage of energy (generally in the form of carbohydrates) or water. Storage organs often grow underground, where they are better protected from attack by herbivores. Plants that have an underground storage organ are called geophytes in the Raunkiær plant life-form classification system. Storage organs often, but not always, act as perennating organs which enable plants to survive adverse conditions (such as cold, excessive heat, lack of light or drought).
Relationship to perennating organ
Storage organs may act as perennating organs ('perennating' as in perennial, meaning "through the year", used in the sense of continuing beyond the year and in due course lasting for multiple years). These are used by plants to survive adverse periods in the plant's life-cycle (e.g. caused by cold, excessive heat, lack of light or drought). During these periods, parts of the plant die and then when conditions become favourable again, re-growth occurs from buds in the perennating organs. For example, geophytes growing in woodland under deciduous trees (e.g. bluebells, trilliums) die back to underground storage organs during summer when tree leaf cover restricts light and water is less available.
However, perennating organs need not be storage organs. After losing their leaves, deciduous trees grow them again from 'resting buds', which are the perennating organs of phanerophytes in the Raunkiær classification, but which do not specifically act as storage organs. Equally, storage organs need not be perennating organs. Many succulents have leaves adapted for water storage, which they retain in adverse conditions.
Underground storage organ
In common parlance, underground storage organs may be generically called roots, tubers, or bulbs, but to the botanist there is more specific technical nomenclature:
True roots:
Storage taproot — e.g. carrot
Tuberous root or root tuber – e.g. Dahlia
Modified stems:
Bulb (a shor
|
https://en.wikipedia.org/wiki/Bioelectrospray
|
Bio-electrospraying is a technology that enables the deposition of living cells on various targets with a resolution that depends on cell size and not on the jetting phenomenon. It is envisioned that "unhealthy cells would draw a different charge at the needle from healthy ones, and could be identified by the mass spectrometer", with tremendous implications in the health care industry.
The early versions of bio-electrosprays were employed in several areas of research, most notably self-assembly of carbon nanotubes. Although the self-assembly mechanism is not clear yet, "elucidating electrosprays as a competing nanofabrication route for forming self-assemblies with a wide range of nanomaterials in the nanoscale for top-down based bottom-up assembly of structures." Future research may reveal important interactions between migrating cells and self-assembled nanostructures. Such nano-assemblies formed by means of this top-down approach could be explored as a bottom-up methodology for encouraging cell migration to those architectures for forming cell patterns to nano-electronics, which are a few examples, respectively.
After initial exploration with a single protein, increasingly complex systems were studied by bio-electrosprays. These include, but are not limited to, neuronal cells, stem cells, and even whole embryos. The potential of the method was demonstrated by investigating cytogenetic and physiological changes of human lymphocyte cells as well as conducting comprehensive genetic, genomic and physiological state studies of human cells and cells of the model yeast Saccharomyces cerevisiae.
See also
Electrospray ionization
|
https://en.wikipedia.org/wiki/List%20of%20real%20analysis%20topics
|
This is a list of articles that are considered real analysis topics.
General topics
Limits
Limit of a sequence
Subsequential limit – the limit of some subsequence
Limit of a function (see List of limits for a list of limits of common functions)
One-sided limit – either of the two limits of functions of real variables x, as x approaches a point from above or below
Squeeze theorem – confirms the limit of a function via comparison with two other functions
Big O notation – used to describe the limiting behavior of a function when the argument tends towards a particular value or infinity, usually in terms of simpler functions
Sequences and series
(see also list of mathematical series)
Arithmetic progression – a sequence of numbers such that the difference between the consecutive terms is constant
Generalized arithmetic progression – a sequence of numbers such that the difference between consecutive terms can be one of several possible constants
Geometric progression – a sequence of numbers such that each consecutive term is found by multiplying the previous one by a fixed non-zero number
Harmonic progression – a sequence formed by taking the reciprocals of the terms of an arithmetic progression
Finite sequence – see sequence
Infinite sequence – see sequence
Divergent sequence – see limit of a sequence or divergent series
Convergent sequence – see limit of a sequence or convergent series
Cauchy sequence – a sequence whose elements become arbitrarily close to each other as the sequence progresses
Convergent series – a series whose sequence of partial sums converges
Divergent series – a series whose sequence of partial sums diverges
Power series – a series of the form a0 + a1(x − c) + a2(x − c)2 + a3(x − c)3 + ⋯
Taylor series – a series of the form f(a) + f′(a)(x − a) + f″(a)(x − a)2/2! + ⋯
Maclaurin series – see Taylor series
Binomial series – the Maclaurin series of the function f given by f(x) = (1 + x)α
Telescoping series
Alternating series
Geometric series
Divergent geometric series
Harmonic series
Fourier series
Lambert series
Summation methods
Ce
|
https://en.wikipedia.org/wiki/Imaging%20biomarker
|
An imaging biomarker is a biologic feature, or biomarker detectable in an image. In medicine, an imaging biomarker is a feature of an image relevant to a patient's diagnosis. For example, a number of biomarkers are frequently used to determine risk of lung cancer. First, a simple lesion in the lung detected by X-ray, CT, or MRI can lead to the suspicion of a neoplasm. The lesion itself serves as a biomarker, but the minute details of the lesion serve as biomarkers as well, and can collectively be used to assess the risk of neoplasm. Some of the imaging biomarkers used in lung nodule assessment include size, spiculation, calcification, cavitation, location within the lung, rate of growth, and rate of metabolism. Each piece of information from the image represents a probability. Spiculation increases the probability of the lesion being cancer. A slow rate of growth indicates benignity. These variables can be added to the patient's history, physical exam, laboratory tests, and pathology to reach a proposed diagnosis. Imaging biomarkers can be measured using several techniques, such as CT, electroencephalography, magnetoencephalography, and MRI.
History
Imaging biomarkers are as old as the X-ray itself. A feature of a radiograph that represents some kind of pathology was first called a "Roentgen sign" after Wilhelm Röntgen, the discoverer of the X-ray. As the field of medical imaging developed and expanded to include numerous imaging modalities, imaging biomarkers have grown as well, in both quantity and complexity, most recently with the advent of chemical imaging.
Quantitative imaging biomarkers
A quantitative imaging biomarker (QIB) is an objective characteristic derived from an in vivo image, measured on a ratio or interval scale, as an indicator of normal biological processes, pathogenic processes or a response to a therapeutic intervention. An advantage of QIBs over qualitative imaging biomarkers is that they are better suited for the follow-up of patients or for use in clinical trials.
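As a concrete illustration (not drawn from the radiology literature, and with array names, voxel spacing and the synthetic "lesion" chosen purely for the example), the short sketch below computes two simple quantitative imaging biomarkers, lesion volume and mean lesion intensity, from a 3-D image array and a binary segmentation mask:

```python
import numpy as np

def lesion_qibs(image, mask, voxel_spacing_mm):
    """Compute two simple quantitative imaging biomarkers from a segmented lesion.

    image            -- 3-D array of image intensities (e.g. CT Hounsfield units)
    mask             -- boolean 3-D array marking lesion voxels
    voxel_spacing_mm -- (dz, dy, dx) voxel dimensions in millimetres
    """
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    volume_mm3 = mask.sum() * voxel_volume_mm3      # lesion volume (ratio scale)
    mean_intensity = float(image[mask].mean())      # mean intensity inside the lesion
    return {"volume_mm3": volume_mm3, "mean_intensity": mean_intensity}

# Toy example with synthetic data (illustrative values only).
rng = np.random.default_rng(0)
image = rng.normal(0, 30, size=(40, 64, 64))
mask = np.zeros_like(image, dtype=bool)
mask[15:25, 20:30, 20:30] = True                    # a 10x10x10-voxel "lesion"
print(lesion_qibs(image, mask, voxel_spacing_mm=(2.0, 0.8, 0.8)))
```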
|
https://en.wikipedia.org/wiki/Signal
|
In signal processing, a signal is a function that conveys information about a phenomenon. Any quantity that can vary over space or time can be used as a signal to share messages between observers. The IEEE Transactions on Signal Processing includes audio, video, speech, image, sonar, and radar as examples of signals. A signal may also be defined as observable change in a quantity over space or time (a time series), even if it does not carry information.
In nature, signals can be actions done by an organism to alert other organisms, ranging from the release of plant chemicals to warn nearby plants of a predator, to sounds or motions made by animals to alert other animals of food. Signaling occurs in all organisms even at cellular levels, with cell signaling. Signaling theory, in evolutionary biology, proposes that a substantial driver for evolution is the ability of animals to communicate with each other by developing ways of signaling. In human engineering, signals are typically provided by a sensor, and often the original form of a signal is converted to another form of energy using a transducer. For example, a microphone converts an acoustic signal to a voltage waveform, and a speaker does the reverse.
Another important property of a signal is its entropy or information content. Information theory serves as the formal study of signals and their content. The information of a signal is often accompanied by noise, which primarily refers to unwanted modifications of signals, but is often extended to include unwanted signals conflicting with desired signals (crosstalk). The reduction of noise is covered in part under the heading of signal integrity. The separation of desired signals from background noise is the field of signal recovery, one branch of which is estimation theory, a probabilistic approach to suppressing random disturbances.
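As a small numerical illustration of the information-content idea (a standalone sketch, not part of the formal theory; the choice of 16 quantization levels and the test signals are arbitrary), the following code estimates the Shannon entropy of a quantized signal from the empirical distribution of its sample values:

```python
import numpy as np

def empirical_entropy_bits(samples, n_levels=16):
    """Estimate the Shannon entropy (in bits per sample) after uniform quantization."""
    hist, _ = np.histogram(samples, bins=n_levels)   # quantize into n_levels bins
    p = hist / hist.sum()                            # empirical symbol probabilities
    p = p[p > 0]                                     # ignore empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

t = np.linspace(0, 1, 8000, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)                        # a deterministic tone
noise = np.random.default_rng(1).normal(size=t.size)      # a noise-like signal
print("sine  entropy ~", round(empirical_entropy_bits(sine), 2), "bits/sample")
print("noise entropy ~", round(empirical_entropy_bits(noise), 2), "bits/sample")
```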
Engineering disciplines such as electrical engineering have advanced the design, study, and implementation of systems involving t
|
https://en.wikipedia.org/wiki/List%20of%20set%20identities%20and%20relations
|
This article lists mathematical properties and laws of sets, involving the set-theoretic operations of union, intersection, and complementation and the relations of set equality and set inclusion. It also provides systematic procedures for evaluating expressions, and performing calculations, involving these operations and relations.
The binary operations of set union (∪) and intersection (∩) satisfy many identities. Several of these identities or "laws" have well established names.
Notation
Throughout this article, capital letters (such as L, M, R, and X) will denote sets. On the left hand side of an identity, typically,
L will be the left most set,
M will be the middle set, and
R will be the right most set.
This is to facilitate applying identities to expressions that are complicated or use the same symbols as the identity.
For example, the identity
may be read as:
Elementary set operations
For sets L and R define:
L ∪ R = {x : x ∈ L or x ∈ R}
L ∩ R = {x : x ∈ L and x ∈ R}
and
L ∖ R = {x : x ∈ L and x ∉ R}
where the symmetric difference L △ R is sometimes denoted by L ⊖ R and equals:
L △ R = (L ∖ R) ∪ (R ∖ L)
One set L is said to intersect another set R if L ∩ R ≠ ∅. Sets that do not intersect are said to be disjoint.
The power set of a set L is the set of all subsets of L and will be denoted by ℘(L).
Universe set and complement notation
The notation Lᶜ := X ∖ L
may be used if L is a subset of some set X that is understood (say from context, or because it is clearly stated what the superset X is).
It is emphasized that the definition of Lᶜ depends on context. For instance, had L been declared as a subset of Y, with the sets Y and X not necessarily related to each other in any way, then Lᶜ would likely mean Y ∖ L instead of X ∖ L.
If it is needed then, unless indicated otherwise, it should be assumed that X denotes the universe set, which means that all sets that are used in the formula are subsets of X.
In particular, the complement of a set L will be denoted by Lᶜ where, unless indicated otherwise, it should be assumed that Lᶜ denotes the complement of L in X (the universe).
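Because every identity in this article can be checked on concrete finite sets, a short sketch such as the following (not part of the article; the example sets and the chosen universe are arbitrary) can be used to spot-check identities like De Morgan's laws with Python's built-in set type:

```python
# Spot-check De Morgan's laws and distributivity on small example sets.
X = set(range(10))                 # universe, assumed for this example
L, M, R = {1, 2, 3}, {3, 4, 5}, {5, 6, 7}

def complement(s, universe=X):
    return universe - s

# De Morgan's laws
assert complement(L | R) == complement(L) & complement(R)
assert complement(L & R) == complement(L) | complement(R)

# Distributivity of intersection over union
assert L & (M | R) == (L & M) | (L & R)

# Symmetric difference expressed via set subtraction
assert L ^ R == (L - R) | (R - L)
print("all identities hold on these example sets")
```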
One subset involved
Assume L ⊆ X.
Identity:
Definition: is called a left identity element of a binary operator if fo
|
https://en.wikipedia.org/wiki/Mean%20inter-particle%20distance
|
Mean inter-particle distance (or mean inter-particle separation) is the mean distance between microscopic particles (usually atoms or molecules) in a macroscopic body.
Ambiguity
From very general considerations, the mean inter-particle distance is proportional to the size of the per-particle volume 1/n, i.e.,
⟨r⟩ ∼ n^(−1/3),
where n is the particle density. However, barring a few simple cases such as the ideal gas model, precise calculations of the proportionality factor are impossible analytically. Therefore, approximate expressions are often used. One such estimate is the Wigner–Seitz radius
r_ws = (3 / (4πn))^(1/3),
which corresponds to the radius of a sphere having the per-particle volume 1/n. Another popular definition is
⟨r⟩ = n^(−1/3),
corresponding to the length of the edge of the cube with the per-particle volume 1/n. The two definitions differ by a factor of approximately 1.61, so one has to exercise care if an article fails to define the parameter exactly. On the other hand, it is often used in qualitative statements where such a numeric factor is either irrelevant or plays an insignificant role, e.g.,
"a potential energy ... is proportional to some power n of the inter-particle distance r" (Virial theorem)
"the inter-particle distance is much larger than the thermal de Broglie wavelength" (Kinetic theory)
Ideal gas
Nearest neighbor distribution
We want to calculate the probability distribution function of the distance to the nearest neighbor (NN) particle. (The problem was first considered by Paul Hertz.) Let us assume N particles inside a sphere having volume V, so that n = N/V. Note that since the particles in the ideal gas are non-interacting, the probability to find a particle at a certain distance from another particle is the same as the probability to find a particle at the same distance from any other point; we shall use the center of the sphere.
An NN particle at distance r means exactly one of the N particles resides at that distance while the remaining N − 1
particles are at larger distances.
|
https://en.wikipedia.org/wiki/Genetic%20pollution
|
Genetic pollution is a term for uncontrolled gene flow into wild populations. It is defined as "the dispersal of contaminated altered genes from genetically engineered organisms to natural organisms, esp. by cross-pollination", but has come to be used in some broader ways. It is related to the population genetics concept of gene flow, and genetic rescue, which is genetic material intentionally introduced to increase the fitness of a population. It is called genetic pollution when it negatively impacts the fitness of a population, such as through outbreeding depression and the introduction of unwanted phenotypes which can lead to extinction.
Conservation biologists and conservationists have used the term to describe gene flow from domestic, feral, and non-native species into wild indigenous species, which they consider undesirable. They promote awareness of the effects of introduced invasive species that may "hybridize with native species, causing genetic pollution". In the fields of agriculture, agroforestry and animal husbandry, genetic pollution is used to describe gene flows between genetically engineered species and wild relatives. The use of the word "pollution" is meant to convey the idea that mixing genetic information is bad for the environment, but because the mixing of genetic information can lead to a variety of outcomes, "pollution" may not always be the most accurate descriptor.
Gene flow to wild population
Some conservation biologists and conservationists have used genetic pollution for a number of years as a term to describe gene flow from a non-native, invasive subspecies, domestic, or genetically-engineered population to a wild indigenous population.
Importance
The introduction of genetic material into the gene pool of a population by human intervention can have both positive and negative effects on populations. When genetic material is intentionally introduced to increase the fitness of a population, this is called genetic rescue. When genet
|
https://en.wikipedia.org/wiki/List%20of%20baryons
|
Baryons are composite particles made of three quarks, as opposed to mesons, which are composite particles made of one quark and one antiquark. Baryons and mesons are both hadrons, which are particles composed solely of quarks or both quarks and antiquarks. The term baryon is derived from the Greek "βαρύς" (barys), meaning "heavy", because, at the time of their naming, it was believed that baryons were characterized by having greater masses than other particles that were classed as matter.
Until a few years ago, it was believed that some experiments showed the existence of pentaquarks – baryons made of four quarks and one antiquark. Prior to 2006 the particle physics community as a whole did not view the existence of pentaquarks as likely. On 13 July 2015, the LHCb collaboration at CERN reported results consistent with pentaquark states in the decay of bottom lambda baryons (Λb).
Since baryons are composed of quarks, they participate in the strong interaction. Leptons, on the other hand, are not composed of quarks and as such do not participate in the strong interaction. The best known baryons are protons and neutrons, which make up most of the mass of the visible matter in the universe, whereas electrons, the other major component of atoms, are leptons. Each baryon has a corresponding antiparticle, known as an antibaryon, in which quarks are replaced by their corresponding antiquarks. For example, a proton is made of two up quarks and one down quark, while its corresponding antiparticle, the antiproton, is made of two up antiquarks and one down antiquark.
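As a small illustration of how baryon quantum numbers follow from quark content (a generic sketch, not taken from the tables in this list), the electric charge of a baryon is simply the sum of the charges of its three quarks:

```python
from fractions import Fraction

# Electric charges of the quarks, in units of the elementary charge e.
QUARK_CHARGE = {
    "u": Fraction(2, 3), "c": Fraction(2, 3), "t": Fraction(2, 3),
    "d": Fraction(-1, 3), "s": Fraction(-1, 3), "b": Fraction(-1, 3),
}

def baryon_charge(quarks):
    """Sum the quark charges of a three-quark baryon, e.g. 'uud' for the proton."""
    return sum(QUARK_CHARGE[q] for q in quarks)

print("proton  (uud):", baryon_charge("uud"))   # 1
print("neutron (udd):", baryon_charge("udd"))   # 0
print("Delta++ (uuu):", baryon_charge("uuu"))   # 2
print("Omega-  (sss):", baryon_charge("sss"))   # -1
```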
Baryon properties
These lists detail all known and predicted baryons in total angular momentum J = 1/2 and J = 3/2 configurations with positive parity.
Baryons composed of one type of quark (uuu, ddd, ...) can exist in the J = 3/2 configuration, but J = 1/2 is forbidden by the Pauli exclusion principle.
Baryons composed of two types of quarks (uud, uus, ...) can exist in both J = 1/2 and J = 3/2 configurations.
Baryons composed o
|
https://en.wikipedia.org/wiki/Head-related%20transfer%20function
|
A head-related transfer function (HRTF), also known as anatomical transfer function (ATF), or a head shadow, is a response that characterizes how an ear receives a sound from a point in space. As sound strikes the listener, the size and shape of the head, ears, ear canal, density of the head, size and shape of nasal and oral cavities, all transform the sound and affect how it is perceived, boosting some frequencies and attenuating others. Generally speaking, the HRTF boosts frequencies from 2–5 kHz with a primary resonance of +17 dB at 2,700 Hz. But the response curve is more complex than a single bump, affects a broad frequency spectrum, and varies significantly from person to person.
A pair of HRTFs for two ears can be used to synthesize a binaural sound that seems to come from a particular point in space. It is a transfer function, describing how a sound from a specific point will arrive at the ear (generally at the outer end of the auditory canal). Some consumer home entertainment products designed to reproduce surround sound from stereo (two-speaker) headphones use HRTFs. Some forms of HRTF processing have also been included in computer software to simulate surround sound playback from loudspeakers.
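A minimal sketch of this idea follows (it assumes a pair of already-measured head-related impulse responses, hrir_left and hrir_right, for the desired direction; the variable names, the toy HRIRs and the use of scipy are illustrative assumptions, not a reference implementation):

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right head-related impulse responses.

    Played over headphones, the output should appear to come from the
    direction at which the HRIR pair was measured.
    """
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    stereo = np.stack([left, right], axis=1)
    return stereo / np.max(np.abs(stereo))          # normalize to avoid clipping

# Toy placeholder HRIRs: a pure interaural time/level difference.
fs = 44_100
mono = np.random.default_rng(0).normal(size=fs)     # one second of noise
hrir_left = np.zeros(64);  hrir_left[0] = 1.0       # sound arrives first at the left ear
hrir_right = np.zeros(64); hrir_right[30] = 0.6     # ... later and quieter at the right ear
out = binaural_render(mono, hrir_left, hrir_right)
print(out.shape)                                    # (samples, 2) stereo buffer
```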
Sound localization
Humans have just two ears, but can locate sounds in three dimensions – in range (distance), in direction above and below (elevation), in front and to the rear, as well as to either side (azimuth). This is possible because the brain, inner ear, and the external ears (pinna) work together to make inferences about location. This ability to localize sound sources may have developed in humans and ancestors as an evolutionary necessity, since the eyes can only see a fraction of the world around a viewer and vision is hampered in darkness, while the ability to localize a sound source works in all directions, to varying accuracy, regardless of the surrounding light.
Humans estimate the location of a source by taking cues derived from one ear (monaura
|
https://en.wikipedia.org/wiki/Cockpit%20display%20system
|
The cockpit display system (CDS) provides the visible (and audible) portion of the Human Machine Interface (HMI) by which aircrew manage the modern glass cockpit and thus interface with the aircraft avionics.
History
Prior to the 1970s, cockpits did not typically use any electronic instruments or displays (see Glass cockpit history). Improvements in computer technology, the need for enhancement of situational awareness in more complex environments, and the rapid growth of commercial air transportation, together with continued military competitiveness, led to increased levels of integration in the cockpit.
The average transport aircraft in the mid-1970s had more than one hundred cockpit instruments and controls, and the primary flight instruments were already crowded with indicators, crossbars, and symbols, and the growing number of cockpit elements were competing for cockpit space and pilot attention.
Architecture
Glass cockpits routinely include high-resolution multi-color displays (often LCD displays) that present information relating to the various aircraft systems (such as flight management) in an integrated way. Integrated Modular Avionics (IMA) architecture allows for the integration of the cockpit instruments and displays at the hardware and software level to be maximized.
CDS software typically uses API code to integrate with the platform (such as OpenGL to access the graphics drivers for example). This software may be written manually or with the help of COTS tools such as GL Studio, VAPS, VAPS XT or SCADE Display.
Standards such as ARINC 661 specify the integration of the CDS at the software level with the aircraft system applications (called User Applications or UA).
See also
Acronyms and abbreviations in avionics
Avionics software
Integrated Modular Avionics
|
https://en.wikipedia.org/wiki/Food%20powder
|
Food powder or powdery food is the most common format of dried solid food material that meets specific quality standards, such as moisture content, particle size, and particular morphology. Common powdery food products include milk powder, tea powder, cocoa powder, coffee powder, soybean flour, wheat flour, and chili powder. Powders are discrete solid particles, ranging in size from nanometres to millimetres, that generally flow freely when shaken or tilted. The bulk properties of a powder are the combined effect of the properties of its particles; food products in the solid state are converted into powder form for ease of use, processing and keeping quality. Various terms are used to indicate particulate solids in bulk, such as powder, granules, flour and dust, though all these materials can be treated under the powder category. These common terminologies are based on the size or the source of the materials.
The particle size, distribution, shape and surface characteristics and the density of the powders are highly variable and depend on both the characteristics of the raw materials and processing conditions during their formations. These parameters contribute to the functional properties of powders, including flowability, packaging density, ease of handling, dust forming, mixing, compressibility and surface activity.
Characteristics
Microstructure
Food powder may be amorphous or crystalline in their molecular level structure. Depending on the process applied, the powders can be produced in either of these forms. Powders in crystalline state possess defined molecular alignment in the long-range order, while amorphous state is disordered, more open and porous. Common powders found in crystalline states are salts, sugars and organic acids. Meanwhile, many food products such as dairy powders, fruit and vegetable powders, honey powders and hydrolysed protein powders are normally in amorphous state. The properties of food powders including their functionality and their
|
https://en.wikipedia.org/wiki/Legion%20%28taxonomy%29
|
The legion, in biological classification, is a non-obligatory taxonomic rank within the Linnaean hierarchy sometimes used in zoology.
Taxonomic rank
In zoological taxonomy, the legion is:
subordinate to the class
superordinate to the cohort
consists of a group of related orders
Legions may be grouped into superlegions or subdivided into sublegions, and these again into infralegions.
Use in zoology
Legions and their super/sub/infra groups have been employed in some classifications of birds and mammals. Full use is made of all of these (along with cohorts and supercohorts) in, for example, McKenna and Bell's classification of mammals.
See also
Linnaean taxonomy
Mammal classification
|
https://en.wikipedia.org/wiki/Classical%20fluid
|
Classical fluids are systems of particles which retain a definite volume, and are at sufficiently high temperatures (compared to their Fermi energy) that quantum effects can be neglected. A system of hard spheres, interacting only by hard collisions (e.g., billiards, marbles), is a model classical fluid. Such a system is well described by the Percus–Yevick equation. Common liquids, e.g., liquid air, gasoline etc., are essentially mixtures of classical fluids. Electrolytes, molten salts, salts dissolved in water, are classical charged fluids. A classical fluid when cooled undergoes a freezing transition. On heating it undergoes an evaporation transition and becomes a classical gas that obeys Boltzmann statistics.
A system of charged classical particles moving in a uniform positive neutralizing background is known as a one-component plasma (OCP). This is well described by the Hyper-netted chain equation (see CHNC).
An essentially very accurate way of determining the properties of classical fluids is provided by the method of molecular dynamics.
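To make the last point concrete, the fragment below sketches the core of a molecular dynamics calculation for a classical fluid: a velocity-Verlet time step for particles interacting through a Lennard-Jones pair potential. It is only an illustration of the method (reduced units, no cutoff or periodic boundaries, and arbitrarily chosen initial conditions), not a production code:

```python
import numpy as np

def lj_forces(pos):
    """Lennard-Jones forces in reduced units (epsilon = sigma = 1), no cutoff."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = rij @ rij
            inv_r6 = 1.0 / r2**3
            f = 24.0 * (2.0 * inv_r6**2 - inv_r6) / r2 * rij   # -dU/dr along rij
            forces[i] += f
            forces[j] -= f
    return forces

def velocity_verlet(pos, vel, dt, n_steps):
    """Velocity-Verlet integration of Newton's equations for unit-mass particles."""
    forces = lj_forces(pos)
    for _ in range(n_steps):
        pos = pos + vel * dt + 0.5 * forces * dt**2
        new_forces = lj_forces(pos)
        vel = vel + 0.5 * (forces + new_forces) * dt
        forces = new_forces
    return pos, vel

# Eight particles on a small cubic lattice, with random initial velocities.
pos = 1.2 * np.array([[x, y, z] for x in range(2) for y in range(2) for z in range(2)], float)
vel = np.random.default_rng(0).normal(0.0, 0.5, size=pos.shape)
pos, vel = velocity_verlet(pos, vel, dt=0.002, n_steps=500)
print("kinetic energy per particle:", 0.5 * np.mean((vel**2).sum(axis=1)))
```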
An electron gas confined in a metal is not a classical fluid, whereas a very high-temperature plasma of electrons could behave as a classical fluid. Such non-classical Fermi systems, i.e., quantum fluids, can be studied using quantum Monte Carlo methods, Feynman path integral equation methods, and approximately via CHNC integral-equation methods.
See also
Bose–Einstein condensate
Fermi liquid
Many-body theory
Quantum fluid
|
https://en.wikipedia.org/wiki/A%20Disappearing%20Number
|
A Disappearing Number is a 2007 play co-written and devised by the Théâtre de Complicité company and directed and conceived by English playwright Simon McBurney. It was inspired by the collaboration during the 1910s between the pure mathematicians Srinivasa Ramanujan from India, and the Cambridge University don G.H. Hardy.
It was a co-production between the UK-based theatre company Complicite and Theatre Royal, Plymouth, and Ruhrfestspiele, Wiener Festwochen, and the Holland Festival. A Disappearing Number premiered in Plymouth in March 2007, toured internationally, and played at The Barbican Centre in Autumn 2007 and 2008 and at Lincoln Center in July 2010. It was directed by Simon McBurney with music by Nitin Sawhney. The production is 110 minutes with no intermission.
The piece was co-devised and written by the cast and company. The cast in order of appearance: Firdous Bamji, Saskia Reeves, David Annen, Paul Bhattacharjee, Shane Shambu, Divya Kasturi and Chetna Pandya.
Plot
Ramanujan first attracted Hardy's attention by writing him a letter in which he proved that
1 + 2 + 3 + 4 + ⋯ = −1/12 (ℜ),
where the notation (ℜ) indicates a Ramanujan summation.
Hardy realised that this confusing presentation of the series 1 + 2 + 3 + 4 + ⋯ was an application of the Riemann zeta function ζ(s), analytically continued, with s = −1, for which ζ(−1) = −1/12. Ramanujan's work became one of the foundations of bosonic string theory, a precursor of modern string theory.
The play includes live tabla playing, which "morphs seductively into pure mathematics", as the Financial Times review put it, "especially when … its rhythms shade into chants of number sequences reminiscent of the libretto to Philip Glass's Einstein on the Beach. One can hear the beauty of the sequences without grasping the rules that govern them."
The play has two strands of narrative and presents strong visual and physical theatre. It interweaves the passionate intellectual relationship between Hardy and the more intuitive Ramanujan, with the present-day story of Ruth, an English maths lecturer, and
|
https://en.wikipedia.org/wiki/Instruction%20window
|
An instruction window in computer architecture refers to the set of instructions which can execute out-of-order in a speculative processor.
In particular, in a conventional design, the instruction window consists of all instructions which are in the re-order buffer (ROB). In such a processor, any instruction within the instruction window can be executed when its operands are ready. Out-of-order processors derive their name because this may occur out-of-order (if operands to a younger instruction are ready before those of an older instruction).
The instruction window has a finite size, and new instructions can enter the window (usually called dispatch or allocate) only when other instructions leave the window (usually called retire or commit). Instructions enter and leave the instruction window in program order, and an instruction can only leave the window when it is the oldest instruction in the window and it has been completed. Hence, the instruction window can be seen as a sliding window in which the instructions can become out-of-order. All execution within the window is speculative (i.e., side-effects are not applied outside the CPU) until it is committed in order to support asynchronous exception handling like interrupts.
This paradigm is also known as restricted dataflow because instructions within the window execute in dataflow order (not necessarily in program order) but the window in which this occurs is restricted (of finite size).
The instruction window is distinct from pipelining: instructions in an in-order pipeline are not in an instruction window in the conventionally understood sense, because they cannot execute out of order with respect to one another. Out-of-order processors are usually built around pipelines, but many of the pipeline stages (e.g., front-end instruction fetch and decode stages) are not considered to be part of the instruction window.
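The following toy simulation illustrates the two rules described above: any instruction inside the window may execute once its source registers are ready, but instructions retire strictly in program order. The instruction format, register names and latencies are invented for the example and do not model any real microarchitecture:

```python
from collections import namedtuple

Instr = namedtuple("Instr", "name dests srcs latency")

program = [
    Instr("load r1", {"r1"}, set(),  3),   # long-latency load
    Instr("add r2",  {"r2"}, {"r1"}, 1),   # depends on the load
    Instr("mul r3",  {"r3"}, set(),  1),   # independent: may execute early
    Instr("sub r4",  {"r4"}, {"r3"}, 1),
]

WINDOW_SIZE = 4
ready_at = {}                 # register -> cycle at which its value is available
window = []                   # entries: [instr, completion_cycle or None]
pc, cycle, retired = 0, 0, []

while len(retired) < len(program):
    # Dispatch in program order while the window has space.
    while pc < len(program) and len(window) < WINDOW_SIZE:
        window.append([program[pc], None])
        pc += 1
    # Execute (possibly out of order) every instruction whose operands are ready.
    for entry in window:
        instr, done = entry
        if done is None and all(ready_at.get(s, float("inf")) <= cycle for s in instr.srcs):
            entry[1] = cycle + instr.latency
            for d in instr.dests:
                ready_at[d] = entry[1]
    # Retire (commit) in order: only the oldest completed instruction may leave.
    while window and window[0][1] is not None and window[0][1] <= cycle:
        retired.append((window.pop(0)[0].name, cycle))
    cycle += 1

print(retired)   # "mul r3" completes before "add r2" but retires after it
```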
See also
Superscalar processor
|
https://en.wikipedia.org/wiki/Ergodic%20process
|
In physics, statistics, econometrics and signal processing, a stochastic process is said to be in an ergodic regime if an observable's ensemble average equals the time average. In this regime, any collection of random samples from a process must represent the average statistical properties of the entire regime. Conversely, a process that is not in ergodic regime is said to be in non-ergodic regime.
Specific definitions
One can discuss the ergodicity of various statistics of a stochastic process. For example, a wide-sense stationary process X(t) has a constant mean
μ_X = E[X(t)]
and an autocovariance
r_X(τ) = E[(X(t) − μ_X)(X(t + τ) − μ_X)]
that depends only on the lag τ and not on time t. The properties μ_X and r_X(τ) are ensemble averages (calculated over all possible sample functions X), not time averages.
The process X(t) is said to be mean-ergodic or mean-square ergodic in the first moment
if the time average estimate
(1/T) ∫₀ᵀ X(t) dt
converges in squared mean to the ensemble average μ_X as T → ∞.
Likewise,
the process is said to be autocovariance-ergodic or ergodic in the second moment
if the time average estimate
(1/T) ∫₀ᵀ [X(t + τ) − μ_X][X(t) − μ_X] dt
converges in squared mean to the ensemble average r_X(τ), as T → ∞.
A process which is ergodic in the mean and autocovariance is sometimes called ergodic in the wide sense.
Discrete-time random processes
The notion of ergodicity also applies to discrete-time random processes X[n]
for integer n.
A discrete-time random process X[n] is ergodic in mean if the time average estimate
(1/N) Σ X[n], with the sum taken over n = 1, …, N,
converges in squared mean
to the ensemble average μ_X
as N → ∞.
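The sketch below is a self-contained numerical illustration (the stationary AR(1) process, its parameters and the sample sizes are arbitrary choices, not part of the article): it compares the time average of one long realization with the ensemble average over many independent realizations, which agree for a process that is ergodic in the mean:

```python
import numpy as np

def ar1(n_samples, phi=0.7, mean=2.0, rng=None):
    """Generate a stationary AR(1) process with the given mean (ergodic in the mean)."""
    rng = rng or np.random.default_rng()
    x = np.empty(n_samples)
    x[0] = mean + rng.normal() / np.sqrt(1 - phi**2)   # draw x[0] from the stationary law
    for n in range(1, n_samples):
        x[n] = mean + phi * (x[n - 1] - mean) + rng.normal()
    return x

rng = np.random.default_rng(42)

# Time average: one realization observed for a long time.
time_avg = ar1(200_000, rng=rng).mean()

# Ensemble average: many realizations observed at a single (late) time instant.
ensemble_avg = np.mean([ar1(200, rng=rng)[-1] for _ in range(5_000)])

print(f"time average     ≈ {time_avg:.3f}")
print(f"ensemble average ≈ {ensemble_avg:.3f}")   # both ≈ 2.0 for this ergodic process
```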
Examples
Ergodicity means the ensemble average equals the time average. Following are examples to illustrate this principle.
Call centre
Each operator in a call centre spends time alternately speaking and listening on the telephone, as well as taking breaks between calls. Each break and each call are of different length, as are the durations of each 'burst' of speaking and listening, and indeed so is the rapidity of speech at any given moment, which could each be modelled as a random process.
Take N call centre operators (N should be a very large integer) and plot the
|
https://en.wikipedia.org/wiki/Application-specific%20integrated%20circuit
|
An application-specific integrated circuit (ASIC ) is an integrated circuit (IC) chip customized for a particular use, rather than intended for general-purpose use, such as a chip designed to run in a digital voice recorder or a high-efficiency video codec. Application-specific standard product chips are intermediate between ASICs and industry standard integrated circuits like the 7400 series or the 4000 series. ASIC chips are typically fabricated using metal–oxide–semiconductor (MOS) technology, as MOS integrated circuit chips.
As feature sizes have shrunk and chip design tools improved over the years, the maximum complexity (and hence functionality) possible in an ASIC has grown from 5,000 logic gates to over 100 million. Modern ASICs often include entire microprocessors, memory blocks including ROM, RAM, EEPROM, flash memory and other large building blocks. Such an ASIC is often termed a SoC (system-on-chip). Designers of digital ASICs often use a hardware description language (HDL), such as Verilog or VHDL, to describe the functionality of ASICs.
Field-programmable gate arrays (FPGAs) are the modern-day technology improvement on breadboards, meaning that, unlike ASICs, they are not made to be application-specific. Programmable logic blocks and programmable interconnects allow the same FPGA to be used in many different applications. For smaller designs or lower production volumes, FPGAs may be more cost-effective than an ASIC design, even in production. The non-recurring engineering (NRE) cost of an ASIC can run into the millions of dollars. Therefore, device manufacturers typically prefer FPGAs for prototyping and devices with low production volume and ASICs for very large production volumes where NRE costs can be amortized across many devices.
History
Early ASICs used gate array technology. By 1967, Ferranti and Interdesign were manufacturing early bipolar gate arrays. In 1967, Fairchild Semiconductor introduced the Micromatrix family of bipolar diode–t
|
https://en.wikipedia.org/wiki/List%20of%20accelerators%20in%20particle%20physics
|
A list of particle accelerators used for particle physics experiments. Some early particle accelerators that more properly did nuclear physics, but existed prior to the separation of particle physics from that field, are also included. Although a modern accelerator complex usually has several stages of accelerators, only accelerators whose output has been used directly for experiments are listed.
Early accelerators
These all used single beams with fixed targets. They tended to run very brief, inexpensive, and unnamed experiments.
Cyclotrons
[1] The magnetic pole pieces and return yoke from the 60-inch cyclotron were later moved to UC Davis and incorporated into a 76-inch isochronous cyclotron which is still in use today
Other early accelerator types
Synchrotrons
Fixed-target accelerators
More modern accelerators that were also run in fixed target mode; often, they will also have been run as colliders, or accelerated particles for use in subsequently built colliders.
High intensity hadron accelerators (Meson and neutron sources)
Electron and low intensity hadron accelerators
Colliders
Electron–positron colliders
Hadron colliders
Electron-proton colliders
Light sources
Hypothetical accelerators
Besides the real accelerators listed above, there are hypothetical accelerators often used
as hypothetical examples or optimistic projects by particle physicists.
Eloisatron (Eurasiatic Long Intersecting Storage Accelerator) was a project of INFN headed by Antonio Zichichi at the Ettore Majorana Foundation and Centre for Scientific Culture in Erice, Sicily. The center-of-mass energy was planned to be 200 TeV, and the size was planned to span parts of Europe and Asia.
Fermitron was an accelerator sketched by Enrico Fermi on a notepad in the 1940s proposing an accelerator in stable orbit around the Earth.
The undulator radiation collider is a design for an accelerator with a center-of-mass energy around the GUT scale. It would be light-weeks across a
|
https://en.wikipedia.org/wiki/Seccomp
|
seccomp (short for secure computing mode) is a computer security facility in the Linux kernel. seccomp allows a process to make a one-way transition into a "secure" state where it cannot make any system calls except exit(), sigreturn(), read() and write() to already-open file descriptors. Should it attempt any other system calls, the kernel will either just log the event or terminate the process with SIGKILL or SIGSYS. In this sense, it does not virtualize the system's resources but isolates the process from them entirely.
seccomp mode is enabled via the prctl() system call using the PR_SET_SECCOMP argument, or (since Linux kernel 3.17) via the seccomp() system call. seccomp mode used to be enabled by writing to a file, /proc/self/seccomp, but this method was removed in favor of prctl(). In some kernel versions, seccomp disables the RDTSC x86 instruction, which returns the number of elapsed processor cycles since power-on, used for high-precision timing.
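A minimal, Linux-only demonstration of the original strict mode is sketched below (this is not from the kernel documentation; the constants are the values defined in <linux/prctl.h> and <linux/seccomp.h>, and the child process is expected to be killed as soon as it, or the Python runtime on its behalf, makes any system call other than the four allowed ones):

```python
import ctypes, os

PR_SET_SECCOMP = 22       # from <linux/prctl.h>
SECCOMP_MODE_STRICT = 1   # from <linux/seccomp.h>: only read, write, _exit, sigreturn

libc = ctypes.CDLL(None, use_errno=True)

pid = os.fork()
if pid == 0:
    # Child: make the one-way transition into strict seccomp mode.
    if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
        os._exit(1)                                  # seccomp not available here
    os.write(1, b"write() to an open fd still works\n")
    os.open("/etc/hostname", os.O_RDONLY)            # any other syscall -> SIGKILL
    os.write(1, b"never reached\n")
else:
    _, status = os.waitpid(pid, 0)
    print("child killed by signal:", os.WTERMSIG(status))   # expected: 9 (SIGKILL)
```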
seccomp-bpf is an extension to seccomp that allows filtering of system calls using a configurable policy implemented using Berkeley Packet Filter rules. It is used by OpenSSH and vsftpd as well as the Google Chrome/Chromium web browsers on ChromeOS and Linux. (In this regard seccomp-bpf achieves similar functionality, but with more flexibility and higher performance, to the older systrace—which seems to be no longer supported for Linux.)
Some consider seccomp comparable to OpenBSD pledge(2) and FreeBSD capsicum(4).
History
seccomp was first devised by Andrea Arcangeli in January 2005 for use in public grid computing and was originally intended as a means of safely running untrusted compute-bound programs. It was merged into the Linux kernel mainline in kernel version 2.6.12, which was released on March 8, 2005.
Software using seccomp or seccomp-bpf
Android uses a seccomp-bpf filter in the zygote since Android 8.0 Oreo.
systemd's sandboxing options are based on seccomp.
QEMU, the Quick Emulator, the core component to the
|
https://en.wikipedia.org/wiki/Beta%20encoder
|
A beta encoder is an analog-to-digital conversion (A/D) system in which a real number in the unit interval is represented by a finite representation of a sequence in base beta, with beta being a real number between 1 and 2. Beta encoders are an alternative to traditional approaches to pulse-code modulation.
As a form of non-integer representation, beta encoding contrasts with traditional approaches to binary quantization, in which each value is mapped to the first N bits of its base-2 expansion. Rather than using base 2, beta encoders use base beta as a beta-expansion.
In practice, beta encoders have attempted to exploit the redundancy provided by the non-uniqueness of the expansion in base beta to produce more robust results. An early beta encoder, the Golden ratio encoder, used the golden ratio base for its value of beta, but was susceptible to hardware errors. Although integrator leaks in hardware elements make some beta encoders imprecise, specific algorithms can be used to provide exponentially accurate approximations for the value of beta, despite the imprecise results provided by some circuit components.
An alternative design called the negative beta encoder (called so due to the negative eigenvalue of the transition probability matrix) has been proposed to further reduce the quantization error.
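A simple software model of the idea is sketched below. The greedy expansion shown is one standard choice of bit rule; real beta encoders realize comparable recursions in analog hardware, and the parameter values here (beta = 1.8, 24 bits, the test input x) are arbitrary:

```python
def beta_encode(x, beta, n_bits):
    """Greedy expansion of x in [0, 1) in base beta, 1 < beta < 2: x ≈ Σ b_i · beta^(-i)."""
    bits, u = [], x
    for _ in range(n_bits):
        u *= beta
        b = 1 if u >= 1.0 else 0   # emit a 1 whenever the scaled remainder reaches 1
        u -= b
        bits.append(b)
    return bits

def beta_decode(bits, beta):
    return sum(b * beta ** -(i + 1) for i, b in enumerate(bits))

beta, x = 1.8, 0.3721
bits = beta_encode(x, beta, n_bits=24)
print(bits[:8], "...")
print("reconstruction error:", abs(x - beta_decode(bits, beta)))   # below beta**-24
```

Because the expansion in a non-integer base is not unique, small errors in the comparison threshold still produce a bit stream that decodes close to x, which is the robustness property mentioned above.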
See also
Pulse-code modulation
Quantization (signal processing)
Sampling (signal processing)
|
https://en.wikipedia.org/wiki/Mathematics%20of%20paper%20folding
|
The discipline of origami or paper folding has received a considerable amount of mathematical study. Fields of interest include a given paper model's flat-foldability (whether the model can be flattened without damaging it), and the use of paper folds to solve mathematical equations of degree up to three.
Computational origami is a recent branch of computer science that is concerned with studying algorithms that solve paper-folding problems. The field of computational origami has also grown significantly since its inception in the 1990s with Robert Lang's TreeMaker algorithm to assist in the precise folding of bases. Computational origami results either address origami design or origami foldability. In origami design problems, the goal is to design an object that can be folded out of paper given a specific target configuration. In origami foldability problems, the goal is to fold something using the creases of an initial configuration. Results in origami design problems have been more accessible than in origami foldability problems.
History
In 1893, Indian civil servant T. Sundara Row published Geometric Exercises in Paper Folding which used paper folding to demonstrate proofs of geometrical constructions. This work was inspired by the use of origami in the kindergarten system. Row demonstrated an approximate trisection of angles and implied construction of a cube root was impossible.
In 1922, Harry Houdini published "Houdini's Paper Magic," which described origami techniques that drew informally from mathematical approaches that were later formalized.
In 1936 Margharita P. Beloch showed that use of the 'Beloch fold', later used in the sixth of the Huzita–Hatori axioms, allowed the general cubic equation to be solved using origami.
In 1949, R C Yeates' book "Geometric Methods" described three allowed constructions corresponding to the first, second, and fifth of the Huzita–Hatori axioms.
The Yoshizawa–Randlett system of instruction by diagram was introduced in 1961.
I
|
https://en.wikipedia.org/wiki/Floral%20formula
|
A floral formula is a notation for representing the structure of particular types of flowers. Such notations use numbers, letters and various symbols to convey significant information in a compact form. They may represent the floral form of a particular species, or may be generalized to characterize higher taxa, usually giving ranges of numbers of organs. Floral formulae are one of the two ways of describing flower structure developed during the 19th century, the other being floral diagrams. The format of floral formulae differs according to the tastes of particular authors and periods, yet they tend to convey the same information.
A floral formula is often used along with a floral diagram.
History
Floral formulae were developed at the beginning of the 19th century. The first authors using them were Cassel (1820) who first devised lists of integers to denote numbers of parts in named whorls; and Martius (1828). Grisebach (1854) used 4-integer series to represent the 4 whorls of floral parts in his textbook to describe characteristics of floral families, stating numbers of different organs separated by commas and highlighting fusion. Sachs (1873) used them together with floral diagrams, he noted their advantage of being composed of "ordinary typeface".
Although Eichler widely used floral diagrams in his Blüthendiagramme, he used floral formulae sparingly, mainly for families with simple flowers. Sattler's Organogenesis of Flowers (1973) takes advantage of floral formulae and diagrams to describe the ontogeny of 50 plant species. Newer books containing formulae include Plant Systematics by Judd et al. (2002) and Simpson (2010). Prenner et al. devised an extension of the existing model to broaden the descriptive capability of the formula and argued that formulae should be included in formal taxonomic descriptions. Ronse De Craene (2010) partially utilized their way of writing the formulae in his book Floral Diagrams.
Contained information
Organ numbers and fus
|
https://en.wikipedia.org/wiki/Bode%20plot
|
In electrical engineering and control theory, a Bode plot is a graph of the frequency response of a system. It is usually a combination of a Bode magnitude plot, expressing the magnitude (usually in decibels) of the frequency response, and a Bode phase plot, expressing the phase shift.
As originally conceived by Hendrik Wade Bode in the 1930s, the plot is an asymptotic approximation of the frequency response, using straight line segments.
Overview
Among his several important contributions to circuit theory and control theory, engineer Hendrik Wade Bode, while working at Bell Labs in the 1930s, devised a simple but accurate method for graphing gain and phase-shift plots. These bear his name, Bode gain plot and Bode phase plot. "Bode" is often pronounced "BOH-dee" in English, although the Dutch pronunciation is "BOH-duh".
Bode was faced with the problem of designing stable amplifiers with feedback for use in telephone networks. He developed the graphical design technique of the Bode plots to show the gain margin and phase margin required to maintain stability under variations in circuit characteristics caused during manufacture or during operation. The principles developed were applied to design problems of servomechanisms and other feedback control systems. The Bode plot is an example of analysis in the frequency domain.
Definition
The Bode plot for a linear, time-invariant system with transfer function H(s) (s being the complex frequency in the Laplace domain) consists of a magnitude plot and a phase plot.
The Bode magnitude plot is the graph of the function |H(s = jω)| of frequency ω (with j being the imaginary unit). The ω-axis of the magnitude plot is logarithmic and the magnitude is given in decibels, i.e., a value for the magnitude |H| is plotted on the axis at 20 log₁₀|H|.
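For instance (an illustrative numerical sketch, not part of the original formulation; the first-order low-pass transfer function and corner frequency are assumptions made for the example), the decibel magnitude and phase of H(s) = 1 / (1 + s/ωc) can be evaluated as follows:

```python
import numpy as np

def bode_points(omega, omega_c=100.0):
    """Magnitude (dB) and phase (degrees) of H(s) = 1 / (1 + s/omega_c) at s = j*omega."""
    H = 1.0 / (1.0 + 1j * omega / omega_c)
    magnitude_db = 20.0 * np.log10(np.abs(H))
    phase_deg = np.degrees(np.angle(H))
    return magnitude_db, phase_deg

# Logarithmically spaced frequencies, as on the Bode plot's omega-axis.
omega = np.logspace(0, 4, 9)
for w, m, p in zip(omega, *bode_points(omega)):
    print(f"omega = {w:8.1f} rad/s   |H| = {m:7.2f} dB   phase = {p:7.2f} deg")
# At omega = omega_c the magnitude is about -3 dB and the phase about -45 degrees.
```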
The Bode phase plot is the graph of the phase, commonly expressed in degrees, of the transfer function arg(H(s = jω)) as a function of ω. The phase is plotted on the same logarithmic ω-axis as the magnitude plot, but the value for the phase is pl
|
https://en.wikipedia.org/wiki/RF%20CMOS
|
RF CMOS is a metal–oxide–semiconductor (MOS) integrated circuit (IC) technology that integrates radio-frequency (RF), analog and digital electronics on a mixed-signal CMOS (complementary MOS) RF circuit chip. It is widely used in modern wireless telecommunications, such as cellular networks, Bluetooth, Wi-Fi, GPS receivers, broadcasting, vehicular communication systems, and the radio transceivers in all modern mobile phones and wireless networking devices. RF CMOS technology was pioneered by Pakistani engineer Asad Ali Abidi at UCLA during the late 1980s to early 1990s, and helped bring about the wireless revolution with the introduction of digital signal processing in wireless communications. The development and design of RF CMOS devices was enabled by van der Ziel's FET RF noise model, which was published in the early 1960s and remained largely forgotten until the 1990s.
History
Pakistani engineer Asad Ali Abidi, while working at Bell Labs and then UCLA during the 1980s–1990s, pioneered radio research in metal–oxide–semiconductor (MOS) technology and made seminal contributions to radio architecture based on complementary MOS (CMOS) switched-capacitor (SC) technology. In the early 1980s, while working at Bell, he worked on the development of sub-micron MOSFET (MOS field-effect transistor) VLSI (very large-scale integration) technology, and demonstrated the potential of sub-micron NMOS integrated circuit (IC) technology in high-speed communication circuits. Abidi's work was initially met with skepticism from proponents of GaAs and bipolar junction transistors, the dominant technologies for high-speed communication circuits at the time. In 1985 he joined the University of California, Los Angeles (UCLA), where he pioneered RF CMOS technology during the late 1980s to early 1990s. His work changed the way in which RF circuits would be designed, away from discrete bipolar transistors and towards CMOS integrated circuits.
Abidi was researching analog CMOS circuits for s
|
https://en.wikipedia.org/wiki/GreenPAK
|
GreenPAK™ is a family of mixed-signal integrated circuits and development tools from Renesas Electronics. GreenPAK circuits are classified as configurable mixed-signal ICs. This category is characterized by analog and digital blocks that can be configured through programmable non-volatile memory. These devices also have a "Connection Matrix", which supports routing signals between the various blocks. These devices can include multiple components within a single IC.
Also, the company developed the Go Configure™ Software Hub for IC design creation, chip emulation, and programming.
History
The GreenPAK technology was developed by Silego Technology Inc., a company established in 2001. The GreenPAK product line was introduced in April 2010, with the release of the first generation of ICs. Silego was later acquired by Dialog Semiconductor PLC in 2017. The trademark for the GreenPAK name was officially registered in 2019.
The sixth generation of GreenPAK ICs is currently on the market. Over 6 billion GreenPAK ICs have been shipped to Dialog's customers all over the world.
In 2021, Dialog was acquired by Renesas Electronics, therefore the GreenPAK technology is currently officially owned by Renesas.
GreenPAK Integrated Circuits
There are a few categories of ICs developed within the GreenPAK technology:
Dual Supply GreenPAK – provides level translation from higher or lower voltage domains.
GreenPAK with Power Switches – includes single and dual power switches up to 2A.
GreenPAK with Asynchronous State Machine – allows developing customized state machine IC designs.
GreenPAK with Low Power Dropout Regulators – enables a user to divide power loads using the unique concept of "Flexible Power Islands" devoted to wearable devices.
GreenPAK with In-System Programmability – can be reprogrammed up to 1000 times using the I2C serial interface.
Automotive GreenPAK – allows multiple system functions in a single IC used for automotive circuit designs.
|
https://en.wikipedia.org/wiki/Network-neutral%20data%20center
|
A network-neutral data center (or carrier-neutral data center) is a data center (or carrier hotel) which allows interconnection between multiple telecommunication carriers and/or colocation providers. Network-neutral data centers exist all over the world and vary in size and power.
While some data centers are owned and operated by a telecommunications or Internet service provider, the majority of network-neutral data centers are operated by a third party who has little or no part in providing Internet service to the end-user. This encourages competition and diversity, as a server in a colocation centre can have one provider, multiple providers, or only connect back to the headquarters of the company who owns the server. It has become increasingly common for telecommunication operators to provide network-neutral data centers.
One benefit of hosting in a network-neutral data center is the ability to switch providers without physically moving the server to another location.
|
https://en.wikipedia.org/wiki/System%20requirements%20specification
|
A System Requirements Specification (SyRS) (abbreviated SysRS to be distinct from a software requirements specification (SRS)) is a structured collection of information that embodies the requirements of a system.
A business analyst (BA), sometimes titled system analyst, is responsible for analyzing the business needs of their clients and stakeholders to help identify business problems and propose solutions. Within the systems development life cycle domain, the BA typically performs a liaison function between the business side of an enterprise and the information technology department or external service providers.
See also
Business analysis
Business process reengineering
Business requirements
Concept of operations
Data modeling
Information technology
Process modeling
Requirement
Requirements analysis
Software requirements specification
Systems analysis
Use case
|
https://en.wikipedia.org/wiki/Wahoo%20Fitness
|
Wahoo Fitness is a fitness technology company based in Atlanta. Its CEO is Mike Saturnia. Founded in 2009 by Chip Hawkins, Wahoo Fitness has offices in London, Berlin, Tokyo, Boulder and Brisbane.
Wahoo's portfolio of cycling industry products includes the KICKR family of Indoor Cycling Trainers and Accessories, the ELEMNT family of GPS Cycling Computers and sport watches, the TICKR family of Heart Rate Monitors, SPEEDPLAY Advanced Road Pedal systems and the Wahoo SYSTM Training App.
Main products
Indoor trainers & smart bikes
KICKR Direct Drive Smart Trainer
KICKR CORE Direct Drive Smart Trainer
KICKR SNAP Wheel-On Smart Trainer
KICKR BIKE Indoor Smart Bike
KICKR ROLLR Smart Trainer
GPS cycling computers & smart watches
ELEMNT ROAM GPS Bike Computer
ELEMNT BOLT GPS Bike Computer
ELEMNT RIVAL GPS Multisport Watch
Heart rate monitors
TICKR Heart Rate Monitor
TICKR FIT Heart Rate Armband
TICKR X Heart Rate Monitor
Cycling sensors
RPM Cadence Sensor
RPM Speed Sensor
RPM Sensor Bundle
BLUE SC Speed and Cadence Sensor
Indoor training accessories
KICKR HEADWIND Smart Fan
KICKR CLIMB Indoor Grade Simulator
KICKR AXIS Action Feet
KICKR Indoor Training Desk
KICKR Floormat
Pedals
POWRLINK ZERO Power Pedal System
SPEEDPLAY AERO Stainless Steel Aerodynamic Road Pedals
SPEEDPLAY NANO Titanium Road Pedals
SPEEDPLAY ZERO Stainless Steel Road Pedals
SPEEDPLAY COMP Chromoly Road Pedals
Standard Tension Cleat
Easy Tension Cleat
Training
Wahoo SYSTM Training App
Acquisitions
September 2019 – Pedal manufacturer, Speedplay
July 2019 – Indoor training platform, The Sufferfest, later rebranded to Wahoo SYSTM
April 2022 – Indoor training platform, RGT (Road Grand Tour) later rebranded to Wahoo RGT
Funding and investment
2010 – Private Investment
July 2018 – Norwest Equity Partners
Q3 2021 – Rhône Group
May 17, 2023 – Wahoo announced that its founder had bought the company back from the banks
Team sponsorship
Wahoo is an official sponsor for:
Women's cy
|
https://en.wikipedia.org/wiki/Ultra-low-voltage%20processor
|
Ultra-low-voltage processors (ULV processors) are a class of microprocessors that are deliberately underclocked to consume less power (typically 17 W or below), at the expense of performance.
These processors are commonly used in subnotebooks, netbooks, ultraportables and embedded devices, where low heat dissipation and long battery life are required.
Notable examples
Intel Atom – Up to 2.0 GHz at 2.4 W (Z550)
Intel Pentium M – Up to 1.3 GHz at 5 W (ULV 773)
Intel Core 2 Solo – Up to 1.4 GHz at 5.5 W (SU3500)
Intel Core Solo – Up to 1.3 GHz at 5.5 W (U1500)
Intel Celeron M – Up to 1.2 GHz at 5.5 W (ULV 722)
VIA Eden – Up to 1.5 GHz at 7.5 W
VIA C7 – Up to 1.6 GHz at 8 W (C7-M ULV)
VIA Nano – Up to 1.3 GHz at 8 W (U2250)
AMD Athlon Neo – Up to 1 GHz at 8 W (Sempron 200U)
AMD Geode – Up to 1 GHz at 9 W (NX 1500)
Intel Core 2 Duo – Up to 1.3 GHz at 10 W (U7700)
Intel Core i3/i5/i7 – Up to 1.5 GHz at 13 W (Core i7 3689Y)
AMD A Series – Up to 3.2 GHz at 15 W (A10-7300P)
See also
Consumer Ultra-Low Voltage – a low power platform developed by Intel
|
https://en.wikipedia.org/wiki/Cranial%20evolutionary%20allometry
|
Cranial evolutionary allometry (CREA) is a scientific theory regarding trends in the shape of mammalian skulls during the course of evolution in accordance with body size (i.e., allometry). Specifically, the theory posits that there is a propensity among closely related mammalian groups for the skulls of the smaller species to be short and those of the larger species to be long. This propensity appears to hold true for placental as well as non-placental mammals, and is highly robust. Examples of groups which exhibit this characteristic include antelopes, fruit bats, mongooses, squirrels and kangaroos as well as felids.
It is believed that the reason for this trend has to do with size-related constraints on the formation and development of the mammalian skull. Facial length is one of the best known examples of heterochrony.
|
https://en.wikipedia.org/wiki/Molecular%20gastronomy
|
Molecular gastronomy is the scientific approach to cuisine, primarily from the perspective of chemistry. The composition (molecular structure), properties (mass, viscosity, etc.) and transformations (chemical reactions, reaction products) of an ingredient are addressed and utilized in the preparation and appreciation of the ingested products. It is a branch of food science that approaches the preparation and enjoyment of nutrition from the perspective of a scientist at the scale of atoms, molecules, and mixtures.
The Hungarian physicist Nicholas Kurti and Hervé This, of INRA in France, coined the term "Molecular and Physical Gastronomy" in 1988.
Examples
Eponymous recipes
New dishes named after famous scientists include:
Gibbs – infusing vanilla pods in egg white with sugar, adding olive oil and then microwave cooking. Named after physicist Josiah Willard Gibbs (1839–1903).
Vauquelin – using orange juice or cranberry juice with added sugar when whipping eggs to increase the viscosity and to stabilize the foam, and then microwave cooking. Named after Nicolas Vauquelin (1763–1829), one of Lavoisier's teachers.
Baumé – soaking a whole egg for a month in alcohol to create a coagulated egg. Named after the French chemist Antoine Baumé (1728–1804).
History
There are many branches of food science that study different aspects of food, such as safety, microbiology, preservation, chemistry, engineering, and physics. Until the advent of molecular gastronomy, there was no branch dedicated to studying the chemical processes of cooking in the home and in restaurants. Food science has primarily been concerned with industrial food production and, while the disciplines may overlap, they are considered separate areas of investigation.
The creation of the discipline of molecular gastronomy was intended to bring together what had previously been fragmented and isolated investigations into the chemical and physical processes of cooking into an organized discipline within food science, to
|
https://en.wikipedia.org/wiki/SpiNNaker
|
SpiNNaker (spiking neural network architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group (APT) at the Department of Computer Science, University of Manchester. It is composed of 57,600 processing nodes, each with 18 ARM9 processors (specifically ARM968) and 128 MB of mobile DDR SDRAM, totalling 1,036,800 cores and over 7 TB of RAM. The computing platform is based on spiking neural networks, useful in simulating the human brain (see Human Brain Project).
The completed design is housed in 10 19-inch racks, with each rack holding over 100,000 cores. The cards holding the chips are held in 5 blade enclosures, and each core emulates 1,000 neurons. In total, the goal is to simulate the behaviour of aggregates of up to a billion neurons in real time. This machine requires about 100 kW from a 240 V supply and an air-conditioned environment.
SpiNNaker is being used as one component of the neuromorphic computing platform for the Human Brain Project.
On 14 October 2018 the HBP announced that the million core milestone had been achieved.
On 24 September 2019 HBP announced that an 8 million euro grant, that will fund construction of the second generation machine, (called SpiNNcloud) has been given to TU Dresden.
|
https://en.wikipedia.org/wiki/List%20of%20alternative%20set%20theories
|
In mathematical logic, an alternative set theory is any of the alternative mathematical approaches to the concept of set and any alternative to the de facto standard set theory described in axiomatic set theory by the axioms of Zermelo–Fraenkel set theory.
Alternative set theories
Alternative set theories include:
Vopěnka's alternative set theory
Von Neumann–Bernays–Gödel set theory
Morse–Kelley set theory
Tarski–Grothendieck set theory
Ackermann set theory
Type theory
New Foundations
Positive set theory
Internal set theory
Naive set theory
S (set theory)
Kripke–Platek set theory
Scott–Potter set theory
Constructive set theory
Zermelo set theory
General set theory
See also
Non-well-founded set theory
Notes
|
https://en.wikipedia.org/wiki/Mathematics%20Subject%20Classification
|
The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020.
Structure
The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used.
The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example:
53 is the classification for differential geometry
53A is the classification for classical differential geometry
53A45 is the classification for vector and tensor analysis
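This three-level structure is easy to check mechanically. The fragment below is a casual sketch, not an official validator: it splits a code like the example above into its levels and deliberately ignores special second-level codes such as "-" used for general reference works:

```python
import re

# Matches the two-, three- and five-character forms described above,
# e.g. "53", "53A" and "53A45".
MSC_CODE = re.compile(r"^(\d{2})(?:([A-Z])(\d{2})?)?$")

def split_msc(code):
    m = MSC_CODE.match(code)
    if not m:
        raise ValueError(f"not a recognized MSC code: {code!r}")
    first, second, third = m.groups()
    return {"first_level": first, "second_level": second, "third_level": third}

print(split_msc("53"))      # differential geometry
print(split_msc("53A"))     # classical differential geometry
print(split_msc("53A45"))   # vector and tensor analysis
```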
First level
At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including:
Fluid mechanics
Quantum mechanics
Geophysics
Optics and electromagnetic theory
All valid MSC classification codes must have at least the first-level identifier.
Second level
The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline.
For example, for differential geometry, the top-level code is 53, and the second-level codes are:
A for classical differential geometry
B for local differential geometry
C for glo
|
https://en.wikipedia.org/wiki/Hydrobiology
|
Hydrobiology is the science of life and life processes in water. Much of modern hydrobiology can be viewed as a sub-discipline of ecology but the sphere of hydrobiology includes taxonomy, economic and industrial biology, morphology, and physiology. The one distinguishing aspect is that all fields relate to aquatic organisms. Most work is related to limnology and can be divided into lotic system ecology (flowing waters) and lentic system ecology (still waters).
One of the significant areas of current research is eutrophication. Special attention is paid to biotic interactions in plankton assemblages, including the microbial loop, the mechanisms influencing algal blooms, phosphorus loading, and lake turnover. Another subject of research is the acidification of mountain lakes. Long-term studies are carried out on changes in the ionic composition of the water of rivers, lakes and reservoirs in connection with acid rain and fertilization. One goal of current research is to elucidate the basic environmental functions of reservoir ecosystems, which are important for water quality management and water supply.
Much of the early work of hydrobiologists concentrated on the biological processes used in sewage treatment and water purification, especially slow sand filters. Other historically important work sought to provide biotic indices for classifying waters according to the biotic communities that they supported. This work continues to this day in Europe in the development of classification tools for assessing water bodies for the EU Water Framework Directive.
A hydrobiology technician conducts field analyses. They identify plants and other living species, locate their habitats, and count them. They also identify pollutants and nuisances that can affect the aquatic fauna and flora. They collect samples and write reports of their observations for publication.
A hydrobiology engineer is involved more in the study process itself. They define the inte
|
https://en.wikipedia.org/wiki/Shaheen-III
|
The Shaheen-III (lit. Falcon) is a supersonic, land-based medium-range ballistic missile, first test-fired by the military on 9 March 2015.
Development began in secrecy in the early 2000s in response to India's Agni-III. The Shaheen-III was successfully tested on 9 March 2015 with a range of , which enables it to strike all of India and reach deep into the Middle East and parts of North Africa. The Shaheen-III, according to its program manager, the Strategic Plans Division, is "18 times faster than the speed of sound" and designed to reach the Indian islands of Andaman and Nicobar so that India cannot use them as "strategic bases" to establish a second-strike capability.
The Shaheen program is based on solid-fuel systems, in contrast to the Ghauri program, which is primarily based on liquid fuel. With the successful launch of the Shaheen-III, it surpasses the range of the Shaheen-II and is therefore the longest-range missile launched by the military.
The Pakistani military has not commented on its deployment, but the Shaheen-III is currently deemed operational within the strategic command of the Pakistan Army.
Overview
Development history
Development of a long-range space launch vehicle began in 1999 with the aim of rocket engines reaching a range of to . The Indian military had moved its strategic commands to the east, and the range of was determined by the need to be able to target the Nicobar and Andaman Islands in the eastern part of the Indian Ocean, which are "developed as strategic bases" where the "Indian military might think of putting its weapons", according to Shaheen-III's program manager, the Strategic Plans Division. With this mission, the Shaheen-III was actively pursued alongside the Ghauri-III.
In 2000, the Space Research Commission concluded at least two design studies for its space launch vehicle. Initially, two earlier designs were shown at IDEAS, held in 2002, and its design was centered on develo
|