https://en.wikipedia.org/wiki/Outline%20of%20Gottfried%20Wilhelm%20Leibniz
The following outline is provided as an overview of and topical guide to Gottfried Wilhelm Leibniz: Gottfried Wilhelm (von) Leibniz (1 July 1646 [O.S. 21 June] – 14 November 1716); German polymath, philosopher, logician, and mathematician. He developed differential and integral calculus at about the same time as, and independently of, Isaac Newton. Leibniz earned his keep as a lawyer, diplomat, librarian, and genealogist for the House of Hanover, and contributed to diverse areas. His impact continues to reverberate, especially his original contributions in logic and binary representations. Achievements and contributions Devices Leibniz calculator Logic Alphabet of human thought Calculus ratiocinator Mathematics Calculus General Leibniz rule Leibniz formula for π Leibniz integral rule Philosophy Best of all possible worlds Characteristica universalis Identity of indiscernibles Pre-established harmony Principle of sufficient reason Physics Personal life Leibniz's political views Leibniz's religious views Family Major works by Leibniz De Arte Combinatoria Discourse on Metaphysics (text at Wikisource) Monadology (text at Wikisource) New Essays on Human Understanding Nova Methodus pro Maximis et Minimis Protogaea Théodicée Manuscript archives and translations of Leibniz's works Leibniz Archive (Hannover) at the Leibniz Research Center - Hannover Leibniz Archive (Potsdam) at the Brandenburg Academy of Humanities and Sciences Leibniz Archive (Münster), Leibniz-Forschungsstelle Münster digital edition Leibniz Archive (Berlin), digital edition Donald Rutherford's translations at UCSD Lloyd Strickland's translations at leibniz-translations.com Journals focused on Leibniz studies The Leibniz Review Studia Leibnitiana Organizations named after Leibniz Leibniz Association Leibniz College, affiliated with the University of Tübingen Leibniz Institute of European History Leibniz Institute for Polymer Research Leibniz Society of Nor
https://en.wikipedia.org/wiki/Remanence
Remanence or remanent magnetization or residual magnetism is the magnetization left behind in a ferromagnetic material (such as iron) after an external magnetic field is removed. Colloquially, when a magnet is "magnetized", it has remanence. The remanence of magnetic materials provides the magnetic memory in magnetic storage devices, and is used as a source of information on the past Earth's magnetic field in paleomagnetism. The word remanence is from remanent + -ence, meaning "that which remains". The equivalent term residual magnetization is generally used in engineering applications. In transformers, electric motors and generators a large residual magnetization is not desirable (see also electrical steel), as it is an unwanted contamination, for example a magnetization remaining in an electromagnet after the current in the coil is turned off. Where it is unwanted, it can be removed by degaussing. Sometimes the term retentivity is used for remanence measured in units of magnetic flux density. Types Saturation remanence The default definition of magnetic remanence is the magnetization remaining in zero field after a large magnetic field is applied (enough to achieve saturation). The magnetic hysteresis loop is measured using instruments such as a vibrating-sample magnetometer, and the zero-field intercept is a measure of the remanence. In physics this measure is converted to an average magnetization (the total magnetic moment divided by the volume of the sample) and denoted in equations as Mr. If it must be distinguished from other kinds of remanence, then it is called the saturation remanence or saturation isothermal remanence (SIRM) and denoted by Mrs. In engineering applications the residual magnetization is often measured using a B-H analyzer, which measures the response to an AC magnetic field. This is represented by a flux density Br. This value of remanence is one of the most important parameters characterizing permanent ma
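A minimal numerical sketch of the zero-field-intercept idea described above; the field and magnetization arrays are hypothetical measurement data, not values from the article.

```python
import numpy as np

# Estimate remanence M_r as the zero-field intercept of the descending branch
# of a (hypothetical) measured hysteresis loop after saturation.
H = np.array([800, 400, 100, 0, -50, -200, -400])          # applied field (A/m)
M = np.array([1.00, 0.98, 0.90, 0.82, 0.75, 0.40, -0.60])  # magnetization (normalized)

# np.interp needs increasing x, so reverse the descending branch.
M_r = np.interp(0.0, H[::-1], M[::-1])
print(f"Estimated remanence M_r = {M_r:.2f}")
```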
https://en.wikipedia.org/wiki/Protistology
Protistology is a scientific discipline devoted to the study of protists, a highly diverse group of eukaryotic organisms. All eukaryotes apart from animals, plants and fungi are considered protists. Its field of study therefore overlaps with the more traditional disciplines of phycology, mycology, and protozoology, just as protists embrace mostly unicellular organisms described as algae, some organisms regarded previously as primitive fungi, and protozoa ("animal" motile protists lacking chloroplasts). They are a paraphyletic group with very diverse morphologies and lifestyles. Their sizes range from unicellular picoeukaryotes only a few micrometres in diameter to multicellular marine algae several metres long. History The history of the study of protists has its origins in the 17th century. Since the beginning, the study of protists has been intimately linked to developments in microscopy, which have allowed important advances in the understanding of these organisms due to their generally microscopic nature. Among the pioneers was Anton van Leeuwenhoek, who observed a variety of free-living protists and in 1674 named them “very little animalcules”. During the 18th century studies on the Infusoria were dominated by Christian Gottfried Ehrenberg and Félix Dujardin. The term "protozoology" has become dated as understanding of the evolutionary relationships of the eukaryotes has improved, and is frequently replaced by the term "protistology". For example, the Society of Protozoologists, founded in 1947, was renamed International Society of Protistologists in 2005. However, the older term is retained in some cases (e.g., the Polish journal Acta Protozoologica). Journals and societies Dedicated academic journals include: Archiv für Protistenkunde, 1902-1998, Germany (renamed Protist, 1998-); Archives de la Societe Russe de Protistologie, 1922-1928, Russia; Journal of Protozoology, 1954-1993, USA (renamed Journal of Eukaryotic Microbiology, 1993-); Acta Protoz
https://en.wikipedia.org/wiki/Ergodic%20hypothesis
In physics and thermodynamics, the ergodic hypothesis says that, over long periods of time, the time spent by a system in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e., that all accessible microstates are equiprobable over a long period of time. Liouville's theorem states that, for a Hamiltonian system, the local density of microstates following a particle path through phase space is constant as viewed by an observer moving with the ensemble (i.e., the convective time derivative is zero). Thus, if the microstates are uniformly distributed in phase space initially, they will remain so at all times. But Liouville's theorem does not imply that the ergodic hypothesis holds for all Hamiltonian systems. The ergodic hypothesis is often assumed in the statistical analysis of computational physics. The analyst would assume that the average of a process parameter over time and the average over the statistical ensemble are the same. This assumption—that it is as good to simulate a system over a long time as it is to make many independent realizations of the same system—is not always correct. (See, for example, the Fermi–Pasta–Ulam–Tsingou experiment of 1953.) Assumption of the ergodic hypothesis allows proof that certain types of perpetual motion machines of the second kind are impossible. Systems that are ergodic are said to have the property of ergodicity; a broad range of systems in geometry, physics, and probability are ergodic. Ergodic systems are studied in ergodic theory. Phenomenology In macroscopic systems, the timescales over which a system can truly explore the entirety of its own phase space can be sufficiently large that the thermodynamic equilibrium state exhibits some form of ergodicity breaking. A common example is that of spontaneous magnetisation in ferromagnetic systems, whereby below the Curie temperature the system preferentially adopts a non-zero magnetisation even though the er
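The following sketch illustrates the time-average-equals-ensemble-average idea in the simplest possible setting; the AR(1) process and all parameter values are my own illustrative choices, not part of the article.

```python
import numpy as np

# For an ergodic stationary process, the long-run time average of a single
# realization approaches the ensemble average over many independent realizations.
rng = np.random.default_rng(0)

def ar1(n, phi=0.5):
    """One realization of a simple stationary AR(1) process with zero mean."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

time_avg = ar1(100_000).mean()                                 # one system, long time
ensemble_avg = np.mean([ar1(200)[-1] for _ in range(5_000)])   # many systems, one instant
print(time_avg, ensemble_avg)   # both are close to the stationary mean, 0
```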
https://en.wikipedia.org/wiki/Field-replaceable%20unit
A field-replaceable unit (FRU) is a printed circuit board, part, or assembly that can be quickly and easily removed from a computer or other piece of electronic equipment, and replaced by the user or a technician without having to send the entire product or system to a repair facility. FRUs allow a technician lacking in-depth product knowledge to isolate faults and replace faulty components. The granularity of FRUs in a system impacts total cost of ownership and support, including the costs of stocking spare parts, where spares are deployed to meet repair time goals, how diagnostic tools are designed and implemented, levels of training for field personnel, whether end-users can do their own FRU replacement, etc. Other equipment FRUs are not strictly confined to computers but are also part of many high-end, lower-volume consumer and commercial products. For example, in military aviation, electronic components of line-replaceable units, typically known as shop-replaceable units (SRUs), are repaired at field-service backshops, usually by a "remove and replace" repair procedure, with specialized repair performed at centralized depot or by the OEM. History Many vacuum tube computers had FRUs: Pluggable units containing one or more vacuum tubes and various passive components Most transistorized and integrated circuit-based computers had FRUs: Computer modules, circuit boards containing discrete transistors and various passive components. Examples: IBM SMS cards DEC System Building Blocks cards DEC Flip-Chip cards Circuit boards containing monolithic ICs and/or hybrid ICs, such as IBM SLT cards. Vacuum tubes themselves are usually FRUs. For a short period starting in the late 1960s, some television set manufacturers made solid-state televisions with FRUs instead of a single board attached to the chassis. However modern televisions put all the electronics on one large board to reduce manufacturing costs. Trends As the sophistication and complexity of multi-replaceable
https://en.wikipedia.org/wiki/Sampson%20%28horse%29
Sampson (later renamed Mammoth) was a Shire horse gelding born in 1846 and bred by Thomas Cleaver at Toddington Mills, Bedfordshire, England. According to Guinness World Records (1986), he was the tallest horse ever recorded, by 1850 measuring 21.25 hands in height. His peak weight was also estimated. See also: List of historical horses
https://en.wikipedia.org/wiki/Kew%20Rule
The Kew Rule was used by some authors to determine the application of synonymous names in botanical nomenclature up to about 1906, but was and still is contrary to codes of botanical nomenclature including the International Code of Nomenclature for algae, fungi, and plants. Index Kewensis, a publication that aimed to list all botanical names for seed plants at the ranks of species and genus, used the Kew Rule until its Supplement IV was published in 1913 (prepared 1906–1910). The Kew Rule applied rules of priority in a more flexible way, so that when transferring a species to a new genus, there was no requirement to retain the epithet of the original species name, and future priority of the new name was counted from the time the species was transferred to the new genus. The effect has been summarized as "nomenclature used by an established monographer or in a major publication should be adopted". This is contrary to the modern article 11.4 of the Code of Nomenclature. History Beginnings The first discussion in print of what was to become known as the Kew Rule appears to have occurred in 1877 between Henry Trimen and Alphonse Pyramus de Candolle. Trimen did not think it was reasonable for older names discovered in the literature to destabilize the nomenclature that had been well accepted: "Probably all botanists are agreed that it is very desirable to retain when possible old specific names, but some of the best authors do not certainly consider themselves bound by any generally accepted rule in this matter. Still less will they be inclined to allow that a writer is at liberty, as M. de Candolle thinks, to reject the specific appellations made by an author whose genera are accepted, in favour of older ones in other genera. It will appear to such that to do this is to needlessly create in each case another synonym." The end The first botanical code of nomenclature that declared itself to be binding was the 1906 publication that followed from the 1905 International
https://en.wikipedia.org/wiki/Hat%20notation
A "hat" (circumflex (ˆ)), placed over a symbol is a mathematical notation with various uses. Estimated value In statistics, a circumflex (ˆ), called a "hat", is used to denote an estimator or an estimated value. For example, in the context of errors and residuals, the "hat" over the letter indicates an observable estimate (the residuals) of an unobservable quantity called (the statistical errors). Another example of the hat operator denoting an estimator occurs in simple linear regression. Assuming a model of , with observations of independent variable data and dependent variable data , the estimated model is of the form where is commonly minimized via least squares by finding optimal values of and for the observed data. Hat matrix In statistics, the hat matrix H projects the observed values y of response variable to the predicted values ŷ: Cross product In screw theory, one use of the hat operator is to represent the cross product operation. Since the cross product is a linear transformation, it can be represented as a matrix. The hat operator takes a vector and transforms it into its equivalent matrix. For example, in three dimensions, Unit vector In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in (pronounced "v-hat"). Fourier transform The Fourier transform of a function is traditionally denoted by . See also Exterior algebra Top-hat filter Circumflex, noting that precomposed glyphs [letter-with-circumflex] do not exist for all letters.
https://en.wikipedia.org/wiki/List%20of%20theorems
This is a list of notable theorems. Lists of theorems and similar statements include: List of fundamental theorems, List of lemmas, List of conjectures, List of inequalities, List of mathematical proofs, List of misnamed theorems. Most of the results below come from pure mathematics, but some are from theoretical physics, economics, and other applied fields. 0–9 2-factor theorem (graph theory) 15 and 290 theorems (number theory) 2π theorem (Riemannian geometry)
https://en.wikipedia.org/wiki/Neutron-velocity%20selector
A neutron-velocity selector is a device that allows neutrons of a defined velocity to pass while absorbing all other neutrons, producing a monochromatic neutron beam. It has the appearance of a many-bladed turbine. The blades are coated with a strongly neutron-absorbing material, such as boron-10. Neutron-velocity selectors are commonly used in neutron research facilities to produce a monochromatic beam of neutrons. Because the physical limitations of materials and motors cap the maximum rotation speed of the blades, these devices are only useful for relatively slow neutrons.
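A rough sketch of the selection principle, assuming an idealized helical-channel selector in which a neutron is transmitted only if it traverses the rotor length while the blades turn through their twist angle; all numerical values are hypothetical.

```python
import math

# Idealized transmission condition: a neutron of speed v crosses the rotor of
# length L in the same time the rotor turns through the blade twist angle dphi,
# i.e. v ≈ omega * L / dphi.  Mechanical rpm limits therefore cap the usable v.
L = 0.25                    # rotor length in metres (hypothetical)
dphi = math.radians(20)     # blade twist angle (hypothetical)
rpm = 20_000                # rotation speed (hypothetical)
omega = 2 * math.pi * rpm / 60

v = omega * L / dphi
print(f"selected neutron speed ≈ {v:.0f} m/s")
```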
https://en.wikipedia.org/wiki/Zero-point%20energy
Zero-point energy (ZPE) is the lowest possible energy that a quantum mechanical system may have. Unlike in classical mechanics, quantum systems constantly fluctuate in their lowest energy state as described by the Heisenberg uncertainty principle. Therefore, even at absolute zero, atoms and molecules retain some vibrational motion. Apart from atoms and molecules, the empty space of the vacuum also has these properties. According to quantum field theory, the universe can be thought of not as isolated particles but continuous fluctuating fields: matter fields, whose quanta are fermions (i.e., leptons and quarks), and force fields, whose quanta are bosons (e.g., photons and gluons). All these fields have zero-point energy. These fluctuating zero-point fields lead to a kind of reintroduction of an aether in physics since some systems can detect the existence of this energy. However, this aether cannot be thought of as a physical medium if it is to be Lorentz invariant such that there is no contradiction with Einstein's theory of special relativity. The notion of a zero-point energy is also important for cosmology, and physics currently lacks a full theoretical model for understanding zero-point energy in this context; in particular, the discrepancy between theorized and observed vacuum energy in the universe is a source of major contention. Physicists Richard Feynman and John Wheeler calculated the zero-point radiation of the vacuum to be an order of magnitude greater than nuclear energy, with a single light bulb containing enough energy to boil all the world's oceans. Yet according to Einstein's theory of general relativity, any such energy would gravitate, and the experimental evidence from the expansion of the universe, dark energy and the Casimir effect shows any such energy to be exceptionally weak. A popular proposal that attempts to address this issue is to say that the fermion field has a negative zero-point energy, while the boson field has positive zero-point
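As a concrete (textbook, not article-specific) illustration of a nonzero ground-state energy, the zero-point energy of a quantum harmonic oscillator is E0 = ħω/2; the vibration frequency below is a hypothetical example value.

```python
import math

# Zero-point energy of a quantum harmonic oscillator: E_0 = hbar * omega / 2.
hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
freq = 8.88e13             # example molecular vibration frequency, Hz (hypothetical)
omega = 2 * math.pi * freq

E0 = 0.5 * hbar * omega
print(f"zero-point energy ≈ {E0:.2e} J ≈ {E0 / 1.602e-19:.2f} eV")
```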
https://en.wikipedia.org/wiki/Teacher%20Institute%20for%20Evolutionary%20Science
The Teacher Institute for Evolutionary Science (TIES) is a project of the Richard Dawkins Foundation for Reason and Science and a program of the Center for Inquiry which provides free workshops and materials to elementary, middle school, and, more recently, high school science teachers to enable them to effectively teach evolution based on the Next Generation Science Standards. History In 2013, Bertha Vazquez, TIES director and middle school science teacher in Miami, met Richard Dawkins at the University of Miami and discussed evolution education with him and a number of science professors. The discussion centered on teachers feeling unprepared to teach evolution. This encounter and the understanding that teachers learn the most from each other inspired her to conduct workshops on evolution for her fellow teachers. After hearing about Vazquez's work, Dawkins followed up with a visit to Vazquez's school in 2014 to speak to teachers from the Miami-Dade County school district. Dawkins eventually asked Vazquez if she would be willing to take her workshop project nationwide. With the encouragement of Dawkins and funding from his foundation, and also with encouragement from Robyn Blumner of the Center for Inquiry, the Teacher Institute for Evolutionary Science began offering workshops in 2015. Activity The first TIES workshop was in April 2015 in collaboration with the Miami Science Museum. A total of ten workshops took place in 2015. Since then, the program has expanded, as of 2020, to over 200 workshops in all 50 states. While Bertha Vazquez presented many of the workshops earlier on, over 80 presenters are now active in the nationwide program. Presenters are usually high school or college biology educators in the states in which their workshops take place, and workshops take into account the given state's evolution education standards. Workshops vary in length, and in cases of longer workshops or webinars, scientists and other relevant guests are also
https://en.wikipedia.org/wiki/Irreducibility%20%28mathematics%29
In mathematics, the concept of irreducibility is used in several ways. A polynomial over a field may be an irreducible polynomial if it cannot be factored over that field. In abstract algebra, irreducible can be an abbreviation for irreducible element of an integral domain; for example an irreducible polynomial. In representation theory, an irreducible representation is a nontrivial representation with no nontrivial proper subrepresentations. Similarly, an irreducible module is another name for a simple module. Absolutely irreducible is a term applied to mean irreducible, even after any finite extension of the field of coefficients. It applies in various situations, for example to irreducibility of a linear representation, or of an algebraic variety; where it means just the same as irreducible over an algebraic closure. In commutative algebra, a commutative ring R is irreducible if its prime spectrum, that is, the topological space Spec R, is an irreducible topological space. A matrix is irreducible if it is not similar via a permutation to a block upper triangular matrix (that has more than one block of positive size). (Replacing non-zero entries in the matrix by one, and viewing the matrix as the adjacency matrix of a directed graph, the matrix is irreducible if and only if such directed graph is strongly connected.) A detailed definition is given here. Also, a Markov chain is irreducible if there is a non-zero probability of transitioning (even if in more than one step) from any state to any other state. In the theory of manifolds, an n-manifold is irreducible if any embedded (n − 1)-sphere bounds an embedded n-ball. Implicit in this definition is the use of a suitable category, such as the category of differentiable manifolds or the category of piecewise-linear manifolds. The notions of irreducibility in algebra and manifold theory are related. An n-manifold is called prime, if it cannot be written as a connected sum of two n-manifolds (neither of
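A small sketch of the matrix criterion stated above: a square matrix is irreducible exactly when the directed graph with an edge i → j for every nonzero entry A[i][j] is strongly connected. The function name and examples are my own illustration.

```python
def is_irreducible(A):
    """Matrix irreducibility via strong connectivity of the associated digraph."""
    n = len(A)
    adj = [[j for j in range(n) if A[i][j] != 0] for i in range(n)]        # edges i -> j
    radj = [[i for i in range(n) if A[i][j] != 0] for j in range(n)]       # reversed edges

    def reachable(start, edges):
        seen, stack = {start}, [start]
        while stack:
            for j in edges[stack.pop()]:
                if j not in seen:
                    seen.add(j)
                    stack.append(j)
        return seen

    # Strongly connected iff every node is reachable from node 0 and node 0 is
    # reachable from every node.
    return len(reachable(0, adj)) == n and len(reachable(0, radj)) == n

print(is_irreducible([[0, 1], [1, 0]]))   # True: the digraph is a 2-cycle
print(is_irreducible([[1, 1], [0, 1]]))   # False: block upper triangular pattern
```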
https://en.wikipedia.org/wiki/Maximum%20theorem
The maximum theorem provides conditions for the continuity of an optimized function and the set of its maximizers with respect to its parameters. The statement was first proven by Claude Berge in 1959. The theorem is primarily used in mathematical economics and optimal control. Statement of theorem Maximum Theorem. Let $X$ and $\Theta$ be topological spaces, $f : X \times \Theta \to \mathbb{R}$ be a continuous function on the product $X \times \Theta$, and $C : \Theta \rightrightarrows X$ be a compact-valued correspondence such that $C(\theta) \neq \emptyset$ for all $\theta \in \Theta$. Define the marginal function (or value function) $f^* : \Theta \to \mathbb{R}$ by $f^*(\theta) = \sup\{ f(x, \theta) : x \in C(\theta) \}$ and the set of maximizers $C^* : \Theta \rightrightarrows X$ by $C^*(\theta) = \{ x \in C(\theta) : f(x, \theta) = f^*(\theta) \}$. If $C$ is continuous (i.e. both upper and lower hemicontinuous) at $\theta$, then $f^*$ is continuous and $C^*$ is upper hemicontinuous with nonempty and compact values. As a consequence, the $\sup$ may be replaced by $\max$. The maximum theorem can be used for minimization by considering $-f$ instead. Interpretation The theorem is typically interpreted as providing conditions for a parametric optimization problem to have continuous solutions with regard to the parameter. In this case, $\Theta$ is the parameter space, $f(x, \theta)$ is the function to be maximized, and $C(\theta)$ gives the constraint set that $f$ is maximized over. Then, $f^*(\theta)$ is the maximized value of the function and $C^*(\theta)$ is the set of points that maximize $f$. The result is that if the elements of an optimization problem are sufficiently continuous, then some, but not all, of that continuity is preserved in the solutions. Proof Throughout this proof we will use the term neighborhood to refer to an open set containing a particular point. We preface with a preliminary lemma, which is a general fact in the calculus of correspondences. Recall that a correspondence is closed if its graph is closed. Lemma. If $A, B : \Theta \rightrightarrows X$ are correspondences, $A$ is upper hemicontinuous and compact-valued, and $B$ is closed, then $A \cap B$ defined by $(A \cap B)(\theta) = A(\theta) \cap B(\theta)$ is upper hemicontinuous. Let $\theta \in \Theta$, and suppose $G$ is an open set containing $(A \cap B)(\theta)$. If $A(\theta) \subseteq G$, then the result follows immediately. Otherwise, observe that for each $x \in A(\theta) \setminus G$ we have $x \notin B(\theta)$, and since $B$ is closed there is a neighborhood $U_x \times V_x$ of $(\theta, x)$ in which $x' \notin B(\theta')$ whenever $(\theta', x') \in U_x \times V_x$. The collecti
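A tiny numerical illustration (my own example, not from the article): for a simple parametric problem the value function varies continuously with the parameter, as the theorem predicts.

```python
import numpy as np

# Maximize f(x, theta) = -(x - theta)**2 over the fixed constraint set C(theta) = [0, 1].
# Berge's theorem predicts a continuous value function f*(theta) and an upper
# hemicontinuous maximizer correspondence C*(theta).
def f_star(theta, grid=np.linspace(0.0, 1.0, 2001)):
    values = -(grid - theta) ** 2
    return values.max(), grid[values.argmax()]

for theta in [-0.5, 0.0, 0.3, 1.0, 1.5]:
    v, x = f_star(theta)
    print(f"theta={theta:+.1f}  f*(theta)={v:+.3f}  argmax≈{x:.2f}")
```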
https://en.wikipedia.org/wiki/Underwood%20Dudley
Underwood Dudley (born January 6, 1937) is an American mathematician and writer. His popular works include several books describing crank mathematics by pseudomathematicians who incorrectly believe they have squared the circle or done other impossible things. Career Dudley was born in New York City. He received bachelor's and master's degrees from the Carnegie Institute of Technology and a PhD from the University of Michigan. His academic career consisted of two years at Ohio State University followed by 37 at DePauw University, from which he retired in 2004. He edited the College Mathematics Journal and the Pi Mu Epsilon Journal, and was a Pólya Lecturer for the Mathematical Association of America (MAA) for two years. He is the discoverer of the Dudley triangle. Publications Dudley's popular books include Mathematical Cranks (MAA 1992, ), The Trisectors (MAA 1996, ), and Numerology: Or, What Pythagoras Wrought (MAA 1997, ). Dudley won the Trevor Evans Award for expository writing from the MAA in 1996. Dudley has also written and edited straightforward mathematical works such as Readings for Calculus (MAA 1993, ) and Elementary Number Theory (W.H. Freeman 1978, ). In 2009, he authored "A Guide to Elementary Number Theory" (MAA, 2009, ), published under Mathematical Association of America's Dolciani Mathematical Expositions. Lawsuit In 1995, Dudley was one of several people sued by William Dilworth for defamation because Mathematical Cranks included an analysis of Dilworth's "A correction in set theory", an attempted refutation of Cantor's diagonal method. The suit was dismissed in 1996 due to failure to state a claim. The dismissal was upheld on appeal in a decision written by jurist Richard Posner. From the decision: "A crank is a person inexplicably obsessed by an obviously unsound idea—a person with a bee in his bonnet. To call a person a crank is to say that because of some quirk of temperament he is wasting his time pursuing a line of thought that is
https://en.wikipedia.org/wiki/Vivaldi%20coordinates
Vivaldi Coordinate System is a decentralized network coordinate system that allows distributed systems such as peer-to-peer networks to estimate the round-trip time (RTT) between arbitrary nodes in a network. Through this scheme, network topology awareness can be used to tune the network behavior to more efficiently distribute data. For example, in a peer-to-peer network, more responsive identification and delivery of content can be achieved. In the Azureus application, Vivaldi is used to improve the performance of the distributed hash table that facilitates query matches. Design The algorithm behind Vivaldi is an optimization algorithm that finds the most stable configuration of points in a Euclidean space such that the distances between the points are as close as possible to real-world measured distances. In effect, the algorithm attempts to embed the multi-dimensional space of latency measurements between computers into a low-dimensional Euclidean space. A good analogy is a spring-and-mass system in 3D space where each node is a mass and each connection between nodes is a spring. The natural lengths of the springs are the measured RTTs between nodes, and when the system is simulated, the coordinates of the nodes correspond to the resulting 3D positions of the masses in the lowest-energy state of the system. This design is taken from previous work in the field; the contribution that Vivaldi makes is to run this algorithm in parallel across all the nodes in the network. Advantages Vivaldi can theoretically scale indefinitely. The Vivaldi algorithm is relatively simple to implement. Drawbacks Vivaldi's coordinates are points in a Euclidean space, which requires the predicted distances to obey the triangle inequality as well as Euclidean symmetry. However, there are many triangle inequality violations (TIVs) and symmetry violations on the Internet, mostly because of inefficient routing or distance distortion because connections on the inter
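A simplified sketch of the spring-relaxation update described above; the real Vivaldi algorithm also maintains per-node error estimates and adapts the step size, which is omitted here, and the class and parameter names are illustrative.

```python
import numpy as np

class VivaldiNode:
    """Minimal Vivaldi-style coordinate holder with a spring-relaxation update."""

    def __init__(self, dims=3, step=0.25):
        self.coords = np.random.uniform(-1, 1, dims)
        self.step = step

    def update(self, measured_rtt, remote_coords):
        direction = self.coords - remote_coords
        dist = np.linalg.norm(direction)
        if dist == 0:                       # coincident points: pick a random direction
            direction, dist = np.random.uniform(-1, 1, len(self.coords)), 1.0
        error = measured_rtt - dist         # spring stretched (+) or compressed (-)
        self.coords += self.step * error * (direction / dist)

    def estimate_rtt(self, remote_coords):
        return np.linalg.norm(self.coords - remote_coords)
```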
https://en.wikipedia.org/wiki/Anal%20pore
The anal pore or cytoproct is a structure in various single-celled eukaryotes where waste is ejected after the nutrients from food have been absorbed into the cytoplasm. In ciliates, the anal pore (cytopyge) and the cytostome are the only regions of the pellicle that are not covered by ridges, cilia, or a rigid covering. They serve as analogues of, respectively, the anus and mouth of multicellular organisms. The cytopyge's thin membrane allows vacuoles to merge with the cell surface and be emptied. Location The anal pore is an exterior opening of microscopic organisms through which undigested food waste, water, or gas is expelled from the body. It is also referred to as a cytoproct. This structure is found in various unicellular eukaryotes such as Paramecium. The anal pore is located on the ventral surface, usually in the posterior half of the cell. The anal pore itself is a structure made up of two components: bundles of fibres and microtubules. Function Digested nutrients from the vacuole pass into the cytoplasm, causing the vacuole to shrink and move to the anal pore, where it ruptures to release the waste content to the environment outside the cell. The cytoproct is used for the excretion of indigestible debris contained in the food vacuoles. In Paramecium, the anal pore is a region of the pellicle that is not covered by ridges and cilia, and the area has a thin pellicle that allows the vacuoles to merge with the cell surface and be emptied. Most such micro-organisms possess an anal pore for excretion, usually an opening in the pellicle through which indigestible debris is ejected. The opening and closing of the cytoproct resemble a reversible ring of tissue fusion occurring between the inner and outer layers located at the aboral end. The anal pore is not a permanently visible structure, as it appears at defecation and disappears afterward. In ciliates, the cytostome and cytopyge (anal pore) regions are not covered by either ridges or cilia or hard co
https://en.wikipedia.org/wiki/Fault%20Tolerant%20Ethernet
Fault Tolerant Ethernet (FTE) is a proprietary protocol created by Honeywell, designed to provide rapid network redundancy on top of the spanning tree protocol. Each node is connected twice to a single LAN through dual network interface controllers. The driver and the FTE-enabled components allow network communication to occur over an alternate path when the primary path fails. The default time before a failure is detected is the Diagnostic Interval (1000 ms) multiplied by the Disjoin Multiplier (3), giving a 3000 ms recovery time. This is similar to Switch Fault Tolerance (SFT) in Windows and to mode=1 (active-backup) bonding in Linux. Supported hardware and software Windows 7/2003 or newer Honeywell Control Firewall (CF9) Honeywell C300 Controller Honeywell Series 8 I/O Technical overview Uses multicast (234.5.6.7) for the FTE community. Recommended maximum of 300 FTE nodes and 200 single-connected Ethernet nodes (a machine with two network cards is counted as two separate single-connected Ethernet nodes). A separate broadcast/multicast domain is recommended for each FTE community. Recommended maximum of 3 tiers of switches. Default UDP source port: 47837. Default UDP destination port: 51966.
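A trivial sketch of the failure-detection arithmetic described above, using the default values quoted in the text.

```python
# Failure-detection time = Diagnostic Interval * Disjoin Multiplier.
diagnostic_interval_ms = 1000
disjoin_multiplier = 3

detection_time_ms = diagnostic_interval_ms * disjoin_multiplier
print(f"failure detected after {detection_time_ms} ms")   # 3000 ms
```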
https://en.wikipedia.org/wiki/Index%20of%20information%20theory%20articles
This is a list of information theory topics. A Mathematical Theory of Communication algorithmic information theory arithmetic coding channel capacity Communication Theory of Secrecy Systems conditional entropy conditional quantum entropy confusion and diffusion cross-entropy data compression entropic uncertainty (Hirschman uncertainty) entropy encoding entropy (information theory) Fisher information Hick's law Huffman coding information bottleneck method information-theoretic security information theory joint entropy Kullback–Leibler divergence lossless compression negentropy noisy-channel coding theorem (Shannon's theorem) principle of maximum entropy quantum information science range encoding redundancy (information theory) Rényi entropy self-information Shannon–Hartley theorem
https://en.wikipedia.org/wiki/Tunneling%20protocol
In computer networks, a tunneling protocol is a communication protocol which allows for the movement of data from one network to another. It involves allowing private network communications to be sent across a public network (such as the Internet) through a process called encapsulation. Because tunneling involves repackaging the traffic data into a different form, perhaps with encryption as standard, it can hide the nature of the traffic that is run through a tunnel. The tunneling protocol works by using the data portion of a packet (the payload) to carry the packets that actually provide the service. Tunneling uses a layered protocol model such as those of the OSI or TCP/IP protocol suite, but usually violates the layering when using the payload to carry a service not normally provided by the network. Typically, the delivery protocol operates at an equal or higher level in the layered model than the payload protocol. Uses A tunneling protocol may, for example, allow a foreign protocol to run over a network that does not support that particular protocol, such as running IPv6 over IPv4. Another important use is to provide services that are impractical or unsafe to be offered using only the underlying network services, such as providing a corporate network address to a remote user whose physical network address is not part of the corporate network. Circumventing firewall policy Users can also use tunneling to "sneak through" a firewall, using a protocol that the firewall would normally block, but "wrapped" inside a protocol that the firewall does not block, such as HTTP. If the firewall policy does not specifically exclude this kind of "wrapping", this trick can function to get around the intended firewall policy (or any set of interlocked firewall policies). Another HTTP-based tunneling method uses the HTTP CONNECT method/command. A client issues the HTTP CONNECT command to an HTTP proxy. The proxy then makes a TCP connection to a particular server:port, an
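A minimal sketch of the HTTP CONNECT handshake described above; the proxy host, port, and target are hypothetical placeholders.

```python
import socket

# Ask an HTTP proxy to open a raw TCP tunnel to a target server:port.
PROXY_HOST, PROXY_PORT = "proxy.example.com", 8080   # hypothetical proxy
TARGET = "example.org:443"                           # hypothetical target

with socket.create_connection((PROXY_HOST, PROXY_PORT), timeout=10) as sock:
    request = (f"CONNECT {TARGET} HTTP/1.1\r\n"
               f"Host: {TARGET}\r\n\r\n").encode("ascii")
    sock.sendall(request)
    reply = sock.recv(4096).decode("ascii", errors="replace")
    # A "200 Connection established" status line means the proxy is now relaying
    # bytes verbatim; anything written to `sock` reaches the target directly.
    print(reply.splitlines()[0])
```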
https://en.wikipedia.org/wiki/Aerobic%20organism
An aerobic organism or aerobe is an organism that can survive and grow in an oxygenated environment. The ability to exhibit aerobic respiration may yield benefits to the aerobic organism, as aerobic respiration yields more energy than anaerobic respiration. Energy production of the cell involves the synthesis of ATP by an enzyme called ATP synthase. In aerobic respiration, ATP synthase is coupled with an electron transport chain in which oxygen acts as a terminal electron acceptor. In July 2020, marine biologists reported that aerobic microorganisms (mainly), in "quasi-suspended animation", were found in organically poor sediments, up to 101.5 million years old, 250 feet below the seafloor in the South Pacific Gyre (SPG) ("the deadest spot in the ocean"), and could be the longest-living life forms ever found. Types Obligate aerobes need oxygen to grow. In a process known as cellular respiration, these organisms use oxygen to oxidize substrates (for example sugars and fats) and generate energy. Facultative anaerobes use oxygen if it is available, but also have anaerobic methods of energy production. Microaerophiles require oxygen for energy production, but are harmed by atmospheric concentrations of oxygen (21% O2). Aerotolerant anaerobes do not use oxygen but are not harmed by it. When an organism is able to survive in both oxygen and anaerobic environments, the use of the Pasteur effect can distinguish between facultative anaerobes and aerotolerant organisms. If the organism is using fermentation in an anaerobic environment, the addition of oxygen will cause facultative anaerobes to suspend fermentation and begin using oxygen for respiration. Aerotolerant organisms must continue fermentation in the presence of oxygen. Facultative organisms grow in both oxygen rich media and oxygen free media. Aerobic Respiration Aerobic organisms use a process called aerobic respiration to create ATP from ADP and a phosphate. Glucose (a monosaccharide) is oxidized to power the
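For reference, the overall summary reaction of aerobic respiration of glucose is the standard textbook equation below (added as general background, not quoted from the excerpt above).

```latex
% Overall (summary) reaction of aerobic respiration of glucose:
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \;\longrightarrow\; 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{energy (captured as ATP, released as heat)}
```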
https://en.wikipedia.org/wiki/Biomineralization
Biomineralization, also written biomineralisation, is the process by which living organisms produce minerals, often resulting in hardened or stiffened mineralized tissues. It is an extremely widespread phenomenon: all six taxonomic kingdoms contain members that are able to form minerals, and over 60 different minerals have been identified in organisms. Examples include silicates in algae and diatoms, carbonates in invertebrates, and calcium phosphates and carbonates in vertebrates. These minerals often form structural features such as sea shells and the bone in mammals and birds. Organisms have been producing mineralized skeletons for the past 550 million years. Calcium carbonates and calcium phosphates are usually crystalline, but the silica produced by organisms such as sponges and diatoms is always non-crystalline (amorphous). Other examples include copper, iron, and gold deposits involving bacteria. Biologically formed minerals often have special uses such as magnetic sensors in magnetotactic bacteria (Fe3O4), gravity-sensing devices (CaCO3, CaSO4, BaSO4) and iron storage and mobilization (Fe2O3•H2O in the protein ferritin). In terms of taxonomic distribution, the most common biominerals are the phosphate and carbonate salts of calcium that are used in conjunction with organic polymers such as collagen and chitin to give structural support to bones and shells. The structures of these biocomposite materials are highly controlled from the nanometer to the macroscopic level, resulting in complex architectures that provide multifunctional properties. Because this range of control over mineral growth is desirable for materials engineering applications, there is interest in understanding and elucidating the mechanisms of biologically-controlled biomineralization. Types Mineralization can be subdivided into different categories depending on the following: the organisms or processes that create chemical conditions necessary for mineral formation, the origin of the substrate at the site of m
https://en.wikipedia.org/wiki/Routing%20bridge
A routing bridge or RBridge, also known as a TRILL switch, is a network device that implements the TRILL protocol, as specified by the IETF, and should not be confused with BRouters (Bridging Routers). RBridges are compatible with previous IEEE 802.1 customer bridges as well as IPv4 and IPv6 routers and end nodes. They are invisible to current IP routers and, like routers, RBridges terminate the bridge spanning tree protocol. The RBridges in a campus share connectivity information amongst themselves using the IS-IS link-state protocol. A link-state protocol is one in which connectivity is broadcast to all the RBridges, so that each RBridge knows about all the other RBridges, and the connectivity between them. This gives RBridges enough information to compute pair-wise optimal paths for unicast, and calculate distribution trees for delivery of frames either to destinations whose location is unknown or to multicast or broadcast groups. IS-IS was chosen for this purpose because: it runs directly over Layer 2, so it can be run without configuration (no IP addresses need to be assigned) it is easy to extend by defining new TLV (type-length-value) data elements and sub-elements for carrying TRILL information. To mitigate temporary loop issues, RBridges forward based on a header with a hop count. RBridges also specify the next hop RBridge as the frame destination when forwarding unicast frames across a shared-media link, which avoids spawning additional copies of frames during a temporary loop. A Reverse Path Forwarding Check and other checks are performed on multi-destination frames to further control potentially looping traffic.
https://en.wikipedia.org/wiki/Operational%20design%20domain
Operational design domain (ODD) is a term for a set of operating conditions for an automated system, often used in the field of autonomous vehicles. These operating conditions include environmental, geographical and time-of-day constraints, traffic and roadway characteristics. The ODD is used by manufacturers to indicate where their product will operate safely. The concept of ODD indicates that automated systems have limitations and that they should operate within predefined restrictions to ensure safety and performance. Defining an ODD is important for developers and regulators to establish clear expectations and communicate the intended operating conditions of automated systems. Beyond self-driving cars, ODD is also used for autonomous ships, autonomous trains, agricultural robots, and other robots. ODD definition by standards Structure of ODD A report by the US Department of Transportation subdivides an ODD description into six top-level categories and further immediate subcategories. The top-level categories are physical infrastructure, operational constraints, objects, connectivity, environmental conditions and zones. The physical infrastructure includes subcategories for roadway types, surfaces, edges and geometry. The operational constraints include subcategories for speed limits and traffic conditions. Environmental conditions include weather, illumination, and similar subcategories. Zones include subcategories like regions, states, school areas, construction sites and similar. Examples In 2022, Mercedes Benz announced a product with a new ODD, which is Level 3 autonomous driving at 130 km/h. See also Scenario (vehicular automation)
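An illustrative, entirely hypothetical ODD description structured along the six top-level categories from the US DOT report mentioned above; the field names and values are invented placeholders, not drawn from any real product or standard schema.

```python
# Hypothetical ODD description keyed by the six top-level categories.
odd = {
    "physical_infrastructure": {
        "roadway_types": ["divided highway"],
        "surfaces": ["asphalt", "concrete"],
        "geometry": {"max_grade_percent": 6},
    },
    "operational_constraints": {"max_speed_kph": 130, "traffic": ["free-flow", "congested"]},
    "objects": ["passenger vehicles", "trucks", "motorcycles"],
    "connectivity": {"gnss_required": True, "v2x_required": False},
    "environmental_conditions": {"weather": ["dry", "light rain"], "illumination": ["daylight"]},
    "zones": {"regions": ["DE"], "excluded": ["construction sites", "school zones"]},
}

def within_odd(conditions, odd):
    """Toy check: is the current driving context inside the declared ODD?"""
    return (conditions["speed_kph"] <= odd["operational_constraints"]["max_speed_kph"]
            and conditions["weather"] in odd["environmental_conditions"]["weather"])

print(within_odd({"speed_kph": 110, "weather": "dry"}, odd))   # True
```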
https://en.wikipedia.org/wiki/Beamforming
Beamforming or spatial filtering is a signal processing technique used in sensor arrays for directional signal transmission or reception. This is achieved by combining elements in an antenna array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array. Beamforming can be used for radio or sound waves. It has found numerous applications in radar, sonar, seismology, wireless communications, radio astronomy, acoustics and biomedicine. Adaptive beamforming is used to detect and estimate the signal of interest at the output of a sensor array by means of optimal (e.g. least-squares) spatial filtering and interference rejection. Techniques To change the directionality of the array when transmitting, a beamformer controls the phase and relative amplitude of the signal at each transmitter, in order to create a pattern of constructive and destructive interference in the wavefront. When receiving, information from different sensors is combined in a way where the expected pattern of radiation is preferentially observed. For example, in sonar, to send a sharp pulse of underwater sound towards a ship in the distance, simply simultaneously transmitting that sharp pulse from every sonar projector in an array fails because the ship will first hear the pulse from the speaker that happens to be nearest the ship, then later pulses from speakers that happen to be further from the ship. The beamforming technique involves sending the pulse from each projector at slightly different times (the projector closest to the ship last), so that every pulse hits the ship at exactly the same time, producing the effect of a single strong pulse from a single powerful projector. The same technique can be c
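A minimal delay-and-sum sketch of the technique described above for a uniform linear array: steering to angle theta means delaying element n by n·d·sin(theta)/c so all wavefronts add constructively in that direction. The array geometry and medium are illustrative assumptions.

```python
import numpy as np

# Per-element transmit delays for steering a uniform linear array.
c = 343.0                  # speed of sound in air, m/s (use ~1500 m/s for sonar)
d = 0.04                   # element spacing, m (hypothetical)
theta = np.radians(30)     # desired steering angle
n_elements = 8

delays = np.arange(n_elements) * d * np.sin(theta) / c
print(["%.1f us" % (t * 1e6) for t in delays])
```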
https://en.wikipedia.org/wiki/Priority%20inversion
In computer science, priority inversion is a scenario in scheduling in which a high-priority task is indirectly superseded by a lower-priority task effectively inverting the assigned priorities of the tasks. This violates the priority model that high-priority tasks can only be prevented from running by higher-priority tasks. Inversion occurs when there is a resource contention with a low-priority task that is then preempted by a medium-priority task. Formulation Consider two tasks H and L, of high and low priority respectively, either of which can acquire exclusive use of a shared resource R. If H attempts to acquire R after L has acquired it, then H becomes blocked until L relinquishes the resource. Sharing an exclusive-use resource (R in this case) in a well-designed system typically involves L relinquishing R promptly so that H (a higher-priority task) does not stay blocked for excessive periods of time. Despite good design, however, it is possible that a third task M of medium priority becomes runnable during L's use of R. At this point, M being higher in priority than L, preempts L (since M does not depend on R), causing L to not be able to relinquish R promptly, in turn causing H—the highest-priority process—to be unable to run (that is, H suffers unexpected blockage indirectly caused by lower-priority tasks like M). Consequences In some cases, priority inversion can occur without causing immediate harm—the delayed execution of the high-priority task goes unnoticed, and eventually, the low-priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems. If the high-priority task is left starved of the resources, it might lead to a system malfunction or the triggering of pre-defined corrective measures, such as a watchdog timer resetting the entire system. The trouble experienced by the Mars Pathfinder lander in 1997 is a classic example of problems caused by priority inversion in rea
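A small discrete-time sketch (not modeled on any particular RTOS) of the H/M/L scenario described above, with strict priority preemption and no priority inheritance; the timing values are arbitrary.

```python
def simulate(steps=12):
    """Return a per-tick timeline showing H blocked behind L, which M preempts."""
    # Remaining work: L holds R for 4 ticks, M runs 5 ticks, H needs R for 2 ticks.
    tasks = {
        "L": {"prio": 1, "work": 4, "holds_R": True,  "needs_R": True},
        "M": {"prio": 2, "work": 5, "holds_R": False, "needs_R": False},
        "H": {"prio": 3, "work": 2, "holds_R": False, "needs_R": True},
    }
    release = {"L": 0, "H": 1, "M": 2}   # arrival times
    timeline = []
    for t in range(steps):
        ready = [n for n, tk in tasks.items() if tk["work"] > 0 and release[n] <= t]
        # H stays blocked while it needs R and L still holds it.
        runnable = [n for n in ready
                    if not (tasks[n]["needs_R"] and not tasks[n]["holds_R"]
                            and tasks["L"]["holds_R"])]
        if not runnable:
            timeline.append("idle")
            continue
        running = max(runnable, key=lambda n: tasks[n]["prio"])
        tasks[running]["work"] -= 1
        if running == "L" and tasks["L"]["work"] == 0:
            tasks["L"]["holds_R"] = False    # L finally releases R ...
            tasks["H"]["holds_R"] = True     # ... and H acquires it (simplification)
        timeline.append(running)
    return timeline

# H arrives at t=1 but only runs at t=9, after M's unrelated work delays L:
print(simulate())  # ['L', 'L', 'M', 'M', 'M', 'M', 'M', 'L', 'L', 'H', 'H', 'idle']
```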
https://en.wikipedia.org/wiki/In%20situ%20adaptive%20tabulation
In situ adaptive tabulation (ISAT) is an algorithm for the approximation of nonlinear relationships. ISAT is based on multiple linear regressions that are dynamically added as additional information is discovered. The technique is adaptive as it adds new linear regressions dynamically to a store of possible retrieval points. ISAT maintains error control by defining finer granularity in regions of increased nonlinearity. A binary tree search traverses cutting hyperplanes to locate a local linear approximation. ISAT is an alternative to artificial neural networks that is receiving increased attention for desirable characteristics, namely: scales quadratically with increased dimension approximates functions with discontinuities maintains explicit bounds on approximation error controls local derivatives of the approximating function incorporates new training data without re-optimization ISAT was first proposed by Stephen B. Pope for computational reduction of turbulent combustion simulation and later extended to model predictive control. It has been generalized to an ISAT framework that operates based on any input and output data regardless of the application. An improved version of the algorithm was proposed just over a decade after the original publication, adding features that improve the efficiency of the search for tabulated data as well as error control. See also Predictive analytics Radial basis function network Recurrent neural networks Support vector machine Tensor product network
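A much-simplified sketch of the tabulation idea (my illustration, not Pope's actual algorithm): store records of (x, f(x), Jacobian), reuse a stored linear approximation when a query falls inside a crude region of accuracy, and otherwise evaluate the expensive function and grow the table. The real algorithm organizes records in a binary tree of cutting hyperplanes and adapts the accuracy regions.

```python
import numpy as np

class SimpleISAT:
    """Toy in-situ table of local linear models (illustrative only)."""

    def __init__(self, f, jac, radius=0.1):
        self.f, self.jac, self.radius = f, jac, radius
        self.records = []        # list of (x0, f0, J0)

    def query(self, x):
        x = np.asarray(x, dtype=float)
        for x0, f0, J0 in self.records:
            if np.linalg.norm(x - x0) <= self.radius:   # crude "region of accuracy"
                return f0 + J0 @ (x - x0)               # retrieve: linear approximation
        f0, J0 = self.f(x), self.jac(x)                 # grow: add a new record
        self.records.append((x, f0, J0))
        return f0

# Example: tabulating a scalar nonlinear function of two inputs.
f = lambda x: np.array([np.sin(x[0]) * np.exp(x[1])])
jac = lambda x: np.array([[np.cos(x[0]) * np.exp(x[1]), np.sin(x[0]) * np.exp(x[1])]])
isat = SimpleISAT(f, jac)
print(isat.query([0.10, 0.20]), isat.query([0.12, 0.21]))   # second call is a retrieval
```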
https://en.wikipedia.org/wiki/Open%20Compute%20Project
The Open Compute Project (OCP) is an organization that shares designs of data center products and best practices among companies, including Arm, Meta, IBM, Wiwynn, Intel, Nokia, Google, Microsoft, Seagate Technology, Dell, Rackspace, Hewlett Packard Enterprise, NVIDIA, Cisco, Goldman Sachs, Fidelity, Lenovo and Alibaba Group. Project structure The Open Compute Project Foundation is a 501(c)(6) non-profit incorporated in the state of Delaware. Rocky Bullock serves as the Foundation's CEO and has a seat on the board of directors. As of July 2020, there are 7 members who serve on the board of directors which is made up of one individual member and six organizational members. Mark Roenigk (Facebook) is the Foundation's president and chairman. Andy Bechtolsheim is the individual member. In addition to Mark Roenigk who represents Facebook, other organizations on the Open Compute board of directors include Intel (Rebecca Weekly), Microsoft (Kushagra Vaid), Google (Partha Ranganathan), and Rackspace (Jim Hawkins). A current list of members can be found on the opencompute.org website. History The Open Compute Project began in Facebook as an internal project in 2009 called "Project Freedom". The hardware designs and engineering team were led by Amir Michael (Manager, Hardware Design) and sponsored by Jonathan Heiliger (VP, Technical Operations) and Frank Frankovsky (Director, Hardware Design and Infrastructure). The three would later open source the designs of Project Freedom and co-found the Open Compute Project. The project was announced at a press event at Facebook's headquarters in Palo Alto on April 7, 2011. OCP projects The Open Compute Project Foundation maintains a number of OCP projects, such as: Server designs Two years after Open Compute Project had started, with regards to a more modular server design, it was admitted that "the new design is still a long way from live data centers". However, some aspects published were used in Facebook's Prineville dat
https://en.wikipedia.org/wiki/Research%20software%20engineering
Research software engineering is the use of software engineering practices in research applications. The term was proposed in a research paper in 2010 in response to an empirical survey on tools used for software development in research projects. It started to be used in the United Kingdom in 2012, when a term was needed to describe the type of software development required in research. This focuses on reproducibility, reusability, and accuracy of data analysis and applications created for research. Support Various types of associations and organisations have been created around this role to support the creation of posts in universities and research institutes. In 2014 a Research Software Engineer Association was created in the UK, which attracted 160 members in the first three months. Other countries like the Netherlands, Germany, and the USA followed, creating similar communities, and there are similar efforts being pursued in Asia, Australia, Canada, New Zealand, the Nordic countries, and Belgium. In January 2021 the International Council of RSE Associations was introduced. The UK counts almost 30 universities and institutes with groups that provide access to software expertise in different areas of research. Additionally, the Engineering and Physical Sciences Research Council created a Research Software Engineer fellowship to promote this role and help the creation of RSE groups across the UK, with calls in 2015, 2017, and 2020. The world's first RSE conference took place in the UK in September 2016; it was repeated in 2017, 2018, and 2019, and was planned again for 2020. In 2019 the first national RSE conferences in Germany and the Netherlands were held; the next editions were planned for 2020 and then cancelled. The SORSE (A Series of Online Research Software Events) community was established in late 2020 in response to the COVID-19 pandemic and ran its first online event on 2 September 2020. See also Open Energy Modelling Initiative — relevant here because the bulk of the development occur
https://en.wikipedia.org/wiki/Data%20center
A data center (American English) or data centre (Commonwealth English) is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems. Since IT operations are crucial for business continuity, it generally includes redundant or backup components and infrastructure for power supply, data communication connections, environmental controls (e.g., air conditioning, fire suppression), and various security devices. A large data center is an industrial-scale operation using as much electricity as a small town. Estimated global data center electricity consumption in 2022 was 240-340 TWh, or roughly 1-1.3% of global electricity demand. This excludes energy used for cryptocurrency mining, which was estimated to be around 110 TWh in 2022, or another 0.4% of global electricity demand. Data centers can vary widely in terms of size, power requirements, redundancy, and overall structure. Four common categories used to segment types of data centers are onsite data centers, colocation facilities, hyperscale data centers, and edge data centers. History Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power and had to be cooled to avoid overheating. Security became important – computers were expensive, and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised. During the boom of the microcomputer industry, and especia
https://en.wikipedia.org/wiki/List%20of%20longest-living%20organisms
This is a list of the longest-living biological organisms: the individual(s) (or in some instances, clones) of a species with the longest natural maximum life spans. For a given species, such a designation may include: The oldest known individual(s) that are currently alive, with verified ages. Verified individual record holders, such as the longest-lived human, Jeanne Calment, or the longest-lived domestic cat, Creme Puff. The definition of "longest-living" used in this article considers only the observed or estimated length of an individual organism's natural lifespan – that is, the duration of time between its birth or conception, or the earliest emergence of its identity as an individual organism, and its death – and does not consider other conceivable interpretations of "longest-living", such as the length of time between the earliest appearance of a species in the fossil record and the present (the historical "age" of the species as a whole), the time between a species' first speciation and its extinction (the phylogenetic "lifespan" of the species), or the range of possible lifespans of a species' individuals. This list includes long-lived organisms that are currently still alive as well as those that are dead. Determining the length of an organism's natural lifespan is complicated by many problems of definition and interpretation, as well as by practical difficulties in reliably measuring age, particularly for extremely old organisms and for those that reproduce by asexual cloning. In many cases the ages listed below are estimates based on observed present-day growth rates, which may differ significantly from the growth rates experienced thousands of years ago. Identifying the longest-living organisms also depends on defining what constitutes an "individual" organism, which can be problematic, since many asexual organisms and clonal colonies defy one or both of the traditional colloquial definitions of individuality (having a distinct genotype and havin
https://en.wikipedia.org/wiki/Planetary%20protection
Planetary protection is a guiding principle in the design of an interplanetary mission, aiming to prevent biological contamination of both the target celestial body and the Earth in the case of sample-return missions. Planetary protection reflects both the unknown nature of the space environment and the desire of the scientific community to preserve the pristine nature of celestial bodies until they can be studied in detail. There are two types of interplanetary contamination. Forward contamination is the transfer of viable organisms from Earth to another celestial body. Back contamination is the transfer of extraterrestrial organisms, if they exist, back to the Earth's biosphere. History The potential problem of lunar and planetary contamination was first raised at the International Astronautical Federation VIIth Congress in Rome in 1956. In 1958 the U.S. National Academy of Sciences (NAS) passed a resolution stating, “The National Academy of Sciences of the United States of America urges that scientists plan lunar and planetary studies with great care and deep concern so that initial operations do not compromise and make impossible forever after critical scientific experiments.” This led to creation of the ad hoc Committee on Contamination by Extraterrestrial Exploration (CETEX), which met for a year and recommended that interplanetary spacecraft be sterilized, and stated, “The need for sterilization is only temporary. Mars and possibly Venus need to remain uncontaminated only until study by manned ships becomes possible”. In 1959, planetary protection was transferred to the newly formed Committee on Space Research (COSPAR). COSPAR in 1964 issued Resolution 26 affirming that: In 1967, the US, USSR, and UK ratified the United Nations Outer Space Treaty. The legal basis for planetary protection lies in Article IX of this treaty: This treaty has since been signed and ratified by 104 nation-states. Another 24 have signed but not ratified. All the current spa
https://en.wikipedia.org/wiki/Contracted%20Bianchi%20identities
In general relativity and tensor calculus, the contracted Bianchi identities are $\nabla_l {R^l}_m = \tfrac{1}{2} \nabla_m R$, where ${R^l}_m$ is the Ricci tensor, $R$ the scalar curvature, and $\nabla_l$ indicates covariant differentiation. These identities are named after Luigi Bianchi, although they had been already derived by Aurel Voss in 1880. In the Einstein field equations, the contracted Bianchi identity ensures consistency with the vanishing divergence of the matter stress–energy tensor. Proof Start with the Bianchi identity $R_{abmn;l} + R_{ablm;n} + R_{abnl;m} = 0$. Contract both sides of the above equation with a pair of metric tensors: $g^{bn} g^{am} \left( R_{abmn;l} + R_{ablm;n} + R_{abnl;m} \right) = 0$. The first term on the left contracts to yield a Ricci scalar, while the third term contracts to yield a mixed Ricci tensor, giving $R_{;l} - {R^n}_{l;n} - {R^m}_{l;m} = 0$. The last two terms are the same (changing dummy index n to m) and can be combined into a single term which shall be moved to the right, $R_{;l} = 2 {R^m}_{l;m}$, which is the same as ${R^m}_{l;m} = \tfrac{1}{2} R_{;l}$. Swapping the index labels l and m then yields ${R^l}_{m;l} = \tfrac{1}{2} R_{;m}$, which is the identity stated above. See also Bianchi identities Einstein tensor Einstein field equations General theory of relativity Ricci calculus Tensor calculus Riemann curvature tensor Notes
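For readability, the contraction argument above can be laid out as display equations; the index labels follow the inline reconstruction used here rather than any particular source, so treat this as an illustrative restatement.

```latex
% Bianchi identity and its double contraction (illustrative restatement)
\begin{align}
  R_{abmn;l} + R_{ablm;n} + R_{abnl;m} &= 0 \\
  g^{bn} g^{am}\left( R_{abmn;l} + R_{ablm;n} + R_{abnl;m} \right) &= 0 \\
  R_{;l} - {R^n}_{l;n} - {R^m}_{l;m} &= 0 \\
  R_{;l} &= 2\,{R^m}_{l;m} \\
  \nabla_l {R^l}_m &= \tfrac{1}{2}\,\nabla_m R \quad\text{(after relabelling the indices)}
\end{align}
```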
https://en.wikipedia.org/wiki/NetFPGA
The NetFPGA project is an effort to develop open-source hardware and software for rapid prototyping of computer network devices. The project targeted academic researchers, industry users, and students. It was not the first platform of its kind in the networking community. NetFPGA used an FPGA-based approach to prototyping networking devices. This allows users to develop designs that are able to process packets at line-rate, a capability generally unafforded by software based approaches. NetFPGA focused on supporting developers that can share and build on each other's projects and IP building blocks. History The project began in 2007 as a research project at Stanford University called the NetFPGA-1G. The 1G was originally designed as a tool to teach students about networking hardware architecture and design. The 1G platform consisted of a PCI board with a Xilinx Virtex-II pro FPGA and 4 x 1GigE interfaces feeding into it, along with a downloadable code repository containing an IP library and a few example designs. The project grew and by the end of 2010 more than 1,800 1G boards sold to over 150 educational institutions spanning 15 countries. During that growth the 1G not only gained popularity as a tool for education, but increasingly as a tool for research. By 2011 over 46 academic papers had been published regarding research that used the NetFPGA-1G platform. Additionally, over 40 projects were contributed to the 1G code repository by the end of 2010. In 2009 work began in secrecy on the NetFPGA-10G with 4 x 10 GigE interfaces. The 10G board was also designed with a much larger FPGA, more memory, and a number of other upgrades. The first release of the platform, codenamed “Howth”, was planned for December 24, 2010, and includes a repository similar to that of the 1G, containing a small IP library and two reference designs. From a platform design perspective, the 10G is diverging in a few significant ways from the 1G platform. For instance, the interface standa
https://en.wikipedia.org/wiki/Logic%20synthesis
In computer engineering, logic synthesis is a process by which an abstract specification of desired circuit behavior, typically at register transfer level (RTL), is turned into a design implementation in terms of logic gates, typically by a computer program called a synthesis tool. Common examples of this process include synthesis of designs specified in hardware description languages, including VHDL and Verilog. Some synthesis tools generate bitstreams for programmable logic devices such as PALs or FPGAs, while others target the creation of ASICs. Logic synthesis is one step in the electronic design automation circuit design flow; the other steps include place and route, and verification and validation. History The roots of logic synthesis can be traced to the treatment of logic by George Boole (1815 to 1864), in what is now termed Boolean algebra. In 1938, Claude Shannon showed that the two-valued Boolean algebra can describe the operation of switching circuits. In the early days, logic design involved manipulating the truth table representations as Karnaugh maps. The Karnaugh map-based minimization of logic is guided by a set of rules on how entries in the maps can be combined. A human designer can typically only work with Karnaugh maps containing up to four to six variables. The first step toward automation of logic minimization was the introduction of the Quine–McCluskey algorithm that could be implemented on a computer. This exact minimization technique presented the notion of prime implicants and minimum cost covers that would become the cornerstone of two-level minimization. Nowadays, the much more efficient Espresso heuristic logic minimizer has become the standard tool for this operation. Another area of early research was in state minimization and encoding of finite-state machines (FSMs), a task that was the bane of designers. The applications for logic synthesis lay primarily in digital computer design. Hence, IBM and Bell Labs played a pivotal role in the early
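As a small illustration of the two-level minimization mentioned above, the sketch below uses SymPy's SOPform, which applies a Quine–McCluskey-style reduction; the truth table is an arbitrary example, not one drawn from the article.

```python
# Two-level (sum-of-products) minimization of an arbitrary 4-variable truth table
# using SymPy's SOPform, which implements a Quine-McCluskey-style algorithm.
from sympy import symbols
from sympy.logic import SOPform

w, x, y, z = symbols('w x y z')

# Minterms (rows of the truth table where the output is 1), chosen arbitrarily.
minterms = [1, 3, 7, 11, 15]
# Don't-care conditions, also arbitrary.
dontcares = [0, 2, 5]

minimized = SOPform([w, x, y, z], minterms, dontcares)
print(minimized)  # prints a reduced sum-of-products expression with fewer literals
```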
https://en.wikipedia.org/wiki/Methyl%20green
Methyl green (CI 42585) is a cationic or positive charged stain related to Ethyl Green that has been used for staining DNA since the 19th century. It has been used for staining cell nuclei either as a part of the classical Unna-Pappenheim stain or as a nuclear counterstain ever since. In recent years, its fluorescent properties, when bound to DNA, have positioned it as useful for far-red imaging of live cell nuclei. Fluorescent DNA staining is routinely used in cancer prognosis. Methyl green also emerges as an alternative stain for DNA in agarose gels, fluorometric assays, and flow cytometry. It has also been shown that it can be used as an exclusion viability stain for cells. Its interaction with DNA has been shown to be non-intercalating, in other words, not inserting itself into the DNA, but instead electrostatic with the DNA major groove. It is used in combination with pyronin in the methyl green–pyronin stain, which stains and differentiates DNA and RNA. When excited at 244 or 388 nm in a neutral aqueous solution, methyl green produces a fluorescent emission at 488 or 633 nm, respectively. The presence or absence of DNA does not affect these fluorescence behaviors. When binding DNA under neutral aqueous conditions, methyl green also becomes fluorescent in the far red with an excitation maximum of 633 nm and an emission maximum of 677 nm. Commercial Methyl green preparations are often contaminated with Crystal violet. Crystal violet can be removed by chloroform extraction.
https://en.wikipedia.org/wiki/Programmer%20%28hardware%29
A programmer, device programmer, chip programmer, device burner, or PROM writer is a piece of electronic equipment that arranges written software or firmware to configure programmable non-volatile integrated circuits, called programmable devices. The target devices include PROM, EPROM, EEPROM, Flash memory, eMMC, MRAM, FeRAM, NVRAM, PLDs, PLAs, PALs, GALs, CPLDs, FPGAs, and microcontrollers. Function Programmer hardware has two variants. One is configuring the target device itself with a socket on the programmer. Another is configuring the device on a printed circuit board. In the former case, the target device is inserted into a socket (usually ZIF) on top of the programmer. If the device is not a standard DIP packaging, a plug-in adapter board, which converts the footprint with another socket, is used. In the latter case, device programmer is directly connected to the printed circuit board by a connector, usually with a cable. This way is called on-board programming, in-circuit programming, or in-system programming. Afterwards the data is transferred from the programmer into the device by applying signals through the connecting pins. Some devices have a serial interface for receiving the programming data (including JTAG interface). Other devices require the data on parallel pins, followed by a programming pulse with a higher voltage for programming the data into the device. Usually device programmers are connected to a personal computer through a parallel port, USB port, or LAN interface. A software program on the computer then transfers the data to the programmer, selects the device and interface type, and starts the programming process to read/ write/ erase/ blank the data inside the device. Types There are four general types of device programmers: Automated programmers (multi-programming sites, having a set of sockets) for mass production. These systems utilize robotic pick and place handlers with on-board sites. This allows for high volume and compl
https://en.wikipedia.org/wiki/Hydrodynamic%20delivery
Hydrodynamic Delivery (HD) is a method of DNA insertion in rodent models. Genes are delivered via injection into the bloodstream of the animal, and are expressed in the liver. This protocol is helpful to determine gene function, regulate gene expression, and develop pharmaceuticals in vivo. Methods Hydrodynamic Delivery was developed as a way to insert genes without viral infection (transfection). The procedure requires a high-volume DNA solution to be inserted into the veins of the rodent using a high-pressure needle. The volume of the DNA solution is typically equal to 8-10% of the animal's body weight, and is injected within 5-7 seconds. The pressure of the insertion leads to cardiac congestion (increased pressure in the heart), allowing the DNA solution to flow through the bloodstream and accumulate in the liver. The pressure expands the pores in the cell membrane, forcing the DNA molecules into the parenchyma, or the functional cells of the organ. In the liver, these cells are the hepatocytes. In less than two minutes after the injection, the pressure returns to natural levels, and the pores shrink back, trapping the DNA inside of the cell. After injection, the majority of genes are expressed in the liver of the animal over a long period of time. Originally developed to insert DNA, further developments in HD have enabled the insertion of RNA, proteins, and short oligonucleotides into cells. Applications The development of Hydrodynamic Delivery methods provides an alternative way to perform in vivo experiments. This method has been shown to be effective in small mammals, without the potential risks and complications of viral transfection. Applications of these studies include: testing regulatory elements, generating antibodies, analyzing gene therapy techniques, and developing models for diseases. Typically, genes are expressed in the liver, but the procedure can be altered to express genes in kidneys, lungs, muscles, heart, and pancreas. Gene therapy Hydrodynamic De
https://en.wikipedia.org/wiki/Kobon%20triangle%20problem
The Kobon triangle problem is an unsolved problem in combinatorial geometry first stated by Kobon Fujimura (1903-1983). The problem asks for the largest number N(k) of nonoverlapping triangles whose sides lie on an arrangement of k lines. Variations of the problem consider the projective plane rather than the Euclidean plane, and require that the triangles not be crossed by any other lines of the arrangement. Known upper bounds Saburo Tamura proved that the number of nonoverlapping triangles realizable by k lines is at most $\lfloor k(k-2)/3 \rfloor$. G. Clément and J. Bader proved more strongly that this bound cannot be achieved when k is congruent to 0 or 2 (mod 6). The maximum number of triangles is therefore at most one less in these cases. The same bounds can be equivalently stated, without use of the floor function, as: Solutions yielding this number of triangles are known when k is 3, 4, 5, 6, 7, 8, 9, 13, 15 or 17. For k = 10, 11 and 12, the best solutions known reach a number of triangles one less than the upper bound. Known constructions Given an optimal solution with k0 > 3 lines, other Kobon triangle solution numbers can be found for all ki-values where $k_{i+1} = 2k_i - 1$, by using the procedure by D. Forge and J. L. Ramirez Alfonsin. For example, the solution for k0 = 5 leads to the maximal number of nonoverlapping triangles for k = 5, 9, 17, 33, 65, .... Examples See also Roberts's triangle theorem, on the minimum number of triangles that lines can form
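A quick numerical companion to the bounds quoted above; the formula and the mod-6 correction come from the text, and the list of k values attaining the bound is the one given there.

```python
# Upper bound on the number of Kobon triangles for k lines, per the bounds quoted above:
# Tamura's bound floor(k(k-2)/3), reduced by 1 when k = 0 or 2 (mod 6) (Clement & Bader).
def kobon_upper_bound(k: int) -> int:
    bound = (k * (k - 2)) // 3
    if k % 6 in (0, 2):
        bound -= 1
    return bound

# k values for which solutions matching the bound are reported in the text.
attained = {3, 4, 5, 6, 7, 8, 9, 13, 15, 17}

for k in range(3, 18):
    note = "attained" if k in attained else ""
    print(k, kobon_upper_bound(k), note)
```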
https://en.wikipedia.org/wiki/Making%20Mathematics%20with%20Needlework
Making Mathematics with Needlework: Ten Papers and Ten Projects is an edited volume on mathematics and fiber arts. It was edited by Sarah-Marie Belcastro and Carolyn Yackel, and published in 2008 by A K Peters, based on a meeting held in 2005 in Atlanta by the American Mathematical Society. Topics The book includes ten different mathematical fiber arts projects, by eight contributors. An introduction provides a history of the connections between mathematics, mathematics education, and the fiber arts. Each of its ten project chapters is illustrated by many color photographs and diagrams, and is organized into four sections: an overview of the project, a section on the mathematics connected to it, a section of ideas for using the project as a teaching activity, and directions for constructing the project. Although there are some connections between topics, they can be read independently of each other, in any order. The thesis of the book is that directed exercises in fiber arts construction can help teach both mathematical visualization and concepts from three-dimensional geometry. The book uses knitting, crochet, sewing, and cross-stitch, but deliberately avoids weaving as a topic already well-covered in mathematical fiber arts publications. Projects in the book include a quilt in the form of a Möbius strip, a "bidirectional hat" connected to the theory of Diophantine equations, a shawl with a fractal design, a knitted torus connecting to discrete approximations of curvature, a sampler demonstrating different forms of symmetry in wallpaper group, "algebraic socks" with connections to modular arithmetic and the Klein four-group, a one-sided purse sewn together following a description by Lewis Carroll, a demonstration of braid groups on a cable-knit pillow, an embroidered graph drawing of an Eulerian graph, and topological pants. Beyond belcastro and Yackel, the contributors to the book include Susan Goldstine, Joshua Holden, Lana Holden, Mary D. Shepherd, Amy F. Sz
https://en.wikipedia.org/wiki/HCMOS
HCMOS ("high-speed CMOS") is the set of specifications for electrical ratings and characteristics, forming the 74HC00 family, a part of the 7400 series of integrated circuits. The 74HC00 family followed, and improved upon, the 74C00 series (which provided an alternative CMOS logic family to the 4000 series but retained the part number scheme and pinouts of the standard 7400 series (especially the 74LS00 series)) . Some specifications include: DC supply voltage DC input voltage range DC output voltage range input rise and fall times output rise and fall times HCMOS also stands for high-density CMOS. The term was used to describe microprocessors, and other complex integrated circuits, which use a smaller manufacturing processes, producing more transistors per area. The Freescale 68HC11 is an example of a popular HCMOS microcontroller. Variations HCT stands for high-speed CMOS with transistor–transistor logic voltages. These devices are similar to the HCMOS types except they will operate at standard TTL power supply voltages and logic input levels. This allows for direct pin-to-pin compatible CMOS replacements to reduce power consumption without loss of speed. HCU stands for high-speed CMOS un-buffered. This type of CMOS contains no buffer and is ideal for crystals and other ceramic oscillators needing linearity. VHCMOS, or AHC, stands for very high-speed CMOS or advanced high-speed CMOS. Typical propagation delay time is between 3 ns and 4 ns. The speed is similar to Bipolar Schottky transistor TTL. AHCT stands for advanced high-speed CMOS with TTL inputs. Typical propagation delay time is between 5 ns and 6 ns.
https://en.wikipedia.org/wiki/Universal%20dielectric%20response
In physics and electrical engineering, the universal dielectric response, or UDR, refers to the observed emergent behaviour of the dielectric properties exhibited by diverse solid state systems. In particular this widely observed response involves power law scaling of dielectric properties with frequency under conditions of alternating current, AC. First defined in a landmark article by A. K. Jonscher in Nature published in 1977, the origins of the UDR were attributed to the dominance of many-body interactions in systems, and their analogous RC network equivalence. The universal dielectric response manifests in the variation of AC Conductivity with frequency and is most often observed in complex systems consisting of multiple phases of similar or dissimilar materials. Such systems, which can be called heterogenous or composite materials, can be described from a dielectric perspective as a large network consisting of resistor and capacitor elements, known also as an RC network. At low and high frequencies, the dielectric response of heterogeneous materials is governed by percolation pathways. If a heterogeneous material is represented by a network in which more than 50% of the elements are capacitors, percolation through capacitor elements will occur. This percolation results in conductivity at high and low frequencies that is directly proportional to frequency. Conversely, if the fraction of capacitor elements in the representative RC network (Pc) is lower than 0.5, dielectric behavior at low and high frequency regimes is independent of frequency. At intermediate frequencies, a very broad range of heterogeneous materials show a well-defined emergent region, in which power law correlation of admittance to frequency is observed. The power law emergent region is the key feature of the UDR. In materials or systems exhibiting UDR, the overall dielectric response from high to low frequencies is symmetrical, being centered at the middle point of the emergent region, whic
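The power-law scaling described above is often summarized by Jonscher's empirical form sigma(omega) = sigma_dc + A*omega^n with 0 < n < 1; the sketch below simply evaluates that form with made-up parameters to show the low-frequency plateau and the emergent power-law region.

```python
# Illustration of a Jonscher-type "universal" AC conductivity curve,
# sigma(omega) = sigma_dc + A * omega**n, with arbitrary example parameters.
sigma_dc = 1e-6   # DC (low-frequency plateau) conductivity, arbitrary units
A = 1e-9          # prefactor of the dispersive term, arbitrary
n = 0.7           # power-law exponent, typically 0 < n < 1

for exponent in range(0, 9):          # frequencies from 1 to 1e8 rad/s
    omega = 10.0 ** exponent
    sigma = sigma_dc + A * omega ** n
    print(f"omega = 1e{exponent} rad/s  sigma = {sigma:.3e}")
# At low omega the plateau (sigma_dc) dominates; at higher omega the omega**n term
# takes over, which appears as a straight line on a log-log plot.
```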
https://en.wikipedia.org/wiki/Kill%20pill
In computing, a kill pill is a mechanism or a technology designed to render systems useless either by user command, or under a predefined set of circumstances. Kill pill technology is most commonly used to disable lost or stolen devices for security purposes, but can also be used for the enforcement of rules and contractual obligations. Applications Lost and stolen devices Kill pill technology is used prominently in smartphones, especially in the disablement of lost or stolen devices. A notable example is Find My iPhone, a service that allows the user to password protect or wipe their iDevice(s) remotely, aiding in the protection of private data. Similar applications exist for other smartphone operating systems, including Android, BlackBerry, and Windows Phone. Anti-piracy measure Kill pill technology has been notably used as an anti-piracy measure. Windows Vista was released with the ability to severely limit its own functionality if it was determined that the copy was obtained through piracy. The feature was later dropped after complaints that false positives caused genuine copies of Vista to act as though they were pirated. Removal of malicious software The concept of a kill pill is also applied to the remote removal by a server of malicious files or applications from a client's system. Such technology is a standard component of most handheld computing devices, mainly due to their generally more limited operating systems and means of obtaining applications. Such functionality is also reportedly available to applications downloaded from the Windows Store on Windows 8 operating systems. Vehicles Kill pill technology is used frequently in vehicles for a variety of reasons. Remote vehicle disablement can be used to prevent a vehicle from starting, to prevent it from moving, and to prevent the vehicle's continued operation. Non-remotely, vehicles can require driver recognition before starting or moving, such as asking for a password or some form of biometrics fro
https://en.wikipedia.org/wiki/Ethnomathematics
In mathematics education, ethnomathematics is the study of the relationship between mathematics and culture. Often associated with "cultures without written expression", it may also be defined as "the mathematics which is practised among identifiable cultural groups". It refers to a broad cluster of ideas ranging from distinct numerical and mathematical systems to multicultural mathematics education. The goal of ethnomathematics is to contribute both to the understanding of culture and the understanding of mathematics, and mainly to lead to an appreciation of the connections between the two. Development and meaning The term "ethnomathematics" was introduced by the Brazilian educator and mathematician Ubiratan D'Ambrosio in 1977 during a presentation for the American Association for the Advancement of Science. Since D'Ambrosio put forth the term, people - D'Ambrosio included - have struggled with its meaning ("An etymological abuse leads me to use the words, respectively, ethno and mathema for their categories of analysis and tics from (from techne)".). The following is a sampling of some of the definitions of ethnomathematics proposed between 1985 and 2006: "The mathematics which is practiced among identifiable cultural groups such as national-tribe societies, labour groups, children of certain age brackets and professional classes". "The mathematics implicit in each practice". "The study of mathematical ideas of a non-literate culture". "The codification which allows a cultural group to describe, manage and understand reality". "Mathematics…is conceived as a cultural product which has developed as a result of various activities". "The study and presentation of mathematical ideas of traditional peoples". "Any form of cultural knowledge or social activity characteristic of a social group and/or cultural group that can be recognized by other groups such as Western anthropologists, but not necessarily by the group of origin, as mathematical knowledge or mathematica
https://en.wikipedia.org/wiki/Flexible%20organic%20light-emitting%20diode
A flexible organic light-emitting diode (FOLED) is a type of organic light-emitting diode (OLED) incorporating a flexible plastic substrate on which the electroluminescent organic semiconductor is deposited. This enables the device to be bent or rolled while still operating. Currently the focus of research in industrial and academic groups, flexible OLEDs form one method of fabricating a rollable display. Technical details and applications An OLED emits light due to the electroluminescence of thin films of organic semiconductors approximately 100 nm thick. Regular OLEDs are usually fabricated on a glass substrate, but by replacing glass with a flexible plastic such as polyethylene terephthalate (PET) among others, OLEDs can be made both bendable and lightweight. Such materials may not be suitable for comparable devices based on inorganic semiconductors due to the need for lattice matching and the high temperature fabrication procedure involved. In contrast, flexible OLED devices can be fabricated by deposition of the organic layer onto the substrate using a method derived from inkjet printing, allowing the inexpensive and roll-to-roll fabrication of printed electronics. Flexible OLEDs may be used in the production of rollable displays, electronic paper, or bendable displays which can be integrated into clothing, wallpaper or other curved surfaces. Prototype displays have been exhibited by companies such as Sony, which are capable of being rolled around the width of a pencil. Disadvantages Both flexible substrate itself as well as the process of bending the device introduce stress into the materials. There may be residual stress from the deposition of layers onto a flexible substrate, thermal stresses due to the different coefficient of thermal expansion of materials in the device, in addition to the external stress from the bending of the device. Stress introduced into the organic layers may lower the efficiency or brightness of the device as it is deformed
https://en.wikipedia.org/wiki/Embedded%20Java
Embedded Java refers to versions of the Java program language that are designed for embedded systems. Since 2010 embedded Java implementations have come closer to standard Java, and are now virtually identical to the Java Standard Edition. Since Java 9 customization of the Java Runtime through modularization removes the need for specialized Java profiles targeting embedded devices. History Although in the past some differences existed between embedded Java and traditional PC based Java, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly with no recompilation at all on design-to-cost mass-production devices (such as consumers, industrial, white goods, healthcare, metering, smart markets in general) CORE embedded Java API for a unified Embedded Java ecosystem In order for a software component to run on any Java system, it must target the core minimal API provided by the different providers of the embedded Java ecosystem. Companies share the same eight packages of pre-written programs. The packages (java.lang, java.io, java.util, ... ) form the CORE Embedded Java API, which means that embedded programmers using the Java language can use them in order to make any worthwhile use of the Java language. Old distinctions between SE embedded API and ME embedded API from ORACLE Java SE embedded is based on desktop Java Platform, Standard Edition. It is designed to be used on systems with at least 32 MB of RAM, and can work on Linux ARM, x86, or Power ISA, and Windows XP and Windows XP Embedded architectures. Java ME embedded used to be based on the Connected Device Configuration subset of Java Platform, Micro Edition. It is designed to be used on systems with at least 8 MB of RAM, and can work on Linux ARM, PowerPC, or MIPS architecture. See also Excelsio
https://en.wikipedia.org/wiki/Tutte%20homotopy%20theorem
In mathematics, the Tutte homotopy theorem, introduced by , generalises the concept of "path" from graphs to matroids, and states roughly that closed paths can be written as compositions of elementary closed paths, so that in some sense they are homotopic to the trivial closed path. Statement A matroid on a set Q is specified by a class of non-empty subsets M of Q, called circuits, such that no element of M contains another, and if X and Y are in M, a is in X and Y, b is in X but not in Y, then there is some Z in M containing b but not a and contained in X∪Y. The subsets of Q that are unions of circuits are called flats (this is the language used in Tutte's original paper, however in modern usage the flats of a matroid mean something different). The elements of M are called 0-flats, the minimal non-empty flats that are not 0-flats are called 1-flats, the minimal nonempty flats that are not 0-flats or 1-flats are called 2-flats, and so on. A path is a finite sequence of 0-flats such that any two consecutive elements of the path lie in some 1-flat. An elementary path is one of the form (X,Y,X), or (X,Y,Z,X) with X,Y,Z all lying in some 2-flat. Two paths P and Q such that the last 0-flat of P is the same as the first 0-flat of Q can be composed in the obvious way to give a path PQ. Two paths are called homotopic if one can be obtained from the other by the operations of adding or removing elementary paths inside a path, in other words changing a path PR to PQR or vice versa, where Q is elementary. A weak form of Tutte's homotopy theorem states that any closed path is homotopic to the trivial path. A stronger form states a similar result for paths not meeting certain "convex" subsets.
https://en.wikipedia.org/wiki/Comparison%20of%20vector%20algebra%20and%20geometric%20algebra
Geometric algebra is an extension of vector algebra, providing additional algebraic structures on vector spaces, with geometric interpretations. Vector algebra uses all dimensions and signatures, as does geometric algebra, notably 3+1 spacetime as well as 2 dimensions. Basic concepts and operations Geometric algebra (GA) is an extension or completion of vector algebra (VA). The reader is herein assumed to be familiar with the basic concepts and operations of VA and this article will mainly concern itself with operations in the GA of 3D space (nor is this article intended to be mathematically rigorous). In GA, vectors are not normally written boldface as the meaning is usually clear from the context. The fundamental difference is that GA provides a new product of vectors called the "geometric product". Elements of GA are graded multivectors: scalars are grade 0, usual vectors are grade 1, bivectors are grade 2 and the highest grade (3 in the 3D case) is traditionally called the pseudoscalar and designated $i$. The ungeneralized 3D vector form of the geometric product is $ab = a \cdot b + a \wedge b$, that is, the sum of the usual dot (inner) product and the outer (exterior) product (this last is closely related to the cross product and will be explained below). In VA, entities such as pseudovectors and pseudoscalars need to be bolted on, whereas in GA the equivalent bivector and pseudovector respectively exist naturally as subspaces of the algebra. For example, applying vector calculus in 2 dimensions, such as to compute torque or curl, requires adding an artificial 3rd dimension and extending the vector field to be constant in that dimension, or alternately considering these to be scalars. The torque or curl is then a normal vector field in this 3rd dimension. By contrast, geometric algebra in 2 dimensions defines these as a pseudoscalar field (a bivector), without requiring a 3rd dimension. Similarly, the scalar triple product is ad hoc, and can instead be expressed uniformly using the ex
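To make the geometric product concrete, here is a minimal, hand-rolled 2D example (basis 1, e1, e2, e12); it is an illustrative toy rather than a reference to any particular GA library, and it shows that the product of two vectors splits into a scalar (dot) part and a bivector (wedge) part.

```python
# A toy 2D geometric algebra: multivectors are tuples (s, a1, a2, b) meaning
# s + a1*e1 + a2*e2 + b*e12, with e1*e1 = e2*e2 = 1 and e12 = e1*e2.
def gp(x, y):
    s1, a1, a2, b1 = x
    s2, c1, c2, b2 = y
    return (
        s1*s2 + a1*c1 + a2*c2 - b1*b2,   # scalar part
        s1*c1 + a1*s2 - a2*b2 + b1*c2,   # e1 part
        s1*c2 + a2*s2 + a1*b2 - b1*c1,   # e2 part
        s1*b2 + b1*s2 + a1*c2 - a2*c1,   # e12 (bivector) part
    )

u = (0.0, 2.0, 1.0, 0.0)   # vector 2*e1 + 1*e2
v = (0.0, 1.0, 3.0, 0.0)   # vector 1*e1 + 3*e2

print(gp(u, v))  # (5.0, 0.0, 0.0, 5.0): dot product 5 plus bivector u wedge v = 5*e12
```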
https://en.wikipedia.org/wiki/Kepler%E2%80%93Bouwkamp%20constant
In plane geometry, the Kepler–Bouwkamp constant (or polygon inscribing constant) is obtained as a limit of the following sequence. Take a circle of radius 1. Inscribe a regular triangle in this circle. Inscribe a circle in this triangle. Inscribe a square in it. Inscribe a circle, regular pentagon, circle, regular hexagon and so forth. The radius of the limiting circle is called the Kepler–Bouwkamp constant; it equals the infinite product $\prod_{k \ge 3} \cos(\pi/k)$. It is named after Johannes Kepler and Christoffel Bouwkamp, and is the inverse of the polygon circumscribing constant. Numerical value The decimal expansion of the Kepler–Bouwkamp constant is 0.1149420448... The natural logarithm of the Kepler-Bouwkamp constant is given by where is the Riemann zeta function. If the product is taken over the odd primes, the constant is obtained .
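A direct numerical check of the construction described above: each inscribed regular n-gon shrinks the circle radius by a factor cos(pi/n), so the limiting radius is the infinite product of those factors. The cutoff N below is arbitrary; convergence is slow, so the partial product only agrees with the stated value to a few decimal places.

```python
# Partial product prod_{n=3..N} cos(pi/n), which converges to the Kepler-Bouwkamp
# constant as N grows (each inscribed n-gon scales the radius by cos(pi/n)).
import math

def kepler_bouwkamp(N: int) -> float:
    radius = 1.0
    for n in range(3, N + 1):
        radius *= math.cos(math.pi / n)
    return radius

for N in (10, 100, 10_000, 1_000_000):
    print(N, kepler_bouwkamp(N))
# The partial products decrease monotonically toward roughly 0.1149...,
# approaching the constant from above as more polygons are included.
```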
https://en.wikipedia.org/wiki/Gajski%E2%80%93Kuhn%20chart
The Gajski–Kuhn chart (or Y diagram) depicts the different perspectives in VLSI hardware design. Mostly, it is used for the development of integrated circuits. Daniel Gajski and Robert Kuhn developed it in 1983. In 1985, Robert Walker and Donald Thomas refined it. According to this model, the development of hardware is perceived within three domains that are depicted as three axes and produce a Y. Along these axes lie the abstraction levels, which describe the degree of abstraction. The outer shells are generalisations, the inner ones refinements of the same subject. The issue in hardware development is most often a top-down design problem. This is addressed within the three domains of behaviour, structure, and layout, proceeding top-down to more detailed abstraction levels. The designer can select one of the perspectives and then switch from one view to another. Generally, the design process does not follow a specific sequence in this diagram. On the system level, basic properties of an electronic system are determined. For the behavioural description, block diagrams are used by making abstractions of signals and their time response. Blocks used in the structure domain are CPUs, memory chips, etc. The algorithmic level is defined by the definition of concurrent algorithms (signals, loops, variables, assignments). In the structural domain, blocks like ALUs are in use. The register-transfer level (RTL) is a more detailed abstraction level on which the behaviour between communicating registers and logic units is described. Here, data structures and data flows are defined. In the geometric view, the design step of the floorplan is located. The logical level is described in the behaviour perspective by Boolean equations. In the structural view, this is displayed with gates and flip-flops. In the geometric domain, the logical level is described by standard cells. The behaviour of the circuit level is described by mathematics using differential equations or logical equa
https://en.wikipedia.org/wiki/Taphotaxon
A taphotaxon (from the Greek ταφος, taphos meaning burial and ταξις, taxis meaning ordering) is an invalid taxon based on fossil remains that have been altered in a characteristic way during burial and diagenesis. The fossils so altered have distinctive characteristics that make them appear to be a new taxon, but these characteristics are spurious and do not reflect any significant taxonomic distinction from an existing fossil taxon. The term was first proposed by Spencer G. Lucas in 2001, who particularly applied it to spurious ichnotaxons, but it has since been applied to body fossils such as Nuia (interpreted as cylindrical oncolites formed around filamentous cyanobacteria) or Ivanovia (thought to be a taphotaxon of Anchicondium or Eugonophyllum), conulariids, and crustaceans. In his original definition of the term, Lucas emphasized that he was not seeking to create a new field of taphotaxonomy. The term is intended simply as a useful description of a particular type of invalid taxon. It should not be used indiscriminately, particularly with ichnotaxons, where the fact that an ichnotaxon derives part of its morphology from taphonomic processes may not always render it an invalid ichnotaxon.
https://en.wikipedia.org/wiki/Legendre%20transformation
In mathematics, the Legendre transformation (or Legendre transform), first introduced by Adrien-Marie Legendre in 1787 when studying the minimal surface problem, is an involutive transformation on real-valued functions that are convex on a real variable. Specifically, if a real-valued multivariable function is convex on one of its independent real variables, then the Legendre transform with respect to this variable is applicable to the function. In physical problems, it is used to convert functions of one quantity (such as position, pressure, or temperature) into functions of the conjugate quantity (momentum, volume, and entropy, respectively). In this way, it is commonly used in classical mechanics to derive the Hamiltonian formalism out of the Lagrangian formalism (or vice versa) and in thermodynamics to derive the thermodynamic potentials, as well as in the solution of differential equations of several variables. For sufficiently smooth functions on the real line, the Legendre transform of a function $f$ can be specified, up to an additive constant, by the condition that the functions' first derivatives are inverse functions of each other. This can be expressed in Euler's derivative notation as $Df(\cdot) = \left(Df^*\right)^{-1}(\cdot)$, where $D$ is an operator of differentiation, $\cdot$ represents an argument or input to the associated function, $(\phi)^{-1}(\cdot)$ is an inverse function such that $(\phi)^{-1}(\phi(x)) = x$, or equivalently, as $f'(f^{*\prime}(x^*)) = x^*$ and $f^{*\prime}(f'(x)) = x$ in Lagrange's notation. The generalization of the Legendre transformation to affine spaces and non-convex functions is known as the convex conjugate (also called the Legendre–Fenchel transformation), which can be used to construct a function's convex hull. Definition Let $I \subset \mathbb{R}$ be an interval, and $f : I \to \mathbb{R}$ a convex function; then the Legendre transform of $f$ is the function $f^*$ defined by $f^*(x^*) = \sup_{x \in I}\left(x^* x - f(x)\right)$, where $\sup$ denotes the supremum over $I$, e.g., $x$ in $I$ is chosen such that $x^* x - f(x)$ is maximized at each $x^*$, or $x^*$ is such that $x^* x - f(x)$ as a bounded value throughout $x$ exists (e.g., when $f(x)$ is a linear function). The transform is always well-defined when $f(x)$ is convex. Th
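A small numerical sketch of the definition f*(x*) = sup_x (x*·x − f(x)): it evaluates the supremum on a grid for the convex example f(x) = x²/2, whose Legendre transform is again p²/2. The grid and example are chosen here only for illustration.

```python
# Numerical Legendre transform f*(p) = sup_x (p*x - f(x)), approximated on a grid.
import numpy as np

def legendre_transform(f, xs, ps):
    """Return f*(p) for each p in ps, taking the sup over the sample points xs."""
    X = xs[None, :]            # shape (1, len(xs))
    P = ps[:, None]            # shape (len(ps), 1)
    return np.max(P * X - f(X), axis=1)

xs = np.linspace(-10.0, 10.0, 20001)
ps = np.linspace(-3.0, 3.0, 7)

f = lambda x: 0.5 * x**2              # convex example function
fstar = legendre_transform(f, xs, ps)

for p, val in zip(ps, fstar):
    print(f"p = {p:+.1f}   f*(p) = {val:.4f}   p^2/2 = {0.5*p*p:.4f}")
# For f(x) = x^2/2 the transform has the same functional form, f*(p) = p^2/2,
# and the involutive property f** = f can be checked the same way.
```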
https://en.wikipedia.org/wiki/Lemniscate%20constant
In mathematics, the lemniscate constant is a transcendental mathematical constant that is the ratio of the perimeter of Bernoulli's lemniscate to its diameter, analogous to the definition of $\pi$ for the circle. Equivalently, the perimeter of the lemniscate $(x^2+y^2)^2 = x^2 - y^2$ is $2\varpi$. The lemniscate constant is closely related to the lemniscate elliptic functions and approximately equal to 2.62205755. The symbol $\varpi$ is a cursive variant of $\pi$; see Pi § Variant pi. Gauss's constant, denoted by G, is equal to $\varpi/\pi \approx 0.8346$. John Todd named two more lemniscate constants, the first lemniscate constant $A = \varpi/2$ and the second lemniscate constant $B = \pi/(2\varpi)$. Sometimes the quantities or are referred to as the lemniscate constant. History Gauss's constant is named after Carl Friedrich Gauss, who calculated it via the arithmetic–geometric mean as $G = 1/M(1,\sqrt{2})$. By 1799, Gauss had two proofs of the theorem that $M(1,\sqrt{2}) = \pi/\varpi$, where $\varpi$ is the lemniscate constant. The lemniscate constant and first lemniscate constant were proven transcendental by Theodor Schneider in 1937 and the second lemniscate constant and Gauss's constant were proven transcendental by Theodor Schneider in 1941. In 1975, Gregory Chudnovsky proved that the set is algebraically independent over , which implies that and are algebraically independent as well. But the set (where the prime denotes the derivative with respect to the second variable) is not algebraically independent over . In fact, Forms Usually, $\varpi$ is defined by the first equality below. where is the complete elliptic integral of the first kind with modulus , is the beta function, is the gamma function and is the Riemann zeta function. The lemniscate constant can also be computed by the arithmetic–geometric mean, $\varpi = \pi/M(1,\sqrt{2})$. Moreover, which is analogous to where is the Dirichlet beta function and is the Riemann zeta function. Gauss's constant is typically defined as the reciprocal of the arithmetic–geometric mean of 1 and the square root of 2, after his calculation of $M(1,\sqrt{2})$ published in 1800: Gauss's constant is equal t
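A numerical check of the arithmetic–geometric mean relation quoted above, M(1, √2) = π/ϖ: iterate the AGM and recover ϖ ≈ 2.62205755.

```python
# Compute the lemniscate constant from the AGM relation M(1, sqrt(2)) = pi / varpi.
import math

def agm(a: float, b: float) -> float:
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

M = agm(1.0, math.sqrt(2.0))
varpi = math.pi / M
print(M)       # about 1.19814023...
print(varpi)   # about 2.62205755..., matching the value quoted above
```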
https://en.wikipedia.org/wiki/Classification%20of%20discontinuities
Continuous functions are of utmost importance in mathematics, functions and applications. However, not all functions are continuous. If a function is not continuous at a point in its domain, one says that it has a discontinuity there. The set of all points of discontinuity of a function may be a discrete set, a dense set, or even the entire domain of the function. The oscillation of a function at a point quantifies these discontinuities as follows: in a removable discontinuity, the distance that the value of the function is off by is the oscillation; in a jump discontinuity, the size of the jump is the oscillation (assuming that the value at the point lies between these limits of the two sides); in an essential discontinuity, oscillation measures the failure of a limit to exist; the limit is constant. A special case is if the function diverges to infinity or minus infinity, in which case the oscillation is not defined (in the extended real numbers, this is a removable discontinuity). Classification For each of the following, consider a real-valued function $f$ of a real variable $x$, defined in a neighborhood of the point $x_0$ at which $f$ is discontinuous. Removable discontinuity Consider the piecewise function $f(x) = x^2$ for $x < 1$, $f(1) = 0$, and $f(x) = 2 - x$ for $x > 1$. The point $x_0 = 1$ is a removable discontinuity. For this kind of discontinuity: The one-sided limit from the negative direction, $L^- = \lim_{x \to x_0^-} f(x)$, and the one-sided limit from the positive direction, $L^+ = \lim_{x \to x_0^+} f(x)$, at $x_0$ both exist, are finite, and are equal to $L = L^- = L^+$. In other words, since the two one-sided limits exist and are equal, the limit $L$ of $f(x)$ as $x$ approaches $x_0$ exists and is equal to this same value. If the actual value of $f(x_0)$ is not equal to $L$, then $x_0$ is called a removable discontinuity. This discontinuity can be removed to make $f$ continuous at $x_0$, or more precisely, the function $g(x) = f(x)$ for $x \ne x_0$ with $g(x_0) = L$ is continuous at $x = x_0$. The term removable discontinuity is sometimes broadened to include a removable singularity, in which the limits in both directions exist and are equal, while the function is undefined at the point $x_0$. This use is an abuse of terminology b
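The one-sided limits for the piecewise example above can be checked symbolically; the snippet evaluates each branch's limit at x0 = 1 and compares with the defined value.

```python
# One-sided limits at x0 = 1 for the piecewise example: x**2 below, 2 - x above, f(1) = 0.
from sympy import symbols, limit

x = symbols('x')

left  = limit(x**2, x, 1, dir='-')   # limit of the branch used for x < 1
right = limit(2 - x, x, 1, dir='+')  # limit of the branch used for x > 1
value_at_point = 0                   # f(1) as defined in the example

print(left, right, value_at_point)   # 1 1 0
# Both one-sided limits exist, are finite and equal (L = 1), but differ from f(1) = 0,
# so x0 = 1 is a removable discontinuity; redefining f(1) = 1 removes it.
```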
https://en.wikipedia.org/wiki/Bioclimatology
Bioclimatology is the interdisciplinary field of science that studies the interactions between the biosphere and the Earth's atmosphere on time scales of the order of seasons or longer (in contrast to biometeorology). Examples of relevant processes Climate processes largely control the distribution, size, shape and properties of living organisms on Earth. For instance, the general circulation of the atmosphere on a planetary scale broadly determines the location of large deserts or the regions subject to frequent precipitation, which, in turn, greatly determine which organisms can naturally survive in these environments. Furthermore, changes in climates, whether due to natural processes or to human interferences, may progressively modify these habitats and cause overpopulation or extinction of indigenous species. The biosphere, for its part, and in particular continental vegetation, which constitutes over 99% of the total biomass, has played a critical role in establishing and maintaining the chemical composition of the Earth's atmosphere, especially during the early evolution of the planet (See History of Earth for more details on this topic). Currently, the terrestrial vegetation exchanges some 60 billion tons of carbon with the atmosphere on an annual basis (through processes of carbon fixation and carbon respiration), thereby playing a critical role in the carbon cycle. On a global and annual basis, small imbalances between these two major fluxes, as do occur through changes in land cover and land use, contribute to the current increase in atmospheric carbon dioxide.
https://en.wikipedia.org/wiki/Analog%20device
Analog devices are a combination of both analog machine and analog media that can together measure, record, reproduce, receive or broadcast continuous information, for example, the almost infinite number of grades of transparency, voltage, resistance, rotation, or pressure. In theory, the continuous information in an analog signal has an infinite number of possible values with the only limitation on resolution being the accuracy of the analog device. Analog media are materials with analog properties, such as photographic film, which are used in analog devices, such as cameras. Example devices Non-electrical There are notable non-electrical analog devices, such as some clocks (sundials, water clocks), the astrolabe, slide rules, the governor of a steam engine, the planimeter (a simple device that measures the surface area of a closed shape), Kelvin's mechanical tide predictor, acoustic rangefinders, servomechanisms (e.g. the thermostat), a simple mercury thermometer, a weighing scale, and the speedometer. Electrical The telautograph is an analogue precursor to the modern fax machine. It transmits electrical impulses recorded by potentiometers to stepping motors attached to a pen, thus being able to reproduce a drawing or signature made by the sender at the receiver's station. It was the first such device to transmit drawings to a stationary sheet of paper; previous inventions in Europe used rotating drums to make such transmissions. An analog synthesizer is a synthesizer that uses analog circuits and analog computer techniques to generate sound electronically. The analog television encodes television and transports the picture and sound information as an analog signal, that is, by varying the amplitude and/or frequencies of the broadcast signal. All systems preceding digital television, such as NTSC, PAL, and SECAM are analog television systems. An analog computer is a form of computer that uses electrical, mechanical, or hydraulic phenomena to model the probl
https://en.wikipedia.org/wiki/Bootstrapping%20%28electronics%29
In the field of electronics, a technique where part of the output of a system is used at startup can be described as bootstrapping. A bootstrap circuit is one where part of the output of an amplifier stage is applied to the input, so as to alter the input impedance of the amplifier. When applied deliberately, the intention is usually to increase rather than decrease the impedance. In the domain of MOSFET circuits, bootstrapping is commonly used to mean pulling up the operating point of a transistor above the power supply rail. The same term has been used somewhat more generally for dynamically altering the operating point of an operational amplifier (by shifting both its positive and negative supply rail) in order to increase its output voltage swing (relative to the ground). In the sense used in this paragraph, bootstrapping an operational amplifier means "using a signal to drive the reference point of the op-amp's power supplies". A more sophisticated use of this rail bootstrapping technique is to alter the non-linear C/V characteristic of the inputs of a JFET op-amp in order to decrease its distortion. Input impedance In analog circuit designs, a bootstrap circuit is an arrangement of components deliberately intended to alter the input impedance of a circuit. Usually it is intended to increase the impedance, by using a small amount of positive feedback, usually over two stages. This was often necessary in the early days of bipolar transistors, which inherently have quite a low input impedance. Because the feedback is positive, such circuits can suffer from poor stability and noise performance compared to ones that don't bootstrap. Negative feedback may alternatively be used to bootstrap an input impedance, causing the apparent impedance to be reduced. This is seldom done deliberately, however, and is normally an unwanted result of a particular circuit design. A well-known example of this is the Miller effect, in which an unavoidable feedback capacitance
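As a numerical illustration of why bootstrapping raises input impedance: if the far end of an impedance Z follows the input signal through a stage of gain A (A close to 1), the apparent impedance seen by the source becomes Z/(1 − A). The values below are arbitrary, and the model is an idealized sketch rather than a full circuit analysis.

```python
# Apparent input impedance of a bootstrapped resistor: Z_apparent = Z / (1 - A),
# where A is the gain with which the far end of Z follows the input signal.
def bootstrapped_impedance(z_ohms: float, follower_gain: float) -> float:
    return z_ohms / (1.0 - follower_gain)

Z = 100e3  # a 100 kilo-ohm bias resistor, arbitrary example value
for A in (0.0, 0.9, 0.99, 0.999):
    print(f"gain A = {A:<5}  apparent impedance = {bootstrapped_impedance(Z, A) / 1e6:8.2f} Mohm")
# As A approaches 1 the current drawn through Z vanishes, so the apparent impedance
# grows without bound; an inverting (negative) gain would instead reduce it,
# which is the Miller effect mentioned above.
```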
https://en.wikipedia.org/wiki/Atwater%20system
The Atwater system, named after Wilbur Olin Atwater, or derivatives of this system are used for the calculation of the available energy of foods. The system was developed largely from the experimental studies of Atwater and his colleagues in the later part of the 19th century and the early years of the 20th at Wesleyan University in Middletown, Connecticut. Its use has frequently been the cause of dispute, but few alternatives have been proposed. As with the calculation of protein from total nitrogen, the Atwater system is a convention and its limitations can be seen in its derivation. Derivation Available energy (as used by Atwater) is equivalent to the modern usage of the term metabolisable energy (ME). In most studies on humans, losses in secretions and gases are ignored. The gross energy (GE) of a food, as measured by bomb calorimetry is equal to the sum of the heats of combustion of the components – protein (GEp), fat (GEf) and carbohydrate (GEcho) (by difference) in the proximate system. Atwater considered the energy value of feces in the same way. By measuring coefficients of availability or in modern terminology apparent digestibility, Atwater derived a system for calculating faecal energy losses: Digestible energy = GEp×Dp + GEf×Df + GEcho×Dcho, where Dp, Df, and Dcho are respectively the digestibility coefficients of protein, fat and carbohydrate, calculated as (intake − faecal excretion)/intake for the constituent in question. Urinary losses were calculated from the energy to nitrogen ratio in urine. Experimentally this was 7.9 kcal/g (33 kJ/g) urinary nitrogen and thus his equation for metabolisable energy became ME = GEp×Dp + GEf×Df + GEcho×Dcho − 7.9 × (urinary nitrogen). Gross energy values Atwater collected values from the literature and also measured the heat of combustion of proteins, fats and carbohydrates. These vary slightly depending on sources and Atwater derived weighted values for the gross heat of combustion of the protein, fat and carbohydrate in the typical mixed diet of his time. It has been argued that these weighted values are invalid for individual foods and for diets who
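For orientation, the snippet below applies the rounded general Atwater factors in common use today (roughly 4, 9 and 4 kcal/g for protein, fat and carbohydrate); these rounded factors are the familiar modern convention rather than figures quoted in the text above, and the example composition is made up.

```python
# Energy of a food using the rounded general Atwater factors (kcal per gram):
# protein ~4, fat ~9, carbohydrate ~4. The serving composition is an arbitrary example.
ATWATER_KCAL_PER_G = {"protein": 4.0, "fat": 9.0, "carbohydrate": 4.0}
KJ_PER_KCAL = 4.184

def food_energy_kcal(grams: dict) -> float:
    return sum(ATWATER_KCAL_PER_G[k] * g for k, g in grams.items())

serving = {"protein": 10.0, "fat": 5.0, "carbohydrate": 30.0}  # grams
kcal = food_energy_kcal(serving)
print(f"{kcal:.0f} kcal  ({kcal * KJ_PER_KCAL:.0f} kJ)")  # 205 kcal (~858 kJ)
```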
https://en.wikipedia.org/wiki/Actel%20SmartFusion
SmartFusion is a family of microcontrollers with an integrated FPGA from Actel. The device includes an ARM Cortex-M3 hard processor core (with up to 512kB of flash and 64kB of RAM) and analog peripherals such as a multi-channel ADC and DACs in addition to their flash-based FPGA fabric. Models Development Hardware Actel also sells two development boards that include a SmartFusion chip. One is the SmartFusion Evaluation Kit, which is a low cost board with a SmartFusion A2F200, sold for $99. Another is the SmartFusion Development Kit, which is a fully featured board with a SmartFusion A2F500, sold for $999. Development tools Documentation The amount of documentation for all ARM chips is daunting, especially for newcomers. The documentation for microcontrollers from past decades would easily fit in a single document, but as chips have evolved so has the documentation grown. The total documentation is especially hard to grasp for all ARM chips since it consists of documents from the IC manufacturer (Actel) and documents from the CPU core vendor (ARM Holdings). A typical top-down documentation tree is: manufacturer website, manufacturer marketing slides, manufacturer datasheet for the exact physical chip, manufacturer detailed reference manual that describes common peripherals and aspects of a physical chip family, ARM core generic user guide, ARM core technical reference manual, ARM architecture reference manual that describes the instruction set(s). SmartFusion documentation tree (top to bottom) SmartFusion website. SmartFusion marketing slides. SmartFusion datasheets. SmartFusion reference manuals. ARM core website. ARM core generic user guide. ARM core technical reference manual. ARM architecture reference manual. Actel has additional documents, such as: evaluation board user manuals, application notes, getting started guides, software library documents, errata, and more. See External Links section for links to official STM32 and ARM d
https://en.wikipedia.org/wiki/Traffic%20flow%20%28computer%20networking%29
In packet switching networks, traffic flow, packet flow or network flow is a sequence of packets from a source computer to a destination, which may be another host, a multicast group, or a broadcast domain. RFC 2722 defines traffic flow as "an artificial logical equivalent to a call or connection." RFC 3697 defines traffic flow as "a sequence of packets sent from a particular source to a particular unicast, anycast, or multicast destination that the source desires to label as a flow. A flow could consist of all packets in a specific transport connection or a media stream. However, a flow is not necessarily 1:1 mapped to a transport connection." Flow is also defined in RFC 3917 as "a set of IP packets passing an observation point in the network during a certain time interval." Packet flow temporal efficiency can be affected by one-way delay (OWD) that is described as a combination of the following components: Processing delay (the time taken to process a packet in a network node) Queuing delay (the time a packet waits in a queue until it can be transmitted) Transmission delay (the amount of time necessary to push all the packet into the wire) Propagation delay (amount of time it takes the signal’s header to travel from the sender to the receiver) Utility for network administration Packets from one flow need to be handled differently from others, by means of separate queues in switches, routers and network adapters, to achieve traffic shaping, policing, fair queueing or quality of service. It is also a concept used in Queueing Network Analyzers (QNAs) or in packet tracing. Applied to Internet routers, a flow may be a host-to-host communication path, or a socket-to-socket communication identified by a unique combination of source and destination addresses and port numbers, together with transport protocol (for example, UDP or TCP). In the TCP case, a flow may be a virtual circuit, also known as a virtual connection or a byte stream. In packet switches, the fl
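A tiny sketch of the flow identification described above: packets are grouped by the usual five-tuple (source and destination address, source and destination port, transport protocol), and the one-way delay is the sum of the four components listed. The field names used here are illustrative, not a reference to any specific tool.

```python
# Group packets into flows by the five-tuple and sum one-way delay components.
from collections import defaultdict

def flow_key(pkt: dict):
    """Five-tuple used to identify a flow (illustrative field names)."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"], pkt["proto"])

def one_way_delay(processing, queuing, transmission, propagation):
    """One-way delay as the sum of its four components (all in seconds)."""
    return processing + queuing + transmission + propagation

packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 40000, "dst_port": 80, "proto": "TCP"},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 40000, "dst_port": 80, "proto": "TCP"},
    {"src_ip": "10.0.0.3", "dst_ip": "10.0.0.2", "src_port": 53000, "dst_port": 53, "proto": "UDP"},
]

flows = defaultdict(list)
for p in packets:
    flows[flow_key(p)].append(p)

print(len(flows), "flows")                                    # 2 flows
print(one_way_delay(50e-6, 200e-6, 120e-6, 5e-3), "seconds")  # example delay budget
```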
https://en.wikipedia.org/wiki/Avionics
Avionics (a blend of aviation and electronics) are the electronic systems used on aircraft. Avionic systems include communications, navigation, the display and management of multiple systems, and the hundreds of systems that are fitted to aircraft to perform individual functions. These can be as simple as a searchlight for a police helicopter or as complicated as the tactical system for an airborne early warning platform. History The term "avionics" was coined in 1949 by Philip J. Klass, senior editor at Aviation Week & Space Technology magazine as a portmanteau of "aviation electronics". Radio communication was first used in aircraft just prior to World War I. The first airborne radios were in zeppelins, but the military sparked development of light radio sets that could be carried by heavier-than-air craft, so that aerial reconnaissance biplanes could report their observations immediately in case they were shot down. The first experimental radio transmission from an airplane was conducted by the U.S. Navy in August 1910. The first aircraft radios transmitted by radiotelegraphy, so they required two-seat aircraft with a second crewman to tap on a telegraph key to spell out messages by Morse code. During World War I, AM voice two way radio sets were made possible in 1917 by the development of the triode vacuum tube, which were simple enough that the pilot in a single seat aircraft could use it while flying. Radar, the central technology used today in aircraft navigation and air traffic control, was developed by several nations, mainly in secret, as an air defense system in the 1930s during the runup to World War II. Many modern avionics have their origins in World War II wartime developments. For example, autopilot systems that are commonplace today began as specialized systems to help bomber planes fly steadily enough to hit precision targets from high altitudes. Britain's 1940 decision to share its radar technology with its U.S. ally, particularly the magnet
https://en.wikipedia.org/wiki/Abacus
The abacus (plural: abaci or abacuses), also called a counting frame, is a hand-operated calculating tool of unknown origin used since ancient times in the ancient Near East, Europe, China, and Russia, millennia before the adoption of the Hindu-Arabic numeral system. The abacus consists of a two-dimensional array of slidable beads (or similar objects). In their earliest designs, the beads could be loose on a flat surface or sliding in grooves. Later the beads were made to slide on rods and built into a frame, allowing faster manipulation. Each rod typically represents one digit of a multi-digit number laid out using a positional numeral system such as base ten (though some cultures used different numerical bases). Roman and East Asian abacuses use a system resembling bi-quinary coded decimal, with a top deck (containing one or two beads) representing fives and a bottom deck (containing four or five beads) representing ones. Natural numbers are normally used, but some allow simple fractional components (e.g. , , and in Roman abacus), and a decimal point can be imagined for fixed-point arithmetic. Any particular abacus design supports multiple methods to perform calculations, including addition, subtraction, multiplication, division, and square and cube roots. The beads are first arranged to represent a number, then are manipulated to perform a mathematical operation with another number, and their final position can be read as the result (or can be used as the starting number for subsequent operations). In the ancient world, abacuses were a practical calculating tool. Although calculators and computers are commonly used today instead of abacuses, abacuses remain in everyday use in some countries. The abacus has an advantage of not requiring a writing implement and paper (needed for algorism) or an electric power source. Merchants, traders, and clerks in some parts of Eastern Europe, Russia, China, and Africa use abacuses. The abacus remains in common use as a scoring s
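The bi-quinary layout described above (fives on the top deck, ones on the bottom deck) is easy to mimic: each decimal digit d is shown as d // 5 upper beads and d % 5 lower beads. The rendering below is just a toy illustration.

```python
# Represent a number the way a bi-quinary abacus does: per decimal digit,
# count of 5-beads (top deck) and 1-beads (bottom deck).
def abacus_digits(n: int):
    return [(int(d) // 5, int(d) % 5) for d in str(n)]

for number in (7, 1946, 208):
    print(number, abacus_digits(number))
# 7    -> [(1, 2)]                 one five-bead and two one-beads
# 1946 -> [(0, 1), (1, 4), (0, 4), (1, 1)]
# 208  -> [(0, 2), (0, 0), (1, 3)]
```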
https://en.wikipedia.org/wiki/Univariate
In mathematics, a univariate object is an expression, equation, function or polynomial involving only one variable. Objects involving more than one variable are multivariate. In some cases the distinction between the univariate and multivariate cases is fundamental; for example, the fundamental theorem of algebra and Euclid's algorithm for polynomials are fundamental properties of univariate polynomials that cannot be generalized to multivariate polynomials. In statistics, a univariate distribution characterizes one variable, although it can be applied in other ways as well. For example, univariate data are composed of a single scalar component. In time series analysis, the whole time series is the "variable": a univariate time series is the series of values over time of a single quantity. Correspondingly, a "multivariate time series" characterizes the changing values over time of several quantities. In some cases, the terminology is ambiguous, since the values within a univariate time series may be treated using certain types of multivariate statistical analyses and may be represented using multivariate distributions. In addition to the question of scaling, a criterion (variable) in univariate statistics can be described by two important measures (also key figures or parameters): Location & Variation. Measures of Location Scales (e.g. mode, median, arithmetic mean) describe in which area the data is arranged centrally. Measures of Variation (e.g. span, interquartile distance, standard deviation) describe how similar or different the data are scattered. See also Arity Bivariate (disambiguation) Multivariate (disambiguation) Univariate analysis Univariate binary model Univariate distribution
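The location and variation measures listed above are all one-liners with the standard library; the sample data is arbitrary.

```python
# Measures of location (mode, median, mean) and variation (range, IQR, standard deviation)
# for a univariate sample, using only the standard library.
import statistics as st

data = [2, 4, 4, 5, 7, 9, 9, 9, 12]   # arbitrary univariate sample

q1, q2, q3 = st.quantiles(data, n=4)  # quartiles; q3 - q1 is the interquartile distance
print("mode   :", st.mode(data))
print("median :", st.median(data))
print("mean   :", st.mean(data))
print("range  :", max(data) - min(data))
print("IQR    :", q3 - q1)
print("stdev  :", st.stdev(data))
```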
https://en.wikipedia.org/wiki/Instinet
Instinet Incorporated is an institutional, agency-model broker that also serves as the independent equity trading arm of its parent, Nomura Group. It executes trades for asset management firms, hedge funds, insurance companies, mutual funds and pension funds. Headquartered in New York City, the company provides sales trading services and trading technologies such as the Newport EMS, algorithms, trade cost analytics, commission management, independent research and dark pools. However, Instinet is best known for being the first off-exchange trading alternative, with its "green screen" terminals prevalent in the 1980s and 1990s, and as the founder of the electronic communication networks Chi-X Europe and Chi-X Global. According to the industry research group Markit, in 2015 Instinet was the third-largest cash equities broker in Europe. History Early history Instinet was founded by Jerome M. Pustilnik and Herbert R. Behrens and was incorporated in 1969 as Institutional Networks Corp. The founders aimed to compete with the New York Stock Exchange by means of computer links between major institutions, such as banks, mutual funds, and insurance companies, with no delays or intervening specialists. Through the Instinet system, which went live in December 1969, the company provided computer services and a communications network for the automated buying and selling of equity securities on an anonymous, confidential basis. Uptake of the platform was slow through the 1970s, and in 1983 Instinet turned to William A. "Bill" Lupien, a former Pacific Stock Exchange specialist, to run the company. Lupien decided to market the system more aggressively to the broker community, rather than focus exclusively on the buyside as his predecessors had. To expand its market, Lupien brought on board Fredric W. Rittereiser, formerly of Troster Singer and the Sherwood Group, as President and Chief Operating Officer, and David N. Rosensaft as Vice President (later SVP) of New Products Developme
https://en.wikipedia.org/wiki/CMS-2
CMS-2 is an embedded systems programming language used by the United States Navy. It was an early attempt to develop a standardized high-level computer programming language intended to improve code portability and reusability. CMS-2 was developed primarily for the US Navy’s tactical data systems (NTDS). CMS-2 was developed by RAND Corporation in the early 1970s and stands for "Compiler Monitor System". The name "CMS-2" is followed in literature by a letter designating the type of target system. For example, CMS-2M targets Navy 16-bit processors, such as the AN/AYK-14. History CMS-2 was developed for FCPCPAC (Fleet Computer Programming Center - Pacific) in San Diego, CA. It was implemented by Computer Sciences Corporation in 1968 with design assistance from Intermetrics. The language continued to be developed, eventually supporting a number of computers including the AN/UYK-7 and AN/UYK-43 and UYK-20 and UYK-44 computers. Language features CMS-2 was designed to encourage program modularization, permitting independent compilation of portions of a total system. The language is statement oriented. The source is free-form and may be arranged for programming convenience. Data types include fixed-point, floating-point, boolean, character and status. Direct reference to, and manipulation of character and bit strings is permitted. Symbolic machine code may be included, known as direct code. Program structure A CMS-2 program is composed of statements. Statements are made up of symbols separated by delimiters. The categories of symbols include operators, identifiers, and constants. The operators are language primitives assigned by the compiler for specific operations or definitions in a program. Identifiers are unique names assigned by the programmer to data units, program elements and statement labels. Constants are known values that may be numeric, Hollerith strings, status values or Boolean. CMS-2 statements are free form and terminated by a dollar sign. A statemen
https://en.wikipedia.org/wiki/Liana
A liana is a long-stemmed, woody vine that is rooted in the soil at ground level and uses trees, as well as other means of vertical support, to climb up to the canopy in search of direct sunlight. The word liana does not refer to a taxonomic grouping, but rather a habit of plant growth – much like tree or shrub. It comes from standard French liane, itself from an Antilles French dialect word meaning to sheave. Ecology Lianas are characteristic of tropical moist broadleaf forests (especially seasonal forests), but may be found in temperate rainforests and temperate deciduous forests. There are also temperate lianas, for example the members of the Clematis or Vitis (wild grape) genera. Lianas can form bridges amidst the forest canopy, providing arboreal animals with paths across the forest. These bridges can protect weaker trees from strong winds. Lianas compete with forest trees for sunlight, water and nutrients from the soil. Forests without lianas grow 150% more fruit; trees with lianas have twice the probability of dying. Some lianas attain to great length, such as Bauhinia sp. in Surinam which has grown as long as 600 meters. Hawkins has accepted a length of 1.5 km for an Entada phaseoloides. The longest monocot liana is Calamus manan (or Calamus ornatus) at exactly 240 meters. Lianas may be found in many different plant families. One way of distinguishing lianas from trees and shrubs is based on the stiffness, specifically, the Young's modulus of various parts of the stem. Trees and shrubs have young twigs and smaller branches which are quite flexible and older growth such as trunks and large branches which are stiffer. A liana often has stiff young growths and older, more flexible growth at the base of the stem. Habitat Lianas compete intensely with trees, greatly reducing tree growth and tree reproduction, greatly increasing tree mortality, preventing tree seedlings from establishing, altering the course of regeneration in forests, and ultimately affecti
https://en.wikipedia.org/wiki/UPC%20and%20NPC
Usage Parameter Control (UPC) and Network Parameter Control (NPC) are functions that may be performed in a computer network. UPC may be performed at the input to a network "to protect network resources from malicious as well as unintentional misbehaviour". NPC is the same and done for the same reasons as UPC, but at the interface between two networks. UPC and NPC may involve traffic shaping, where traffic is delayed until it conforms to the expected levels and timing, or traffic policing, where non-conforming traffic is either discarded immediately, or reduced in priority so that it may be discarded downstream in the network if it would cause or add to congestion. Uses In ATM The actions for UPC and NPC in the ATM protocol are defined in ITU-T Recommendation I.371 Traffic control and congestion control in B ISDN and the ATM Forum's User-Network Interface (UNI) Specification. These provide a conformance definition, using a form of the leaky bucket algorithm called the Generic Cell Rate Algorithm (GCRA), which specifies how cells are checked for conformance with a cell rate, or its reciprocal emission interval, and jitter tolerance: either a Cell Delay Variation tolerance (CDVt) for testing conformance to the Peak Cell Rate (PCR) or a Burst Tolerance or Maximum Burst Size (MBS) for testing conformance to the Sustainable Cell Rate (SCR). UPC and NPC define a Maximum Burst Size (MBS) parameter on the average or Sustained Cell Rate (SCR), and a Cell Delay Variation tolerance (CDVt) on the Peak Cell Rate (PCR) at which the bursts are transmitted. This MBS can be derived from or used to derive the maximum variation between the arrival time of traffic in the bursts from the time it would arrive at the SCR, i.e. a jitter about that SCR. UPC and NPC are normally performed on a per Virtual Channel (VC) or per Virtual Path (VP) basis, i.e. the intervals are measured between cells bearing the same virtual channel identifier (VCI) and or virtual path identifier (VPI). I
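The conformance test itself is compact; the sketch below shows the virtual-scheduling form of the GCRA (leaky bucket) in Python, with an emission interval T and a tolerance tau standing in for CDVt or burst tolerance (the arrival times and parameter values are invented for illustration):

def gcra_conforming(arrival_times, T, tau):
    """Return a conformance flag for each cell arrival time."""
    results = []
    tat = arrival_times[0]                 # theoretical arrival time of the next cell
    for ta in arrival_times:
        if ta < tat - tau:
            results.append(False)          # cell arrived too early: non-conforming
        else:
            results.append(True)
            tat = max(ta, tat) + T         # schedule the next theoretical arrival
    return results

print(gcra_conforming([0.0, 0.1, 0.2, 1.0, 2.0], T=1.0, tau=0.5))   # [True, False, False, True, True]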
https://en.wikipedia.org/wiki/Nutraceutical
A nutraceutical is a pharmaceutical alternative which claims physiological benefits. In the US, nutraceuticals are largely unregulated, as they exist in the same category as dietary supplements and food additives by the FDA, under the authority of the Federal Food, Drug, and Cosmetic Act. The word "nutraceutical" is a portmanteau term, blending the words "nutrition" and "pharmaceutical". Regulation Nutraceuticals are treated differently in different jurisdictions. Canada Under Canadian law, a nutraceutical can either be marketed as a food or as a drug; the terms "nutraceutical" and "functional food" have no legal distinction, referring to "a product isolated or purified from foods that is generally sold in medicinal forms not usually associated with food [and] is demonstrated to have a physiological benefit or provide protection against chronic disease." United States The term "nutraceutical" is not defined by US law. Depending on its ingredients and the claims with which it is marketed, a product is regulated as a drug, dietary supplement, food ingredient, or food. Other sources In the global market, there are significant product quality issues. Nutraceuticals from the international market may claim to use organic or exotic ingredients, yet the lack of regulation may compromise the safety and effectiveness of products. Companies looking to create a wide profit margin may create unregulated products overseas with low-quality or ineffective ingredients. Classification of nutraceuticals Nutraceuticals are products derived from food sources that are purported to provide extra health benefits, in addition to the basic nutritional value found in foods. Depending on the jurisdiction, products may claim to prevent chronic diseases, improve health, delay the aging process, increase life expectancy, or support the structure or function of the body. Dietary supplements In the United States, the Dietary Supplement Health and Education Act (DSHEA) of 1994 defined the t
https://en.wikipedia.org/wiki/Inter%20University%20Center%20for%20Bioscience
Inter University Centre for Bioscience (IUCB) was established at the School of Life Sciences, Kannur University, Kerala, India, by the Higher Education Department, Government of Kerala, to be a global center of excellence for research in biological sciences. Former Vice-President of India Mohammad Hamid Ansari inaugurated the centre on July 10, 2010. IUCB also has a herbal garden on its premises named after E.K. Janaki Ammal, the renowned ethnobotanist from Thalassery who served as Director-General of the Botanical Survey of India. The School of Life Sciences, together with the Inter University Centre for Bioscience, has active research collaborations with research institutes and industries across the country. Research Highlights
https://en.wikipedia.org/wiki/List%20of%20NP-complete%20problems
This is a list of some of the more commonly known problems that are NP-complete when expressed as decision problems. As there are hundreds of such problems known, this list is in no way comprehensive. Many problems of this type can be found in standard references such as Garey and Johnson's Computers and Intractability. Graphs and hypergraphs Graphs occur frequently in everyday applications. Examples include biological or social networks, which contain hundreds, thousands, or even billions of nodes in some cases (e.g. Facebook or LinkedIn). 1-planarity 3-dimensional matching Bandwidth problem Bipartite dimension Capacitated minimum spanning tree Route inspection problem (also called Chinese postman problem) for mixed graphs (having both directed and undirected edges). The problem is solvable in polynomial time if the graph has all undirected or all directed edges. Variants include the rural postman problem. Clique cover problem Clique problem Complete coloring, a.k.a. achromatic number Cycle rank Degree-constrained spanning tree Domatic number Dominating set, a.k.a. domination number NP-complete special cases include the edge dominating set problem, i.e., the dominating set problem in line graphs. NP-complete variants include the connected dominating set problem and the maximum leaf spanning tree problem. Feedback vertex set Feedback arc set Graph coloring Graph homomorphism problem Graph partition into subgraphs of specific types (triangles, isomorphic subgraphs, Hamiltonian subgraphs, forests, perfect matchings) is known to be NP-complete. Partition into cliques is the same problem as coloring the complement of the given graph. A related problem is to find a partition that is optimal in terms of the number of edges between parts. Grundy number of a directed graph. Hamiltonian completion Hamiltonian path problem, directed and undirected. Graph intersection number Longest path problem Maximum bipartite subgraph or (especially with weighted edges) maximum cut. Maximum common subgraph isomorphism problem Maximum independent set Maximum Induced pat
https://en.wikipedia.org/wiki/Protocol%20engineering
Protocol engineering is the application of systematic methods to the development of communication protocols. It uses many of the principles of software engineering, but it is specific to the development of distributed systems. History When the first experimental and commercial computer networks were developed in the 1970s, the concept of protocols was not yet well developed. These were the first distributed systems. In the context of the newly adopted layered protocol architecture (see OSI model), the definition of the protocol of a specific layer should be such that any entity implementing that specification in one computer would be compatible with any other computer containing an entity implementing the same specification, and their interactions should be such that the desired communication service would be obtained. On the other hand, the protocol specification should be abstract enough to allow different choices for the implementation on different computers. It was recognized that a precise specification of the expected service provided by the given layer was important. It is important for the verification of the protocol, which should demonstrate that the communication service is provided if both protocol entities implement the protocol specification correctly. This principle was later followed during the standardization of the OSI protocol stack, in particular for the transport layer. It was also recognized that some kind of formalized protocol specification would be useful for the verification of the protocol and for developing implementations, as well as test cases for checking the conformance of an implementation against the specification. While initially mainly finite-state machine were used as (simplified) models of a protocol entity, in the 1980s three formal specification languages were standardized, two by ISO and one by ITU. The latter, called SDL, was later used in industry and has been merged with UML state machines. Principles The followi
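As an illustration of the finite-state-machine view of a protocol entity mentioned above, here is a minimal Python sketch (the states, events, and actions are invented for a stop-and-wait style exchange, not taken from any standardized protocol):

# A protocol entity modelled as a finite state machine: (state, event) -> (next state, action)
TRANSITIONS = {
    ("IDLE",     "send_request"): ("WAIT_ACK", "transmit DATA"),
    ("WAIT_ACK", "ack_received"): ("IDLE",     "confirm delivery to the service user"),
    ("WAIT_ACK", "timeout"):      ("WAIT_ACK", "retransmit DATA"),
}

def step(state, event):
    """Advance the entity; an unknown (state, event) pair is a protocol error."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"unexpected event {event!r} in state {state!r}")
    next_state, action = TRANSITIONS[(state, event)]
    print(f"{state} --{event}--> {next_state}: {action}")
    return next_state

s = "IDLE"
for ev in ("send_request", "timeout", "ack_received"):
    s = step(s, ev)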
https://en.wikipedia.org/wiki/Relative%20locality
Relative locality is a proposed physical phenomenon in which different observers would disagree on whether two space-time events are coincident. This is in contrast to special relativity and general relativity in which different observers may disagree on whether two distant events occur at the same time but if an observer infers that two events are at the same spacetime position then all observers will agree. When a light signal exchange procedure is used to infer spacetime coordinates of distant events from the travel time of photons, information about the photon's energy is discarded with the assumption that the frequency of light doesn't matter. It is also usually assumed that distant observers construct the same spacetime. This assumption of absolute locality implies that momentum space is flat. However research into quantum gravity has indicated that momentum space might be curved which would imply relative locality. To regain an absolute arena for invariance one would combine spacetime and momentum space into a phase space.
https://en.wikipedia.org/wiki/Power%20cycling
Power cycling is the act of turning a piece of equipment, usually a computer, off and then on again. Reasons for power cycling include having an electronic device reinitialize its set of configuration parameters or recover from an unresponsive state of its mission critical functionality, such as in a crash or hang situation. Power cycling can also be used to reset network activity inside a modem. It can also be among the first steps for troubleshooting an issue. Overview Power cycling can be done manually, usually using a switch on the device to be cycled; automatically, through some type of device, system, or network management monitoring and control; or by remote control; through a communication channel. In the data center environment, remote control power cycling can usually be done through a power distribution unit, over TCP/IP. In the home environment, this can be done through home automation powerline communications or IP protocols. Most Internet Service Providers publish a "how-to" on their website showing their customers the correct procedure to power cycle their devices. Power cycling is a standard diagnostic procedure usually performed first when the computer freezes. However, frequently power cycling a computer can cause thermal stress. Reset has an equal effect on the software but may be less problematic for the hardware as power is not interrupted. Historical uses On all Apollo missions to the moon, the landing radar was required to acquire the surface before a landing could be attempted. But on Apollo 14, the landing radar was unable to lock on. Mission control told the astronauts to cycle the power. They did, the radar locked on just in time, and the landing was completed. During the Rosetta mission to comet 67P/Churyumov–Gerasimenko, the Philae lander did not return the expected telemetry on awakening after arrival at the comet. The problem was diagnosed as "somehow a glitch in the electronics", engineers cycled the power, and the lander aw
https://en.wikipedia.org/wiki/Sonic%20artifact
In sound and music production, sonic artifact, or simply artifact, refers to sonic material that is accidental or unwanted, resulting from the editing or manipulation of a sound. Types Because there are always technical restrictions in the way a sound can be recorded (in the case of acoustic sounds) or designed (in the case of synthesised or processed sounds), sonic errors often occur. These errors are termed artifacts (or sound/sonic artifacts), and may be pleasing or displeasing. A sonic artifact is sometimes a type of digital artifact, and in some cases is the result of data compression (not to be confused with dynamic range compression, which also may create sonic artifacts). Often an artifact is deliberately produced for creative reasons. For example to introduce a change in timbre of the original sound or to create a sense of cultural or stylistic context. A well-known example is the overdriving of an electric guitar or electric bass signal to produce a clipped, distorted guitar tone or fuzz bass. Editing processes that deliberately produce artifacts often involve technical experimentation. A good example of the deliberate creation of sonic artifacts is the addition of grainy pops and clicks to a recent recording in order to make it sound like a vintage vinyl record. Flanging and distortion were originally regarded as sonic artifacts; as time passed they became a valued part of pop music production methods. Flanging is added to electric guitar and keyboard parts. Other magnetic tape artifacts include wow, flutter, saturation, hiss, noise, and print-through. It is valid to consider the genuine surface noise such as pops and clicks that are audible when a vintage vinyl recording is played back or recorded onto another medium as sonic artifacts, although not all sonic artifacts must contain in their meaning or production a sense of "past", more so a sense of "by-product". Other vinyl record artifacts include turntable rumble, ticks, crackles and groove ec
https://en.wikipedia.org/wiki/Very%20High%20Speed%20Integrated%20Circuit%20Program
The Very High Speed Integrated Circuit (VHSIC) Program was a United States Department of Defense (DOD) research program that ran from 1980 to 1990. Its mission was to research and develop very high-speed integrated circuits for the United States Armed Forces. VHSIC was launched in 1980 as a joint tri-service (Army/Navy/Air Force) program. The program led to advances in integrated circuit materials, lithography, packaging, testing, and algorithms, and created numerous computer-aided design (CAD) tools. A well-known part of the program's contribution is VHDL (VHSIC Hardware Description Language), a hardware description language (HDL). The program also redirected the military's interest in GaAs ICs back toward the commercial mainstream of CMOS circuits. More than $1 billion in total was spent for the VHSIC program for silicon integrated circuit technology development. A DARPA project which ran concurrently, the VLSI Project, having begun two years earlier in 1978, contributed BSD Unix, the RISC processor, the MOSIS research design fab, and greatly furthered the Mead and Conway revolution in VLSI design automation. By contrast, the VHSIC program was comparatively less cost-effective for the funds invested over a contemporaneous time frame, though the projects had different final objectives and are not entirely comparable for that reason. The program didn't succeed at producing high-speed ICs as commercial processors by that time were well ahead of what the DOD expected to produce.
https://en.wikipedia.org/wiki/Biological%20system
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism. Organ and tissue systems These specific systems are widely studied in human anatomy and are also present in many other animals. Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm. Digestive system: digestion and processing of food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus. Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels. Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine. Integumentary system: skin, hair, fat, and nails. Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons. Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands. Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. Its functions include immune responses and the development of antibodies. Immune system: protects the organism from
https://en.wikipedia.org/wiki/Test%20compression
Test compression is a technique used to reduce the time and cost of testing integrated circuits. The first ICs were tested with test vectors created by hand. It proved very difficult to get good coverage of potential faults, so Design for testability (DFT) based on scan and automatic test pattern generation (ATPG) were developed to explicitly test each gate and path in a design. These techniques were very successful at creating high-quality vectors for manufacturing test, with excellent test coverage. However, as chips got bigger and more complex the ratio of logic to be tested per pin increased dramatically, and the volume of scan test data started causing a significant increase in test time, and required tester memory. This raised the cost of testing. Test compression was developed to help address this problem. When an ATPG tool generates a test for a fault, or a set of faults, only a small percentage of scan cells need to take specific values. The rest of the scan chain is don't care, and are usually filled with random values. Loading and unloading these vectors is not a very efficient use of tester time. Test compression takes advantage of the small number of significant values to reduce test data and test time. In general, the idea is to modify the design to increase the number of internal scan chains, each of shorter length. These chains are then driven by an on-chip decompressor, usually designed to allow continuous flow decompression where the internal scan chains are loaded as the data is delivered to the decompressor. Many different decompression methods can be used. One common choice is a linear finite state machine, where the compressed stimuli are computed by solving linear equations corresponding to internal scan cells with specified positions in partially specified test patterns. Experimental results show that for industrial circuits with test vectors and responses with very low fill rates, ranging from 3% to 0.2%, the test compression
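The decompression idea can be illustrated with a toy example. The Python sketch below expands a short tester seed through a small LFSR into many internal scan chains; it is only a schematic of the continuous-flow principle, and the linear-equation solving that matches the expanded stream to the ATPG care bits (as well as any resemblance to a particular commercial scheme) is deliberately omitted:

def lfsr_stream(seed_bits, taps, length):
    """Expand a seed into a pseudo-random bit stream with a linear feedback shift register."""
    state = list(seed_bits)
    out = []
    for _ in range(length):
        out.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]
    return out

n_chains, chain_len = 8, 16                    # 8 short internal scan chains, 16 cells each
seed = [1, 0, 1, 1, 0, 0, 1, 0]                # hypothetical compressed tester data
stream = lfsr_stream(seed, taps=[0, 5], length=n_chains * chain_len)
chains = [stream[i::n_chains] for i in range(n_chains)]   # fan out to the chains as data arrives
print(len(seed), "tester bits expanded into", n_chains * chain_len, "scan-cell values")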
https://en.wikipedia.org/wiki/Fecundity%20selection
Fecundity selection, also known as fertility selection, is the fitness advantage resulting from selection on traits that increases the number of offspring (i.e. fecundity). Charles Darwin formulated the theory of fecundity selection between 1871 and 1874 to explain the widespread evolution of female-biased sexual size dimorphism (SSD), where females were larger than males. Along with the theories of natural selection and sexual selection, fecundity selection is a fundamental component of the modern theory of Darwinian selection. Fecundity selection is distinct in that large female size relates to the ability to accommodate more offspring, and a higher capacity for energy storage to be invested in reproduction. Darwin's theory of fecundity selection predicts the following: Fecundity depends on variation in female size, which is associated with fitness. Strong fecundity selection favors large female size, which creates asymmetrical female-biased sexual size dimorphism. Although sexual selection and fecundity selection are distinct, it still may be difficult to interpret whether sexual dimorphism in nature is due to fecundity selection, or to sexual selection. Examples of fecundity selection in nature include self-incompatibility flowering plants, where pollen of some potential mates are not effective in forming seed, as well as bird, lizard, fly, and butterfly and moth species that are spread across an ecological gradient. Moreau-Lack's rule Moreau (1944) suggested that in more seasonal environments or higher latitudes, fecundity depends on high mortality. Lack (1954) suggested differential food availability and management across latitudes play a role in offspring and parental fitness. Lack also highlighted that more opportunities for parents to collect food due to an increase in day-length towards the poles is an advantage. This means that moderately higher altitudes provide more successful conditions to produce more offspring. However, extreme day-lengths (
https://en.wikipedia.org/wiki/Algorithmic%20state%20machine
The algorithmic state machine (ASM) is a method for designing finite state machines (FSMs) originally developed by Thomas E. Osborne at the University of California, Berkeley (UCB) since 1960, introduced to and implemented at Hewlett-Packard in 1968, formalized and expanded since 1967 and written about by Christopher R. Clare since 1970. It is used to represent diagrams of digital integrated circuits. The ASM diagram is like a state diagram but more structured and, thus, easier to understand. An ASM chart is a method of describing the sequential operations of a digital system. ASM method The ASM method is composed of the following steps: 1. Create an algorithm, using pseudocode, to describe the desired operation of the device. 2. Convert the pseudocode into an ASM chart. 3. Design the datapath based on the ASM chart. 4. Create a detailed ASM chart based on the datapath. 5. Design the control logic based on the detailed ASM chart. ASM chart An ASM chart consists of an interconnection of four types of basic elements: state name, state box, decision box, and conditional outputs box. An ASM state, represented as a rectangle, corresponds to one state of a regular state diagram or finite state machine. The Moore type outputs are listed inside the box. State Name: The name of the state is indicated inside the circle and the circle is placed in the top left corner or the name is placed without the circle. State Box: The output of the state is indicated inside the rectangle box Decision Box: A diamond indicates that the stated condition/expression is to be tested and the exit path is to be chosen accordingly. The condition expression contains one or more inputs to the FSM (Finite State Machine). An ASM condition check, indicated by a diamond with one input and two outputs (for true and false), is used to conditionally transfer between two State Boxes, to another Decision Box, or to a Conditional Output Box. The decision box contains the stated condition expressio
https://en.wikipedia.org/wiki/Thermal%20simulations%20for%20integrated%20circuits
Miniaturizing components has always been a primary goal in the semiconductor industry because it cuts production cost and lets companies build smaller computers and other devices. Miniaturization, however, has increased dissipated power per unit area and made it a key limiting factor in integrated circuit performance. Temperature increase becomes relevant for wires with relatively small cross-sections, where it may affect normal semiconductor behavior. Moreover, since the generation of heat is proportional to the frequency of operation for switching circuits, fast computers generate more heat than slow ones, an undesired effect for chip manufacturers. This article summarizes the physical concepts that describe the generation and conduction of heat in an integrated circuit, and presents numerical methods that model heat transfer from a macroscopic point of view. Generation and transfer of heat Fourier's law At the macroscopic level, Fourier's law states a relation between the heat transmitted per unit time per unit area and the gradient of temperature: q = −k∇T, where k is the thermal conductivity [W·m−1·K−1]. Joule heating Electronic systems work based on current and voltage signals. Current is the flow of charged particles through the material, and these particles (electrons or holes) interact with the lattice of the crystal, losing energy, which is released in the form of heat. Joule heating is the predominant mechanism for heat generation in integrated circuits and is an undesired effect in most cases. For an ohmic material, it has the form: Q = ρ|J|², where J is the current density in [A·m−2], ρ is the specific electric resistivity in [Ω·m] and Q is the generated heat per unit volume in [W·m−3]. Heat-transfer equation The governing equation of the physics of the heat transfer problem relates the flux of heat in space, its variation in time and the generation of power by the following expression: ρc_p ∂T/∂t = ∇·(k∇T) + Q, where k is the thermal conductivity, ρ is the density of the medium, c_p is the s
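To make the macroscopic model concrete, the following Python sketch integrates the 1-D form of the equation above with an explicit finite-difference scheme (material values, geometry, and boundary conditions are illustrative placeholders, not data from the article):

# Explicit finite differences for rho*cp*dT/dt = k*d2T/dx2 + Q (1-D slab, constant properties)
k, rho, cp = 150.0, 2330.0, 700.0      # illustrative silicon-like values
Q = 1e9                                # volumetric heat generation, W/m^3 (hypothetical)
L, n = 1e-3, 51                        # 1 mm slab, 51 nodes
dx = L / (n - 1)
alpha = k / (rho * cp)
dt = 0.4 * dx * dx / alpha             # satisfies the explicit stability limit

T = [300.0] * n                        # initial and boundary temperature, K
for _ in range(2000):
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + dt * (alpha * (T[i+1] - 2*T[i] + T[i-1]) / dx**2 + Q / (rho * cp))
    Tn[0], Tn[-1] = 300.0, 300.0       # Dirichlet boundaries (ideal heat sinks)
    T = Tn
print(max(T))                          # peak temperature in the slab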
https://en.wikipedia.org/wiki/Index%20of%20optics%20articles
Optics is the branch of physics which involves the behavior and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behavior of visible, ultraviolet, and infrared light. Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties. See also :Category:Optical components :Category:Optical materials
https://en.wikipedia.org/wiki/Trace%20fossil%20classification
Trace fossils are classified in various ways for different purposes. Traces can be classified taxonomically (by morphology), ethologically (by behavior), and toponomically, that is, according to their relationship to the surrounding sedimentary layers. Except in the rare cases where the original maker of a trace fossil can be identified with confidence, phylogenetic classification of trace fossils is an unreasonable proposition. Taxonomic classification The taxonomic classification of trace fossils parallels the taxonomic classification of organisms under the International Code of Zoological Nomenclature. In trace fossil nomenclature a Latin binomial name is used, just as in animal and plant taxonomy, with a genus and specific epithet. However, the binomial names are not linked to an organism, but rather just a trace fossil. This is due to the rarity of association between a trace fossil and a specific organism or group of organisms. Trace fossils are therefore included in an ichnotaxon separate from Linnaean taxonomy. When referring to trace fossils, the terms ichnogenus and ichnospecies parallel genus and species respectively. The most promising cases of phylogenetic classification are those in which similar trace fossils show details complex enough to deduce the makers, such as bryozoan borings, large trilobite trace fossils such as Cruziana, and vertebrate footprints. However, most trace fossils lack sufficiently complex details to allow such classification. Ethologic classification The Seilacherian System Adolf Seilacher was the first to propose a broadly accepted ethological basis for trace fossil classification. He recognized that most trace fossils are created by animals in one of five main behavioural activities, and named them accordingly: Cubichnia are the traces of organisms left on the surface of a soft sediment. This behaviour may simply be resting as in the case of a starfish, but might also evidence the hiding place of prey, or even the ambus
https://en.wikipedia.org/wiki/Akira%20Yoshizawa
was a Japanese origamist, considered to be the grandmaster of origami. He is credited with raising origami from a craft to a living art. According to his own estimation made in 1989, he created more than 50,000 models, of which only a few hundred designs were presented as diagrams in his 18 books. Yoshizawa acted as an international cultural ambassador for Japan throughout his career. In 1983, Emperor Hirohito awarded him the Order of the Rising Sun, 5th class, one of the highest honors bestowed in Japan. Life Yoshizawa was born on 14 March 1911, in Kaminokawa, Japan, to the family of a dairy farmer. When he was a child, he took pleasure in teaching himself origami. He moved into a factory job in Tokyo when he was 13 years old. His passion for origami was rekindled in his early 20s, when he was promoted from factory worker to technical draftsman. His new job was to teach junior employees geometry. Yoshizawa used the traditional art of origami to understand and communicate geometrical problems. In 1937, he left factory work to pursue origami full-time. During the next 20 years, he lived in total poverty, earning his living by door-to-door selling of (a Japanese preserved condiment that is usually made of seaweed). During World War II, Yoshizawa served in the army medical corps in Hong Kong. He made origami models to cheer up the sick patients, but eventually fell ill himself and was sent back to Japan. His origami work was creative enough to be included in the 1944 book Origami Shuko, by . However, it was his work for the January 1952 issue of the magazine Asahi Graph that launched his career, which included the 12 zodiac signs commissioned by a magazine. In 1954, his first monograph, Atarashii Origami Geijutsu (New Origami Art) was published. In this work, he established the Yoshizawa–Randlett system of notation for origami folds (a system of symbols, arrows and diagrams), which has become the standard for most paperfolders. The publishing of this book helped
https://en.wikipedia.org/wiki/Kleptoprotein
A kleptoprotein is a protein which is not encoded in the genome of the organism which uses it, but instead is obtained through diet from a prey organism. Importantly, a kleptoprotein must maintain its function and be mostly or entirely undigested, drawing a distinction from proteins that are digested for nutrition, which become destroyed and non-functional in the process. This phenomenon was first reported in the bioluminescent fish Parapriacanthus, which has specialized light organs adapted towards counter-illumination, but obtains the luciferase enzyme within these organs from bioluminescent ostracods, including Cypridina noctiluca or Vargula hilgendorfii. See also Kleptoplasty
https://en.wikipedia.org/wiki/Thermodynamic%20limit
In statistical mechanics, the thermodynamic limit, or macroscopic limit, of a system is the limit for a large number of particles (e.g., atoms or molecules) where the volume is taken to grow in proportion with the number of particles. The thermodynamic limit is defined as the limit of a system with a large volume, with the particle density held fixed. In this limit, macroscopic thermodynamics is valid. There, thermal fluctuations in global quantities are negligible, and all thermodynamic quantities, such as pressure and energy, are simply functions of the thermodynamic variables, such as temperature and density. For example, for a large volume of gas, the fluctuations of the total internal energy are negligible and can be ignored, and the average internal energy can be predicted from knowledge of the pressure and temperature of the gas. Note that not all types of thermal fluctuations disappear in the thermodynamic limit; only the fluctuations in system variables cease to be important. There will still be detectable fluctuations (typically at microscopic scales) in some physically observable quantities, such as: microscopic spatial density fluctuations in a gas, which scatter light (Rayleigh scattering); the motion of visible particles (Brownian motion); and electromagnetic field fluctuations (blackbody radiation in free space, Johnson–Nyquist noise in wires). Mathematically, an asymptotic analysis is performed when considering the thermodynamic limit. Origin The thermodynamic limit is essentially a consequence of the central limit theorem of probability theory. The internal energy of a gas of N molecules is the sum of order N contributions, each of which is approximately independent, and so the central limit theorem predicts that the ratio of the size of the fluctuations to the mean is of order 1/N^(1/2). Thus for a macroscopic volume with perhaps the Avogadro number of molecules, fluctuations are negligible, and so thermodynamics works. In general, almost all macroscopic volu
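The 1/N^(1/2) scaling is easy to check numerically; the Python sketch below (an illustrative simulation, not part of the article) sums N independent contributions to an energy-like quantity and reports the relative size of its fluctuations:

import random, statistics

def relative_fluctuation(n_particles, n_samples=1000):
    """Standard deviation divided by mean for a sum of independent random contributions."""
    totals = [sum(random.random() for _ in range(n_particles)) for _ in range(n_samples)]
    return statistics.stdev(totals) / statistics.mean(totals)

for n in (10, 100, 1000):
    print(n, relative_fluctuation(n))   # shrinks roughly like 1/sqrt(n)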
https://en.wikipedia.org/wiki/End-to-end%20delay
End-to-end delay or one-way delay (OWD) refers to the time taken for a packet to be transmitted across a network from source to destination. It is a common term in IP network monitoring, and differs from round-trip time (RTT) in that only path in the one direction from source to destination is measured. Measurement The ping utility measures the RTT, that is, the time to go and come back to a host. Half the RTT is often used as an approximation of OWD but this assumes that the forward and back paths are the same in terms of congestion, number of hops, or quality of service (QoS). This is not always a good assumption. To avoid such problems, the OWD may be measured directly. Direct OWDs may be measured between two points A and B of an IP network through the use of synchronized clocks; A records a timestamp on the packet and sends it to B, which notes the receiving time and calculates the OWD as their difference. The transmitted packets need to be identified at source and destination in order to avoid packet loss or packet reordering. However, this method suffers several limitations, such as requiring intensive cooperation between both parties, and the accuracy of the measured delay is subject to the synchronization precision. The Minimum-Pairs Protocol is an example by which several cooperating entities, A, B, and C, could measure OWDs between one of them and a fourth less cooperative one (e.g., between B and X). Estimate Transmission between two network nodes may be asymmetric, and the forward and reverse delays are not equal. Half the RTT value is the average of the forward and reverse delays and so may be sometimes used as an approximation to the end-to-end delay. The accuracy of such an estimate depends on the nature of delay distribution in both directions. As delays in both directions become more symmetric, the accuracy increases. The probability mass function (PMF) of absolute error, E, between the smaller of the forward and reverse OWDs and their average
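A minimal sketch of the two approaches described above (the timestamps are invented, and the direct method assumes A's and B's clocks are synchronized):

# Direct OWD: B subtracts the timestamp that A wrote on the packet from its own receive time.
def one_way_delay(send_time_at_A, recv_time_at_B):
    return recv_time_at_B - send_time_at_A    # valid only with synchronized clocks

# RTT/2 approximation: treats the forward and reverse delays as equal.
def owd_estimate_from_rtt(rtt):
    return rtt / 2.0

print(one_way_delay(100.000, 100.042))        # 0.042 s measured directly
print(owd_estimate_from_rtt(0.090))           # 0.045 s estimated from a 90 ms RTT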
https://en.wikipedia.org/wiki/List%20of%20MOSFET%20applications
The MOSFET (metal–oxide–semiconductor field-effect transistor) is a type of insulated-gate field-effect transistor (IGFET) that is fabricated by the controlled oxidation of a semiconductor, typically silicon. The voltage of the covered gate determines the electrical conductivity of the device; this ability to change conductivity with the amount of applied voltage can be used for amplifying or switching electronic signals. The MOSFET is the basic building block of most modern electronics, and the most frequently manufactured device in history, with an estimated total of 13sextillion (1.3 × 1022) MOSFETs manufactured between 1960 and 2018. It is the most common semiconductor device in digital and analog circuits, and the most common power device. It was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. MOSFET scaling and miniaturization has been driving the rapid exponential growth of electronic semiconductor technology since the 1960s, and enable high-density integrated circuits (ICs) such as memory chips and microprocessors. MOSFETs in integrated circuits are the primary elements of computer processors, semiconductor memory, image sensors, and most other types of integrated circuits. Discrete MOSFET devices are widely used in applications such as switch mode power supplies, variable-frequency drives, and other power electronics applications where each device may be switching thousands of watts. Radio-frequency amplifiers up to the UHF spectrum use MOSFET transistors as analog signal and power amplifiers. Radio systems also use MOSFETs as oscillators, or mixers to convert frequencies. MOSFET devices are also applied in audio-frequency power amplifiers for public address systems, sound reinforcement, and home and automobile sound systems. Integrated circuits The MOSFET is the most widely used type of transistor and the most critical device component in integrated circuit (IC) chips. Planar process, develop
https://en.wikipedia.org/wiki/Download
In computer networks, download means to receive data from a remote system, typically a server such as a web server, an FTP server, an email server, or other similar systems. This contrasts with uploading, where data is sent to a remote server. A download is a file offered for downloading or that has been downloaded, or the process of receiving such a file. Definition Downloading generally transfers entire files for local storage and later use, as contrasted with streaming, where the data is used nearly immediately, while the transmission is still in progress, and which may not be stored long-term. Websites that offer streaming media or media displayed in-browser, such as YouTube, increasingly place restrictions on the ability of users to save these materials to their computers after they have been received. Downloading in computer networks involves retrieving data from a remote system, like a web server, FTP server, or email server, unlike uploading where data is sent to a remote server. A download can refer to a file made available for retrieval or one that has been received, encompassing the entire process of obtaining such a file. Downloading is not the same as data transfer; moving or copying data between two storage devices would be data transfer, but receiving data from the Internet or BBS is downloading. Copyright Downloading media files involves the use of linking and framing Internet material, and relates to copyright law. Streaming and downloading can involve making copies of works that infringe on copyrights or other rights, and organizations running such websites may become vicariously liable for copyright infringement by causing others to do so. Open hosting servers allows people to upload files to a central server, which incurs bandwidth and hard disk space costs due to files generated with each download. Anonymous and open hosting servers make it difficult to hold hosts accountable. Taking legal action against the technologies behind unauthoriz
https://en.wikipedia.org/wiki/Kaczmarz%20method
The Kaczmarz method or Kaczmarz's algorithm is an iterative algorithm for solving linear equation systems Ax = b. It was first discovered by the Polish mathematician Stefan Kaczmarz, and was rediscovered in the field of image reconstruction from projections by Richard Gordon, Robert Bender, and Gabor Herman in 1970, where it is called the Algebraic Reconstruction Technique (ART). ART includes the positivity constraint, making it nonlinear. The Kaczmarz method is applicable to any linear system of equations, but its computational advantage relative to other methods depends on the system being sparse. It has been demonstrated to be superior, in some biomedical imaging applications, to other methods such as the filtered backprojection method. It has many applications ranging from computed tomography (CT) to signal processing. It can also be obtained by applying the method of successive projections onto convex sets (POCS) to the hyperplanes described by the linear system. Algorithm 1: Kaczmarz algorithm Let Ax = b be a system of linear equations, let m be the number of rows of A, a_i be the i-th row of the complex-valued matrix A, and let x^0 be an arbitrary complex-valued initial approximation to the solution of Ax = b. For k = 0, 1, 2, ... compute: x^(k+1) = x^k + ((b_i − ⟨a_i, x^k⟩) / ‖a_i‖²) · conj(a_i), where i = (k mod m) + 1 and conj(a_i) denotes the complex conjugate of a_i. If the system is consistent, x^k converges to the minimum-norm solution, provided that the iterations start with the zero vector. A more general algorithm can be defined using a relaxation parameter λ^k: x^(k+1) = x^k + λ^k ((b_i − ⟨a_i, x^k⟩) / ‖a_i‖²) · conj(a_i). There are versions of the method that converge to a regularized weighted least squares solution when applied to a system of inconsistent equations and, at least as far as initial behavior is concerned, at a lesser cost than other iterative methods, such as the conjugate gradient method. Algorithm 2: Randomized Kaczmarz algorithm In 2009, a randomized version of the Kaczmarz method for overdetermined linear systems was introduced by Thomas Strohmer and Roman Vershynin in which the i-th equation is selected randomly with prob
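A compact NumPy sketch of both variants may help make the update rule concrete (the test system below is made up; the randomized variant samples rows with probability proportional to their squared norm, as in Strohmer and Vershynin):

import numpy as np

def kaczmarz(A, b, iters=500, randomized=False, seed=0):
    """Cyclic or randomized Kaczmarz iteration for A x = b, starting from the zero vector."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n, dtype=A.dtype)
    row_norms_sq = np.einsum("ij,ij->i", A, A.conj()).real
    probs = row_norms_sq / row_norms_sq.sum()
    for k in range(iters):
        i = rng.choice(m, p=probs) if randomized else k % m
        residual = b[i] - A[i] @ x
        x = x + (residual / row_norms_sq[i]) * A[i].conj()   # project onto the i-th hyperplane
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
b = A @ np.array([1.0, 2.0])          # consistent system with solution (1, 2)
print(kaczmarz(A, b, randomized=True))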
https://en.wikipedia.org/wiki/List%20of%20dynamical%20systems%20and%20differential%20equations%20topics
This is a list of dynamical system and differential equation topics, by Wikipedia page. See also list of partial differential equation topics, list of equations. Dynamical systems, in general Deterministic system (mathematics) Linear system Partial differential equation Dynamical systems and chaos theory Chaos theory Chaos argument Butterfly effect 0-1 test for chaos Bifurcation diagram Feigenbaum constant Sharkovskii's theorem Attractor Strange nonchaotic attractor Stability theory Mechanical equilibrium Astable Monostable Bistability Metastability Feedback Negative feedback Positive feedback Homeostasis Damping ratio Dissipative system Spontaneous symmetry breaking Turbulence Perturbation theory Control theory Non-linear control Adaptive control Hierarchical control Intelligent control Optimal control Dynamic programming Robust control Stochastic control System dynamics, system analysis Takens' theorem Exponential dichotomy Liénard's theorem Krylov–Bogolyubov theorem Krylov-Bogoliubov averaging method Abstract dynamical systems Measure-preserving dynamical system Ergodic theory Mixing (mathematics) Almost periodic function Symbolic dynamics Time scale calculus Arithmetic dynamics Sequential dynamical system Graph dynamical system Topological dynamical system Dynamical systems, examples List of chaotic maps Logistic map Lorenz attractor Lorenz-96 Iterated function system Tetration Ackermann function Horseshoe map Hénon map Arnold's cat map Population dynamics Complex dynamics Fatou set Julia set Mandelbrot set Difference equations Recurrence relation Matrix difference equation Rational difference equation Ordinary differential equations: general Examples of differential equations Autonomous system (mathematics) Picard–Lindelöf theorem Peano existence theorem Carathéodory existence theorem Numerical ordinary differential equations Bendixson–Dulac theorem Gradient conjecture Recurrence plot Limit cycle Initial value problem Clairaut's equation Singular sol
https://en.wikipedia.org/wiki/Systems%20development%20life%20cycle
In systems engineering, information systems and software engineering, the systems development life cycle (SDLC), also referred to as the application development life cycle, is a process for planning, creating, testing, and deploying an information system. The SDLC concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both. There are usually six stages in this cycle: requirement analysis, design, development and testing, implementation, documentation, and evaluation. Overview A systems development life cycle is composed of distinct work phases that are used by systems engineers and systems developers to deliver information systems. Like anything that is manufactured on an assembly line, an SDLC aims to produce high-quality systems that meet or exceed expectations, based on requirements, by delivering systems within scheduled time frames and cost estimates. Computer systems are complex and often link components with varying origins. Various SDLC methodologies have been created, such as waterfall, spiral, agile, rapid prototyping, incremental, and synchronize and stabilize. SDLC methodologies fit within a flexibility spectrum ranging from agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on lightweight processes that allow for rapid changes. Iterative methodologies, such as Rational Unified Process and dynamic systems development method, focus on stabilizing project scope and iteratively expanding or improving products. Sequential or big-design-up-front (BDUF) models, such as waterfall, focus on complete and correct planning to guide larger projects and limit risks to successful and predictable results. Anamorphic development is guided by project scope and adaptive iterations. In project management a project can include both a project life cycle (PLC) and an SDLC, during which somewhat different activities occur. According to Taylor (2004
https://en.wikipedia.org/wiki/Radiation%20hardening
Radiation hardening is the process of making electronic components and circuits resistant to damage or malfunction caused by high levels of ionizing radiation (particle radiation and high-energy electromagnetic radiation), especially for environments in outer space (especially beyond the low Earth orbit), around nuclear reactors and particle accelerators, or during nuclear accidents or nuclear warfare. Most semiconductor electronic components are susceptible to radiation damage, and radiation-hardened (rad-hard) components are based on their non-hardened equivalents, with some design and manufacturing variations that reduce the susceptibility to radiation damage. Due to the extensive development and testing required to produce a radiation-tolerant design of a microelectronic chip, the technology of radiation-hardened chips tends to lag behind the most recent developments. Radiation-hardened products are typically tested to one or more resultant-effects tests, including total ionizing dose (TID), enhanced low dose rate effects (ELDRS), neutron and proton displacement damage, and single event effects (SEEs). Problems caused by radiation Environments with high levels of ionizing radiation create special design challenges. A single charged particle can knock thousands of electrons loose, causing electronic noise and signal spikes. In the case of digital circuits, this can cause results which are inaccurate or unintelligible. This is a particularly serious problem in the design of satellites, spacecraft, future quantum computers, military aircraft, nuclear power stations, and nuclear weapons. In order to ensure the proper operation of such systems, manufacturers of integrated circuits and sensors intended for the military or aerospace markets employ various methods of radiation hardening. The resulting systems are said to be rad(iation)-hardened, rad-hard, or (within context) hardened. Major radiation damage sources Typical sources of exposure of electronics to ioni
https://en.wikipedia.org/wiki/Regressive%20discrete%20Fourier%20series
In applied mathematics, the regressive discrete Fourier series (RDFS) is a generalization of the discrete Fourier transform where the Fourier series coefficients are computed in a least squares sense and the period is arbitrary, i.e., not necessarily equal to the length of the data. It was first proposed by Arruda (1992a, 1992b). It can be used to smooth data in one or more dimensions and to compute derivatives from the smoothed curve, surface, or hypersurface. Technique One-dimensional regressive discrete Fourier series The one-dimensional RDFS proposed by Arruda (1992a) can be formulated in a very straightforward way. Given a sampled data vector (signal) , one can write the algebraic expression: Typically , but this is not necessary. The above equation can be written in matrix form as The least squares solution of the above linear system of equations can be written as: where is the conjugate transpose of , and the smoothed signal is obtained from: The first derivative of the smoothed signal can be obtained from: Two-dimensional regressive discrete Fourier series (RDFS) The two-dimensional, or bidimensional RDFS proposed by Arruda (1992b) can also be formulated in a straightforward way. Here the equally spaced data case will be treated for the sake of simplicity. The general non-equally-spaced and arbitrary grid cases are given in the reference (Arruda, 1992b). Given a sampled data matrix (bi dimensional signal) one can write the algebraic expression: The above equation can be written in matrix form for a rectangular grid. For the equally spaced sampling case : we have: The least squares solution may be shown to be: and the smoothed bidimensional surface is given by: where is the conjugate, and is the transpose of . Differentiation with respect to can be easily implemented analogously to the one-dimensional case (Arruda, 1992b). Current applications Spatially dense data condensation applications: Arruda, J.R.F. [1993] applied the RDFS to co
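Since the exact formulation is given in the Arruda references, only the general idea is sketched here: a least-squares fit of a truncated Fourier series whose period need not equal the data length, from which a smoothed signal and its derivative follow. The NumPy code below is an illustrative reconstruction under those assumptions, not a reproduction of the published algorithm:

import numpy as np

def ls_fourier_fit(x, t, period, n_harmonics):
    """Least-squares Fourier-series coefficients for samples x(t) with an arbitrary period."""
    k = np.arange(-n_harmonics, n_harmonics + 1)
    W = np.exp(2j * np.pi * np.outer(t, k) / period)           # design matrix
    coeffs, *_ = np.linalg.lstsq(W, x, rcond=None)             # least-squares solution
    smoothed = (W @ coeffs).real
    derivative = (W @ (2j * np.pi * k / period * coeffs)).real # analytic differentiation
    return coeffs, smoothed, derivative

t = np.linspace(0.0, 1.0, 64, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
coeffs, smoothed, dx = ls_fourier_fit(x, t, period=1.25, n_harmonics=8)
print(np.abs(smoothed - x).max())      # small residual: the smoothed curve follows the data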
https://en.wikipedia.org/wiki/Rensch%27s%20rule
Rensch's rule is a biological rule on allometrics, concerning the relationship between the extent of sexual size dimorphism and which sex is larger. Across species within a lineage, size dimorphism increases with increasing body size when the male is the larger sex, and decreases with increasing average body size when the female is the larger sex. The rule was proposed by the evolutionary biologist Bernhard Rensch in 1950. After controlling for confounding factors such as evolutionary history, an increase in average body size makes the difference in body size larger if the species has larger males, and smaller if it has larger females. Some studies propose that this is due to sexual bimaturism, which causes male traits to diverge faster and develop for a longer period of time. The correlation between sexual size dimorphism and body size is hypothesized to be a result of an increase in male-male competition in larger species, a result of limited environmental resources, fuelling aggression between males over access to breeding territories and mating partners. Phylogenetic lineages that appear to follow this rule include primates, pinnipeds, and artiodactyls. This rule has rarely been tested on parasites. A 2019 study showed that ectoparasitic philopterid and menoponid lice comply with it, while ricinid lice exhibit a reversed pattern.
https://en.wikipedia.org/wiki/Hop%20%28networking%29
In wired computer networking, including the Internet, a hop occurs when a packet is passed from one network segment to the next. Data packets pass through routers as they travel between source and destination. The hop count refers to the number of network devices through which data passes from source to destination (depending on the routing protocol, this may include the source/destination, that is, the first hop is counted as hop 0 or hop 1). Since store-and-forward and other latencies are incurred at each hop, a large number of hops between source and destination implies lower real-time performance.
Hop count
In wired networks, the hop count refers to the number of networks or network devices through which data passes between source and destination (depending on the routing protocol, this may include the source/destination, that is, the first hop is counted as hop 0 or hop 1). Thus, hop count is a rough measure of distance between two hosts. For a routing protocol using 1-origin hop counts (such as RIP), a hop count of n means that n networks separate the source host from the destination host. Other protocols such as DHCP use the term "hop" to refer to the number of times a message has been forwarded. On a layer 3 network such as Internet Protocol (IP), each router along the data path constitutes a hop. By itself, this metric is, however, not useful for determining the optimum network path, as it does not take into consideration the speed, load, reliability, or latency of any particular hop, but merely the total count. Nevertheless, some routing protocols, such as the Routing Information Protocol (RIP), use hop count as their sole metric. Each time a router receives a packet, it modifies the packet, decrementing the time to live (TTL). The router discards any packets received with a zero TTL value. This prevents packets from endlessly bouncing around the network in the event of routing errors. Routers are capable of managing hop counts, but other types of network de
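A minimal sketch of the TTL behaviour described above (a hypothetical forwarding loop, not the code of any particular router or protocol stack):

```python
def survives_path(initial_ttl: int, hop_count: int) -> bool:
    """Simulate per-hop TTL handling: each router decrements the TTL and the
    packet is discarded once the TTL is exhausted. Returns True if the packet
    can traverse `hop_count` routers."""
    ttl = initial_ttl
    for _ in range(hop_count):
        ttl -= 1                 # each router decrements the time to live
        if ttl <= 0:
            return False         # discarded: too many hops (or a routing loop)
    return True

assert survives_path(initial_ttl=64, hop_count=12) is True
assert survives_path(initial_ttl=3, hop_count=5) is False
```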
https://en.wikipedia.org/wiki/Molecular-weight%20size%20marker
A molecular-weight size marker, also referred to as a protein ladder, DNA ladder, or RNA ladder, is a set of standards that are used to identify the approximate size of a molecule run on a gel during electrophoresis, using the principle that migration rate through a gel matrix decreases as molecular weight increases. Therefore, when used in gel electrophoresis, markers effectively provide a logarithmic scale by which to estimate the size of the other fragments (provided the fragment sizes of the marker are known). Protein, DNA, and RNA markers with pre-determined fragment sizes and concentrations are commercially available. These can be run in either agarose or polyacrylamide gels. The markers are loaded in lanes adjacent to sample lanes before the commencement of the run.
DNA markers
Development
Although the concept of molecular-weight markers has been retained, techniques of development have varied throughout the years. New inventions of molecular-weight markers are distributed in kits specific to the marker's type. An early problem in the development of markers was achieving high resolution throughout the entire length of the marker. Depending on the running conditions of gel electrophoresis, fragments may have been compressed, disrupting clarity. To address this issue, a kit for Southern blot analysis was developed in 1990, providing the first marker to combine target DNA and probe DNA. This technique took advantage of logarithmic spacing, and could be used to identify target bands ranging over a length of 20,000 nucleotides.
Design
There are two common methods by which to construct a DNA molecular-weight size marker. One such method employs the technique of partial ligation. DNA ligation is the process by which linear DNA pieces are connected to each other via covalent bonds; more specifically, these bonds are phosphodiester bonds. Here, a 100bp duplex DNA piece is partially ligated. The consequence of this is that dimers of 200bp, trimers of 300bp,
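The "logarithmic scale" mentioned above is usually exploited as a calibration curve: band migration distance is roughly linear in the logarithm of fragment size. The sketch below uses entirely hypothetical ladder sizes and migration distances to show how an unknown band would be estimated.

```python
import numpy as np

# Hypothetical ladder: known fragment sizes (bp) and measured migration
# distances (mm) on the gel. The numbers are illustrative, not from a real kit.
ladder_bp = np.array([1000, 800, 600, 400, 200, 100])
ladder_mm = np.array([12.0, 15.5, 20.0, 26.5, 37.0, 48.0])

# Fit the (approximately linear) relationship log10(size) vs. distance...
coeffs = np.polyfit(ladder_mm, np.log10(ladder_bp), 1)

# ...then estimate the size of an unknown band from its migration distance.
unknown_mm = 30.0
estimated_bp = 10 ** np.polyval(coeffs, unknown_mm)
print(f"estimated fragment size: about {estimated_bp:.0f} bp")
```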
https://en.wikipedia.org/wiki/Biogeography%20of%20Deep-Water%20Chemosynthetic%20Ecosystems
Biogeography of Deep-Water Chemosynthetic Ecosystems (ChEss) is a field project of the Census of Marine Life programme (CoML). The main aim of ChEss is to determine the biogeography of deep-water chemosynthetic ecosystems at a global scale and to understand the processes driving these ecosystems. ChEss addresses the main questions of CoML on diversity, abundance and distribution of marine species, focusing on deep-water reducing environments such as hydrothermal vents, cold seeps, whale falls, sunken wood and areas of low oxygen that intersect with continental margins and seamounts.
Background
Deep-sea hydrothermal vents and their associated fauna were first discovered along the Galapagos Rift in the eastern Pacific in 1977. Vents are now known to occur along all active mid-ocean ridges and back-arc spreading centres, from fast- to ultra-slow-spreading ridges. The interest in chemosynthetic environments was strengthened by the discovery of chemosynthesis-based fauna at cold seeps along the base of the Florida Escarpment in 1983. Cold seeps occur along active and passive continental margins. More recently, the study of chemosynthetic fauna has extended to the communities that develop in other reducing habitats such as whale falls, sunken wood and areas of oxygen minima where they intersect with the margin or seamounts. Since the first discovery of hydrothermal vents, more than 600 species have been described from vents and seeps, equivalent to roughly one new description every two weeks. As biologists, geochemists, and physicists combine research efforts in these systems, new species will certainly be discovered. Moreover, because of the extreme conditions of the vent and seep habitats, certain species may have specific physiological adaptations of interest to the biochemical and medical industries. These globally distributed, ephemeral and insular habitats that support endemic faunas offer natural laboratories for studies on dispersal, isolation and evolutio
https://en.wikipedia.org/wiki/In-target%20probe
An in-target probe (ITP) is a device used in computer hardware and microprocessor design to control a target microprocessor or similar ASIC at the register level. It generally gives the computer engineer full control of the target device, with access to individual processor registers, the program counter, and instructions within the device. It allows the processor to be single-stepped and breakpoints to be set. Unlike an in-circuit emulator (ICE), an in-target probe uses the target device to execute, rather than substituting for the target device.
See also
Hardware-assisted virtualization
In-circuit emulator
Joint Test Action Group
External links
ITP700 Debug Port Design Guide - Intel
Embedded systems
Debugging
https://en.wikipedia.org/wiki/Negative%20frequency
In mathematics, signed frequency (negative and positive frequency) expands upon the concept of frequency, from just an absolute value representing how often some repeating event occurs, to also have a positive or negative sign representing one of two opposing orientations for occurrences of those events. The following examples help illustrate the concept:
For a rotating object, the absolute value of its frequency of rotation indicates how many rotations the object completes per unit of time, while the sign could indicate whether it is rotating clockwise or counterclockwise. Mathematically speaking, the vector (cos t, sin t) has a positive frequency of +1 radian per unit of time and rotates counterclockwise around the unit circle, while the vector (cos(−t), sin(−t)) has a negative frequency of −1 radian per unit of time, which rotates clockwise instead.
For a harmonic oscillator such as a pendulum, the absolute value of its frequency indicates how many times it swings back and forth per unit of time, while the sign could indicate in which of the two opposite directions it started moving.
For a periodic function represented in a Cartesian coordinate system, the absolute value of its frequency indicates how often in its domain it repeats its values, while changing the sign of its frequency could represent a reflection around its y-axis.
Sinusoids
Let ω ≥ 0 be a nonnegative angular frequency with units of radians per unit of time and let θ be a phase in radians. A function f(t) = −ωt + θ has slope −ω. When used as the argument of a sinusoid, −ω can represent a negative frequency. Because cosine is an even function, the negative frequency sinusoid cos(−ωt + θ) is indistinguishable from the positive frequency sinusoid cos(ωt − θ). Similarly, because sine is an odd function, the negative frequency sinusoid sin(−ωt + θ) is indistinguishable from the positive frequency sinusoid sin(ωt − θ + π) or −sin(ωt − θ). Thus any sinusoid can be represented in terms of positive frequencies only. The sign of the underlying phase slope is ambiguous. Because cos(ωt + θ) leads sin(ωt + θ) by π/2 radians (or 1/4 cycle) for posi
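A quick numerical check of the sine/cosine identities used above (a sketch; the symbols ω and θ follow the reconstruction in this excerpt):

```python
import numpy as np

t = np.linspace(0.0, 2.0, 500)
w, theta = 2 * np.pi * 1.5, 0.7        # angular frequency and phase

# Cosine is even: the negative-frequency cosine equals a positive-frequency one.
assert np.allclose(np.cos(-w * t + theta), np.cos(w * t - theta))

# Sine is odd: the negative-frequency sine equals a positive-frequency sine
# shifted by pi (equivalently, a sign flip).
assert np.allclose(np.sin(-w * t + theta), np.sin(w * t - theta + np.pi))
assert np.allclose(np.sin(-w * t + theta), -np.sin(w * t - theta))

# A complex exponential is not ambiguous: exp(-iwt) rotates clockwise and
# exp(+iwt) counterclockwise, so its frequency sign is observable.
assert not np.allclose(np.exp(-1j * w * t), np.exp(1j * w * t))
```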
https://en.wikipedia.org/wiki/Backhouse%27s%20constant
Backhouse's constant is a mathematical constant named after Nigel Backhouse. Its value is approximately 1.456 074 948. It is defined by using the power series such that the coefficients of successive terms are the prime numbers,
P(x) = 1 + 2x + 3x^2 + 5x^3 + 7x^4 + … = 1 + Σ_{k≥1} p_k x^k,
and its multiplicative inverse as a formal power series,
Q(x) = 1/P(x) = Σ_{k≥0} q_k x^k.
Then:
B = lim_{k→∞} |q_{k+1} / q_k| = 1.456 074 948…
This limit was conjectured to exist by Backhouse, and later proven by Philippe Flajolet.
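A short numerical sketch of this definition: build the coefficients of P, invert the series via the convolution recurrence P·Q = 1, and take the ratio of successive coefficients of Q (the number of terms, 400, is an arbitrary cutoff; the approximation improves as it grows).

```python
def first_primes(count: int) -> list[int]:
    """Return the first `count` primes by simple trial division."""
    primes: list[int] = []
    candidate = 2
    while len(primes) < count:
        if all(candidate % p for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes

def backhouse(n_terms: int = 400) -> float:
    """Approximate Backhouse's constant as |q_{k+1} / q_k| for large k."""
    p = [1] + first_primes(n_terms - 1)   # coefficients of P: 1, 2, 3, 5, 7, ...
    q = [1.0]                             # q_0 = 1 because p_0 = 1
    for n in range(1, n_terms):
        # From P(x) * Q(x) = 1: sum_{j=0..n} p_j * q_{n-j} = 0 for n >= 1.
        q.append(-sum(p[j] * q[n - j] for j in range(1, n + 1)))
    return abs(q[-1] / q[-2])

print(backhouse())   # approaches 1.456074948...
```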