https://johannafaith.com/ontario/what-is-an-example-of-decomposition.php | # What is an example of decomposition?

## What is an example of a decomposition reaction?

A decomposition reaction is a reaction in which a single reactant breaks down into two or more products; decomposition reactions require energy input. For example, in the thermal decomposition of sodium hydrogen carbonate (also known as sodium bicarbonate), heating drives the compound apart, and reversible reactions include this kind of thermal decomposition. Reversing a synthesis (composition) reaction gives a decomposition reaction; together with composition and combustion, decomposition is one of the standard ways of classifying chemical reactions by outcome, alongside categories such as polymerization. Another example is the decomposition of hydrogen peroxide to water and oxygen: for a decomposition reaction to occur, the reactant must break down into two or more products.

## Decomposition in mathematics

In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. If we can find an LU-decomposition of a matrix $A$, then to solve $Ax = b$ it is enough to solve two simpler triangular systems. Partial fraction decomposition is a way of "breaking apart" a rational function into a sum of simpler fractions. In linear programming, Dantzig-Wolfe decomposition is a classical technique for splitting a large structured problem into smaller pieces.
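To illustrate how an LU-decomposition is used to solve $Ax = b$, here is a minimal pure-Python sketch (our example, not from the pages cited above) of Doolittle factorization without pivoting, which assumes a square matrix with nonzero pivots:

```python
def lu_decompose(A):
    """Doolittle LU factorization without pivoting: A = L * U.

    Assumes A is square and needs no row exchanges (nonzero pivots).
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):  # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        for j in range(i + 1, n):  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Solve Ax = b given A = LU: forward-substitute Ly = b, then back-substitute Ux = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

# Solve the system 2x + y = 3, x + 3y = 4 (solution x = 1, y = 1).
L, U = lu_decompose([[2.0, 1.0], [1.0, 3.0]])
print(lu_solve(L, U, [3.0, 4.0]))  # [1.0, 1.0]
```

The point of the two-step solve is that once $A = LU$ is computed, each new right-hand side $b$ costs only two triangular substitutions.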
## Decomposition in ecology

Mushrooms are a type of fungus and play a role in decomposition. Decomposers play an important role in ecosystems: all of these organisms break down or eat dead or decomposing organisms to help carry out the process of decomposition.

## Further chemistry examples

An example is the decomposition of water into hydrogen and oxygen by electrolysis. In studies of decomposition rates, the cleavage of the O-O bonds in the transition state can be compensated in part by O⋯X bond formation.

## Functional decomposition and project management

Functional decomposition is a method of analysis that dissects a complex process to show its individual elements. Examples of why decomposition might be used include the decomposition of organization functions into sub-functions; in project management, decomposition breaks a project down into smaller, more manageable pieces of work.
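The partial fraction decomposition mentioned above can be made concrete with the cover-up method for distinct linear factors. The function 1/((x-2)(x+1)) below is our illustrative example, not taken from the source page:

```python
def partial_fractions(roots):
    """Decompose 1 / ((x - r1)(x - r2)...) into sum_i c_i / (x - r_i)
    via the cover-up method, assuming the roots are distinct."""
    coeffs = {}
    for r in roots:
        denom = 1.0
        for s in roots:
            if s != r:
                denom *= (r - s)
        coeffs[r] = 1.0 / denom
    return coeffs

# 1 / ((x - 2)(x + 1)) = (1/3)/(x - 2) - (1/3)/(x + 1)
c = partial_fractions([2.0, -1.0])
print(c)

# sanity check: both sides agree at a sample point
x = 5.0
lhs = 1.0 / ((x - 2.0) * (x + 1.0))
rhs = sum(ci / (x - r) for r, ci in c.items())
assert abs(lhs - rhs) < 1e-12
```

Each coefficient is obtained by "covering up" its factor and evaluating the rest of the function at that root, which is why distinct roots are required.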
https://easychair.org/smart-program/MFCS2022/2022-08-22.html | MFCS 2022: 47TH INTERNATIONAL SYMPOSIUM ON MATHEMATICAL FOUNDATIONS OF COMPUTER SCIENCE
PROGRAM FOR MONDAY, AUGUST 22ND
10:00-10:40 Coffee Break
10:40-12:00 Session 4A
10:40
Complexity of the Cluster Vertex Deletion problem on H-free graphs
ABSTRACT. The well-known Cluster Vertex Deletion problem (Cluster-VD) asks, for a given graph $G$ and an integer $k$, whether it is possible to delete at most $k$ vertices of $G$ such that the resulting graph is a cluster graph (a disjoint union of cliques).
We give a complete characterization of graphs H for which Cluster-VD on H-free graphs is polynomially solvable and for which it is NP-complete.
We also obtain partial results towards a complexity dichotomy for two forbidden induced subgraphs.
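As an aside on the definition: the target class of Cluster-VD is easy to recognize. In this illustrative sketch (ours, not the paper's), a graph is a disjoint union of cliques exactly when every connected component with c vertices contains c(c-1)/2 edges:

```python
def is_cluster_graph(n, edges):
    """Return True iff the graph on vertices 0..n-1 is a disjoint
    union of cliques: each connected component C must contain
    exactly |C| * (|C| - 1) / 2 edges."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen = [False] * n
    for s in range(n):
        if seen[s]:
            continue
        # collect the connected component of s with a DFS
        stack, comp = [s], []
        seen[s] = True
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in adj[u]:
                if not seen[w]:
                    seen[w] = True
                    stack.append(w)
        m = sum(len(adj[u]) for u in comp) // 2
        if m != len(comp) * (len(comp) - 1) // 2:
            return False
    return True

print(is_cluster_graph(4, [(0, 1), (2, 3)]))  # True: K2 + K2
print(is_cluster_graph(3, [(0, 1), (1, 2)]))  # False: a path P3 is not a clique
```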
11:05
Exact Matching in Graphs of Bounded Independence Number
ABSTRACT. In the \emph{Exact Matching Problem} (EM), we are given a graph equipped with a fixed coloring of its edges with two colors (red and blue), as well as a positive integer $k$. The task is then to decide whether the given graph contains a perfect matching exactly $k$ of whose edges have color red. EM generalizes several important algorithmic problems such as \emph{perfect matching} and restricted minimum weight spanning tree problems.
When introducing the problem in 1982, Papadimitriou and Yannakakis conjectured EM to be \textbf{NP}-complete. Later however, Mulmuley et al.~presented a randomized polynomial time algorithm for EM, which puts EM in \textbf{RP}. Given that to decide whether or not \textbf{RP}$=$\textbf{P} represents a big open challenge in complexity theory, this makes it unlikely for EM to be \textbf{NP}-complete, and in fact indicates the possibility of a \emph{deterministic} polynomial time algorithm. EM remains one of the few natural combinatorial problems in \textbf{RP} which are not known to be contained in \textbf{P}, making it an interesting instance for testing the hypothesis \textbf{RP}$=$\textbf{P}.
Despite EM being quite well-known, a deterministic polynomial-time algorithm has remained elusive during the last 40 years, and progress has been lacking even for very restrictive classes of input graphs. In this paper we finally push the frontier of positive results forward by proving that EM can be solved in deterministic polynomial time for input graphs of bounded independence number, and for bipartite input graphs of bounded bipartite independence number. This generalizes previous positive results for complete (bipartite) graphs which were the only known results for EM on dense graphs.
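For intuition, the EM problem statement is easy to check by brute force on tiny instances. The sketch below (ours, purely illustrative and unrelated to the paper's algorithms) enumerates edge subsets of a small graph:

```python
from itertools import combinations

def has_exact_matching(n, red, blue, k):
    """Brute force for the Exact Matching problem on tiny graphs:
    is there a perfect matching with exactly k red edges?
    red/blue are edge lists on vertices 0..n-1."""
    edges = [(u, v, "r") for u, v in red] + [(u, v, "b") for u, v in blue]
    for subset in combinations(edges, n // 2):
        used = {x for u, v, _ in subset for x in (u, v)}
        # n/2 vertex-disjoint edges covering all n vertices = perfect matching
        if len(used) == n and sum(c == "r" for _, _, c in subset) == k:
            return True
    return False

# 4-cycle 0-1-2-3 with red edges {01, 23} and blue edges {12, 30}:
# its two perfect matchings have 2 or 0 red edges, never exactly 1.
print(has_exact_matching(4, [(0, 1), (2, 3)], [(1, 2), (3, 0)], 2))  # True
print(has_exact_matching(4, [(0, 1), (2, 3)], [(1, 2), (3, 0)], 1))  # False
```

The 4-cycle example shows why EM is subtler than plain perfect matching: feasibility is not monotone in k.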
11:30
Metric Dimension Parameterized by Feedback Vertex Set and Other Structural Parameters
ABSTRACT. For a graph $G$, a subset $S \subseteq V(G)$ is called a \emph{resolving set} if for any two vertices $u,v \in V(G)$, there exists a vertex $w \in S$ such that $d(w,u) \neq d(w,v)$. The {\sc Metric Dimension} problem takes as input a graph $G$ and a positive integer $k$, and asks whether there exists a resolving set of size at most $k$. This problem was introduced in the 1970s and is known to be \NP-hard~[GT~61 in Garey and Johnson's book]. In the realm of parameterized complexity, Hartung and Nichterlein~[CCC~2013] proved that the problem is \W[2]-hard when parameterized by the natural parameter $k$. They also observed that it is \FPT\ when parameterized by the vertex cover number and asked about its complexity under \emph{smaller} parameters, in particular the feedback vertex set number. We answer this question by proving that {\sc Metric Dimension} is \W[1]-hard when parameterized by the feedback vertex set number. This also improves the result of Bonnet and Purohit~[IPEC 2019] which states that the problem is \W[1]-hard parameterized by the treewidth. Regarding the parameterization by the vertex cover number, we prove that {\sc Metric Dimension} does not admit a polynomial kernel under this parameterization unless $\NP\subseteq \coNP/poly$. We observe that a similar result holds when the parameter is the distance to clique. On the positive side, we show that {\sc Metric Dimension} is \FPT\ when parameterized by either the distance to cluster or the distance to co-cluster, both of which are smaller parameters than the vertex cover number.
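The resolving-set definition above is straightforward to verify directly. This sketch (ours, not from the paper) checks a candidate set S using BFS distances:

```python
from collections import deque

def bfs_dist(adj, src):
    """Single-source shortest-path distances in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def is_resolving_set(adj, S):
    """S resolves G iff every vertex gets a distinct vector of
    distances to the vertices of S."""
    dists = [bfs_dist(adj, w) for w in S]
    vectors = set()
    for v in adj:
        vec = tuple(d.get(v) for d in dists)
        if vec in vectors:
            return False
        vectors.add(vec)
    return True

# Path a-b-c-d: the endpoint {a} resolves it (distances 0,1,2,3),
# but the middle vertex {b} does not (a and c are both at distance 1).
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(is_resolving_set(path, ["a"]))  # True
print(is_resolving_set(path, ["b"]))  # False
```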
10:40-12:00 Session 4B
10:40
Regular Monoidal Languages
ABSTRACT. We introduce regular languages of morphisms in free strict monoidal categories, with their associated grammars and automata. These subsume the classical theory of regular languages of words and trees, but also open up a much wider class of languages of string diagrams. We use the algebra of monoidal categories to investigate the properties of regular monoidal languages, define a class of determinizable monoidal automata, and provide a sufficient condition for recognizability by deterministic monoidal automata.
11:05
A robust class of languages of 2-nested words
ABSTRACT. Regular nested word languages (a.k.a. visibly pushdown languages) strictly extend regular word languages, while preserving their main closure and decidability properties. Previous works have shown that considering languages of 2-nested words, i.e. words enriched with two matchings (a.k.a. $2$-visibly pushdown languages), is not as successful: the corresponding model of automata is not closed under determinization. In this work, inspired by homomorphic representations of indexed languages, we identify a subclass of $2$-nested words, which we call $2$-wave words. This class strictly extends the class of nested words, while preserving its main properties. More precisely, we prove closure under determinization of the corresponding automaton model, we provide a logical characterization of the recognized languages, and show that the corresponding graphs have bounded treewidth. As a consequence, we derive important closure and decidability properties. Last, we show that the word projections of the languages we define belong to the class of linear indexed languages.
11:30
On extended boundary sequences of morphic and Sturmian words
ABSTRACT. Generalizing the notion of the boundary sequence introduced by Chen and Wen, the $n$th term of the $\ell$-boundary sequence of an infinite word is the finite set of pairs $(u,v)$ of prefixes and suffixes of length $\ell$ appearing in factors $uyv$ of length $n+\ell$ ($n\ge \ell\ge 1$). Otherwise stated, for increasing values of $n$, one looks for all pairs of factors of length $\ell$ separated by $n-\ell$ symbols.
For the large class of addable numeration systems $U$, we show that if an infinite word is $U$-automatic, then the same holds for its $\ell$-boundary sequence. In particular, they are both morphic (or generated by an HD0L system). We also provide examples of numeration systems and $U$-automatic words with a boundary sequence that is not $U$-automatic. In the second part of the paper, we study the $\ell$-boundary sequence of a Sturmian word. We show that it is obtained through a sliding block code from the characteristic Sturmian word of the same slope. We also show that it is the image under a morphism of some other characteristic Sturmian word.
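The definition of the $\ell$-boundary sequence can be illustrated on a finite prefix of an infinite word. This small sketch (ours, showing the definition only, not the paper's constructions) collects all pairs of length-$\ell$ factors whose occurrences start $n$ positions apart, i.e. are separated by $n-\ell$ symbols:

```python
def boundary_term(w, n, ell):
    """n-th term of the ell-boundary sequence of the word w
    (finite approximation of an infinite word): the set of pairs
    (u, v) of length-ell factors occurring n positions apart,
    for n >= ell >= 1."""
    pairs = set()
    for i in range(len(w) - n - ell + 1):
        pairs.add((w[i:i + ell], w[i + n:i + n + ell]))
    return pairs

# A Thue-Morse-like prefix; single letters (ell = 1) one symbol apart (n = 2).
w = "0110100110010110"
print(boundary_term(w, 2, 1))
```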
12:20-14:00 Lunch Break
14:00-15:40 Session 5A
14:00
Graph Similarity Based on Matrix Norms
ABSTRACT. Quantifying the similarity between two graphs is a fundamental algorithmic problem at the heart of many data analysis tasks for graph-based data. In this paper, we study the computational complexity of a family of similarity measures based on quantifying the mismatch between the two graphs, that is, the “symmetric difference” of the graphs under an optimal alignment of the vertices. An important example is similarity based on graph edit distance. While edit distance calculates the “global” mismatch, that is, the number of edges in the symmetric difference, our main focus is on “local” measures calculating the maximum mismatch per vertex. Mathematically, our similarity measures are best expressed in terms of the adjacency matrices: the mismatch between graphs is expressed as the difference of their adjacency matrices (under an optimal alignment), and we measure it by applying some matrix norm. Roughly speaking, global measures like graph edit distance correspond to entrywise matrix norms like the Frobenius norm and local measures correspond to operator norms like the spectral norm. We prove a number of strong NP-hardness and inapproximability results even for every restricted graph classes such as bounded-degree trees.
14:25
Approximation algorithms for covering vertices by long paths
ABSTRACT. Given a graph, the general problem of covering the maximum number of vertices by a collection of vertex-disjoint long paths has seemingly escaped attention in the literature. A path containing at least $k$ vertices is considered long. When $k \le 3$, the problem is polynomial time solvable; when $k$ is the total number of vertices, the problem reduces to the Hamiltonian path problem, which is NP-complete. For a fixed $k \ge 4$, the problem is NP-hard and the best known approximation algorithm for the weighted set packing problem implies a $k$-approximation algorithm. To the best of our knowledge, there is no approximation algorithm directly designed for the general problem, except a recent $4$-approximation algorithm when $k = 4$. We propose the first $(0.4394 k + O(1))$-approximation algorithm for the general problem and an improved $2$-approximation algorithm when $k = 4$. Both algorithms are based on local improvement, and their performance analyses are done via amortization.
14:50
Reducing the Vertex Cover Number via Edge Contractions
ABSTRACT. The CONTRACTION(vc) problem takes as input a graph $G$ on $n$ vertices and two integers $k$ and $d$, and asks whether one can contract at most $k$ edges to reduce the size of a minimum vertex cover of $G$ by at least $d$. Recently, Lima et al. [MFCS 2020] proved, among other results, that unlike most of the so-called blocker problems, CONTRACTION(vc) admits an XP algorithm running in time $f(d) \cdot n^{O(d)}$. They left open the question of whether this problem is FPT under this parameterization. In this article, we continue this line of research and prove the following results:
1. CONTRACTION(vc) is W[1]-hard parameterized by $k+d$. Moreover, unless the ETH fails, the problem does not admit an algorithm running in time $f(k+d) \cdot n^{o(k+d)}$ for any function $f$. In particular, this answers the open question stated in Lima et al. [MFCS 2020] in the negative.
2. It is NP-hard to decide whether an instance $(G,k,d)$ of CONTRACTION(vc) is a yes-instance even when $k=d$, hence enhancing our understanding of the classical complexity of the problem.
3. CONTRACTION(vc) can be solved in time $2^{O(d)} \cdot n^{k-d+O(1)}$. This XP algorithm improves the one of Lima et al. [MFCS 2020], which uses Courcelle's theorem as a subroutine and hence, the $f(d)$-factor in the running time is non-explicit and probably very large. On the other hand, it shows that when $k=d$, the problem is FPT parameterized by $d$ (or by $k$).
15:15
Conflict-free Coloring on Claw-free graphs and Interval graphs
ABSTRACT. A Conflict-Free Open Neighborhood coloring, abbreviated CFON^* coloring, of a graph G=(V,E) using k colors is an assignment of colors from a set of k colors to a subset of vertices of V(G) such that every vertex sees some color exactly once in its open neighborhood. The minimum k for which G has a CFON^* coloring using k colors is called the CFON^* chromatic number of G, denoted by \chi^*_{ON}(G). The analogous notion for closed neighborhood is called CFCN^* coloring and the analogous parameter is denoted by \chi^*_{CN}(G). The problem of deciding whether a given graph admits a CFON^* (or CFCN^*) coloring that uses k colors is NP-complete. Below, we describe briefly the main results of this paper.
* For k\geq 3, we show that if G is a K_{1,k}-free graph then \chi^*_{ON}(G) = O(k^2\log \Delta), where \Delta denotes the maximum degree of G. Debski and Przybylo in [J. Graph Theory, 2021] had shown that if G is a line graph, then \chi^*_{ON}(G) = O(\log \Delta). As an open question, they had asked if their result could be extended to claw-free (K_{1,3}-free) graphs, which are a superclass of line graphs. Since it is known that the CFCN^* chromatic number of a graph is at most twice its CFON^* chromatic number, our result positively answers the open question posed by Debski and Przybylo.
* We show that if the minimum degree of any vertex in G is \Omega(\frac{\Delta}{\log^{\epsilon} \Delta}) for some \epsilon \geq 0, then \chi^*_{ON}(G) = O(\log^{1+\epsilon}\Delta). This is a generalization of the result given by Debski and Przybylo in the same paper, where they showed that if the minimum degree of any vertex in G is \Omega(\Delta), then \chi^*_{ON}(G) = O(\log\Delta).
* We give a polynomial time algorithm to compute \chi_{ON}^*(G) for interval graphs G. This answers in the positive the open question posed by Reddy [Theoretical Comp. Science, 2018] to determine whether the CFON^* chromatic number can be computed in polynomial time on interval graphs.
* We explore subclasses of bipartite graphs that include bipartite permutation and biconvex bipartite graphs, and give polynomial time algorithms to compute their CFON^* chromatic number. This is interesting as Abel et al. [SIDMA, 2018] had shown that it is NP-complete to decide whether a planar bipartite graph G has \chi_{ON}^*(G) = k where k \in {1, 2, 3}.

14:00-15:40 Session 5B
14:00
Polynomial Time Algorithm for ARRIVAL on Tree-like Multigraphs
ABSTRACT. A rotor walk in a directed graph can be thought of as a deterministic version of a Markov Chain, where a pebble moves from vertex to vertex following a simple rule until a terminal vertex, or sink, has been reached. The ARRIVAL problem, as defined by Dohrau et al.
[7], consists in determining which sink will be reached. While the walk itself can take an exponential number of steps, this problem belongs to the complexity class NP ∩ co-NP without being known to be in P. In this work, we define a class of directed graphs, namely tree-like multigraphs, which are multigraphs having the global shape of an undirected tree. We prove that in this class, ARRIVAL can be solved in almost linear time, while the number of steps of a rotor walk can still be exponential. Then, we give an application of this result to solve some deterministic analogs of stochastic models (e.g., Markovian decision processes, Stochastic Games).

14:25
Skolem Meets Schanuel
PRESENTER: Joris Nieuwveld
ABSTRACT. The celebrated Skolem-Mahler-Lech theorem states that the set of zeros of a linear recurrence sequence is the union of a finite set and finitely many arithmetic progressions. The corresponding computational question, the Skolem Problem, asks to decide whether a given linear recurrence sequence has a zero term. Although the Skolem-Mahler-Lech theorem is almost 90 years old, decidability of the Skolem Problem remains open. The main contribution of this paper is an algorithm to solve the Skolem Problem for simple linear recurrence sequences (those with simple characteristic roots). Whenever the algorithm terminates, it produces a certificate that its output is correct---either a zero of the sequence or a witness that no zero exists. We give a proof that the algorithm terminates assuming two classical number-theoretic conjectures: the Skolem Conjecture (also known as the exponential local-global principle) and the p-adic Schanuel conjecture.

14:50
Deepening the (Parameterized) Complexity Analysis of Incremental Stable Matching Problems
ABSTRACT. When computing stable matchings, it is usually assumed that the preferences of the agents in the matching market are fixed. However, in many realistic scenarios, preferences change over time.
Consequently, an initially stable matching may become unstable. Then, a natural goal is to find a matching which is stable with respect to the modified preferences and as close as possible to the initial one. For Stable Marriage/Roommates, this problem was formally defined as Incremental Stable Marriage/Roommates by Bredereck et al. [AAAI '20]. As they showed that Incremental Stable Roommates and Incremental Stable Marriage with Ties are NP-hard, we focus on the parameterized complexity of these problems. We answer two open questions of Bredereck et al. [AAAI '20]: We show that Incremental Stable Roommates is W[1]-hard parameterized by the number of changes in the preferences, yet admits an intricate XP-algorithm, and we show that Incremental Stable Marriage with Ties is W[1]-hard parameterized by the number of ties. Furthermore, we analyze the influence of the "degree of similarity" between the agents' preference lists, identifying several polynomial-time solvable and fixed-parameter tractable cases, but also proving that Incremental Stable Roommates and Incremental Stable Marriage with Ties parameterized by the number of different preference lists are W[1]-hard.

15:15
On the Skolem Problem for Reversible Sequences
ABSTRACT. Given an integer linear recurrence sequence ⟨X_0, X_1, X_2,...⟩, the Skolem Problem asks to determine whether there is a natural number n such that X_n = 0. The decidability of the Skolem Problem is a long-standing open problem in verification. In a recent preprint, Lipton, Luca, Nieuwveld, Ouaknine, Purser, and Worrell prove that the Skolem Problem is decidable for a class of reversible sequences of order at most seven. Herein, we give an alternative proof of the result. The novelty of our approach arises from our employment of theorems concerning the polynomial relations between Galois conjugates.
In particular, we make repeated use of a result due to Dubickas and Smyth for algebraic integers that lie alongside all their Galois conjugates on two (but not one) concentric circles centred at the origin.

15:40-16:10 Coffee Break

16:10-17:30 Session 6A
16:10
On Upward-Planar L-Drawings of Graphs
PRESENTER: Sabine Cornelsen
ABSTRACT. In an upward-planar L-drawing of a directed acyclic graph (DAG) each edge $e$ is represented as a polyline composed of a vertical segment with its lowest endpoint at the tail of $e$ and of a horizontal segment ending at the head of $e$. Distinct edges may overlap, but not cross. Recently, upward-planar L-drawings have been studied for st-graphs, i.e., planar DAGs with a single source s and a single sink t containing an edge directed from s to t. It is known that a plane st-graph, i.e., an embedded st-graph in which the edge (s,t) is incident to the outer face, admits an upward-planar L-drawing if and only if it admits a bitonic st-ordering, which can be tested in linear time. We study upward-planar L-drawings of DAGs that are not necessarily st-graphs. On the combinatorial side, we show that a plane DAG admits an upward-planar L-drawing if and only if it is a subgraph of a plane st-graph admitting a bitonic st-ordering. This allows us to show that not every tree with a fixed bimodal embedding admits an upward-planar L-drawing. Moreover, we prove that any acyclic cactus with a single source (or a single sink) admits an upward-planar L-drawing, which respects a given outerplanar embedding if there are no transitive edges. On the algorithmic side, we consider DAGs with a single source (or a single sink). We give linear-time testing algorithms for these DAGs in two cases: (i) when the drawing must respect a prescribed embedding and (ii) when no restriction is given on the embedding, but each biconnected component is series-parallel.

16:35
Extending Partial Representations of Circle Graphs in Near-Linear Time
ABSTRACT.
The \emph{partial representation extension problem} generalizes the recognition problem for geometric intersection graphs. The input consists of a graph $G$, a subgraph $H \subseteq G$ and a representation $\mathcal H$ of $H$. The question is whether $G$ admits a representation $\mathcal G$ whose restriction to $H$ is $\mathcal H$. We study this question for \emph{circle graphs}, which are intersection graphs of chords of a circle. Their representations are called \emph{chord diagrams}. We show that for a graph with $n$ vertices and $m$ edges the partial representation extension problem can be solved in $O((n + m) \alpha(n + m))$ time, where $\alpha$ is the inverse Ackermann function. This improves over an $O(n^3)$-time algorithm by Chaplick, Fulek and Klav\'ik [2019]. The main technical contributions are a canonical way of orienting chord diagrams and a novel compact representation of the set of all canonically oriented chord diagrams that represent a given circle graph $G$, which is of independent interest.
17:00
RAC Drawings of Graphs with Low Degree
PRESENTER: Julia Katheder
ABSTRACT. Motivated by cognitive experiments providing evidence that large crossing-angles have a limited impact on the readability of a graph drawing, RAC (Right Angle Crossing) drawings were introduced to address the problem of producing readable representations of non-planar graphs by supporting the optimal case in which all crossings form 90° angles.
In this work, we make progress on the problem of finding RAC drawings of graphs of low degree. In this context, a long-standing open question asks whether all degree-3 graphs admit straight-line RAC drawings. This question has been positively answered for the Hamiltonian degree-3 graphs. We improve on this result by extending it to the class of 3-edge-colorable degree-3 graphs. When each edge is allowed to have one bend, we prove that degree-4 graphs admit such RAC drawings, a result which was previously known only for degree-3 graphs. Finally, we show that 7-edge-colorable degree-7 graphs admit RAC drawings with two bends per edge. This improves over the previous result on degree-6 graphs.
16:10-17:30 Session 6B
16:10
Continuous rational functions are deterministic regular
ABSTRACT. A word-to-word function is rational if it can be realized by a non-deterministic one-way transducer. Over finite words, it is a classical result that any rational function is regular, i.e. it can be computed by a deterministic two-way transducer, or equivalently, by a deterministic streaming string transducer (a one-way automaton which manipulates string registers).
This result no longer holds for infinite words, since a non-deterministic one-way transducer can guess, and check along its run, properties such as infinitely many occurrences of some pattern, which is impossible for a deterministic machine. In this paper, we identify the class of rational functions over infinite words which are also computable by a deterministic two-way transducer. It coincides with the class of rational functions which are continuous, and this property can thus be decided. This solves an open question raised in a previous paper of Dave et al.
16:35
Deciding Emptiness for Constraint Automata on Strings with the Prefix and Suffix Order
ABSTRACT. We study constraint automata that recognize data languages on finite string values. Each transition of the automaton is labelled with a constraint restricting the string value at the current and the next position of the data word in terms of the prefix and the suffix order. We prove that the emptiness problem for such constraint automata with Büchi acceptance condition is NL-complete. We remark that since the constraints are formed by two partial orders, prefix and suffix, we cannot exploit existing techniques for similar formalisms. Our decision procedure relies on a decidable characterization for those infinite paths in the graph underlying the automaton that can be complemented with string values to yield a Büchi-accepting run. Our result is - to the best of our knowledge - the first work in this context that considers both prefix and suffix, and it is a first step into answering an open question posed by Demri and Deters.
17:00
Learning Deterministic Visibly Pushdown Automata under Accessible Stack
ABSTRACT. We study the problem of actively learning deterministic visibly pushdown automata. We show that in the classical L*-setting, efficient active learning algorithms are not possible. To overcome this difficulty, we propose the accessible stack setting, where the algorithm has read and write access to the stack. In this setting, we show that active learning can be done in polynomial time in the size of the target automaton and the counterexamples provided by the teacher. As counterexamples of exponential size are sometimes inevitable, we consider an algorithm working with words in a compressed representation via (visibly) Straight-Line Programs. Employing compression allows us to obtain an algorithm where the teacher and the learner work in time polynomial in the size of the target automaton alone.
https://questioncove.com/updates/508716dde4b0c59b90b447c9 | OpenStudy (anonymous):
A rancher has 200 ft of fencing to enclose two adjacent rectangular corrals and run a strip of fencing down between them to separate the corrals. What dimensions should be used so that the enclosed area will be a maximum?
OpenStudy (anonymous):
@Hero or @lgbasallote
OpenStudy (anonymous):
guess and check?
OpenStudy (anonymous):
this is what I've got... $P=2x+2y~~~~~~~~~~~~~~A=xy$$200=2y+3x~~~~~~\implies~~~~~~y=100-{3\over2}~x~~~~~~and~~~~~~x={200\over3}~-{2\over3}~y$$A=\left(100-{3\over2}~x\right)\times\ x$ $\implies~~~~*plug~into~graphing~calculator~and~find~maximum~y-value*$ $Max~Area~is~~ 1666~{2\over3}~ft^2~~~~~~\implies~~~~~~dimensions~are~~x=33{1\over3}~ft~~and~~y=50~ft$
OpenStudy (anonymous):
$Max~Area~is~~ 1666~{2\over3}~ft^2~~~~~~\implies~~~~~~dimensions~are~~x=33{1\over3}~~and~~y=50$
OpenStudy (anonymous):
♥ I ROCK ^_^ ♥
OpenStudy (anonymous):
no you see there are 3 x's
OpenStudy (anonymous):
[drawing]
OpenStudy (anonymous):
[drawing] there's a STRIP GOING DOWN THE MIDDLE which is included in the fencing... so when that's added it equals 200
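To double-check the thread's numbers without a graphing calculator (this snippet is my own illustration, not part of the original discussion): with the constraint 200 = 3x + 2y, the area A(x) = x(100 - 3x/2) is a downward-opening parabola, so the vertex gives the maximum.

```python
from fractions import Fraction

# Constraint from the thread: 200 = 3x + 2y  =>  y = 100 - (3/2) x
def area(x):
    return x * (100 - Fraction(3, 2) * x)

# A(x) = 100x - (3/2)x^2 has its vertex at x = -b/(2a) = 100/3
x_star = Fraction(100, 3)                  # = 33 1/3 ft
y_star = 100 - Fraction(3, 2) * x_star     # = 50 ft
print(x_star, y_star, area(x_star))        # 100/3 50 5000/3  (= 1666 2/3 ft^2)
```

This confirms the dimensions x = 33 1/3 ft and y = 50 ft with maximum area 1666 2/3 ft².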
https://www.shaalaa.com/question-bank-solutions/simplify-following-surd-rationalisation-of-surds_76597 | # Simplify the Following Surd. - Algebra
Simplify.
$\sqrt{7} - \frac{3}{5}\sqrt{7} + 2\sqrt{7}$
#### Solution
$\sqrt{7} - \frac{3}{5}\sqrt{7} + 2\sqrt{7}$
$= 3\sqrt{7} - \frac{3}{5}\sqrt{7}$
$= \frac{15}{5}\sqrt{7} - \frac{3}{5}\sqrt{7}$
$= \frac{12}{5}\sqrt{7}$
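A quick check of the simplification (my own addition, not part of the textbook solution): collecting the coefficients of the common surd gives 1 - 3/5 + 2 = 12/5.

```python
import math
from fractions import Fraction

# Collect the coefficients of sqrt(7): 1 - 3/5 + 2 = 12/5
coeff = Fraction(1) - Fraction(3, 5) + Fraction(2)

# Numeric double-check of the whole expression
lhs = math.sqrt(7) - (3 / 5) * math.sqrt(7) + 2 * math.sqrt(7)
rhs = (12 / 5) * math.sqrt(7)
print(coeff, abs(lhs - rhs) < 1e-12)   # 12/5 True
```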
#### APPEARS IN
Balbharati Mathematics 1 Algebra 9th Standard Maharashtra State Board
Chapter 2 Real Numbers
Practice Set 2.3 | Q 6.4 | Page 30 | 2021-04-22 17:31:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8834527134895325, "perplexity": 7227.893687423555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594341.91/warc/CC-MAIN-20210422160833-20210422190833-00210.warc.gz"} |
https://www.aimsciences.org/article/doi/10.3934/dcds.2014.34.4487 | American Institute of Mathematical Sciences
November 2014, 34(11): 4487-4513. doi: 10.3934/dcds.2014.34.4487
Renormalizations of circle homeomorphisms with a single break point
1 Faculty of Mathematics and Mechanics. Samarkand State University, Boulevard Street 15, 140104 Samarkand, Uzbekistan 2 Turin Politechnic University in Tashkent, Kichik halqa yuli 17, 100095 Tashkent, Uzbekistan 3 Institut für Theoretische Physik, TU Clausthal, Leibnizstrasse 10, D-38678 Clausthal-Zellerfeld, Germany
Received November 2013 Revised March 2014 Published May 2014
Let $f$ be an orientation preserving circle homeomorphism with a single break point $x_b,$ i.e. with a jump in the first derivative $f'$ at the point $x_b,$ and with irrational rotation number $\rho=\rho_{f}.$ Suppose that $f$ satisfies the Katznelson and Ornstein smoothness conditions, i.e. $f'$ is absolutely continuous on $[x_b,x_b+1]$ and $f''(x)\in \mathbb{L}^{p}([0,1), d\ell)$ for some $p>1$, where $\ell$ is Lebesgue measure. We prove that the renormalizations of $f$ are approximated by linear-fractional functions in $\mathbb{C}^{1+L^{1}}$, that is, $f$ is approximated in the $C^{1}$-norm and $f''$ is approximated in the $L^{1}$-norm. It is also shown that renormalizations of circle diffeomorphisms with irrational rotation number satisfying the Katznelson and Ornstein smoothness conditions are close to linear functions in the $\mathbb{C}^{1+L^{1}}$-norm.
Citation: Abdumajid Begmatov, Akhtam Dzhalilov, Dieter Mayer. Renormalizations of circle homeomorphisms with a single break point. Discrete & Continuous Dynamical Systems - A, 2014, 34 (11) : 4487-4513. doi: 10.3934/dcds.2014.34.4487
References:
[1] V. I. Arnol'd, Small denominators: I. Mappings from the circle onto itself,, Izv. Akad. Nauk SSSR, 25 (1961), 21. Google Scholar [2] H. Akhadkulov, A. Dzhalilov and D. Mayer, On conjugations of circle homeomorphisms with two break points,, Ergod. Theor. and Dynam. Syst., 34 (2014), 725. doi: 10.1017/etds.2012.159. Google Scholar [3] I. P. Cornfeld, S. V. Fomin and Ya. G. Sinai, Ergodic Theory,, Springer-Verlag, (1982). doi: 10.1007/978-1-4615-6927-5. Google Scholar [4] A. Denjoy, Sur les courbes définies par les équations différentielles à la surface du tore,, J. Math. Pures Appl., 11 (1932), 333. Google Scholar [5] A. A. Dzhalilov and K. M. Khanin, On invariant measure for homeomorphisms of a circle with a point of break,, Funct. Anal. Appl., 32 (1998), 153. doi: 10.1007/BF02463336. Google Scholar [6] A. A. Dzhalilov and I. Liousse, Circle homeomorphismswith two break points,, Nonlinearity, 19 (2006), 1951. doi: 10.1088/0951-7715/19/8/010. Google Scholar [7] A. A. Dzhalilov, I. Liousse and D. Mayer, Singular measures of piecewise smooth circle homeomorphisms with two break points,, Discrete and continuous dynamical systems, 24 (2009), 381. doi: 10.3934/dcds.2009.24.381. Google Scholar [8] A. A. Dzhalilov, H. Akin and S. Temir, Conjugations between circle maps with a single break point,, Journal of Mathematical Analysis and Applications, 366 (2010), 1. doi: 10.1016/j.jmaa.2009.12.050. Google Scholar [9] A. A. Dzhalilov, D. Mayer and U. A. Safarov, Piecwise-smooth circle homeomorphisms with several break points,, Izvestiya RAN: Ser. Mat., 76 (2012), 101. Google Scholar [10] E. de Faria and W. de Melo, Rigidity of critical circle mappings,, I. J. Eur. Math. Soc. (JEMS), 1 (1999), 339. doi: 10.1007/s100970050011. Google Scholar [11] M. Herman, Sur la conjugaison différentiable des difféomorphismes du cercle à des rotations,, Inst. Hautes Etudes Sci. Publ. Math., 49 (1979), 5. doi: 10.1007/BF02684798. Google Scholar [12] Y. Katznelson and D. 
Ornstein, The differentability continuity of the conjugation of certain diffeomorphisms of the circle,, Ergod. Theor. Dyn. Syst., 9 (1989), 643. doi: 10.1017/S0143385700005277. Google Scholar [13] Y. Katznelson and D. Ornstein, The absolute continuity of the conjugation of certain diffeomorphisms of the circle,, Ergod. Theor. Dyn. Syst., 9 (1989), 681. doi: 10.1017/S0143385700005289. Google Scholar [14] K. M. Khanin and Ya. G. Sinai, Smoothness of conjugacies of diffeomorphisms of the circle with rotations,, Russ. Math. Surv., 44 (1989), 69. doi: 10.1070/RM1989v044n01ABEH002008. Google Scholar [15] K. M. Khanin and E. B. Vul, Circle homeomorphisms with weak discontinuities,, Advances in Soviet Mathematics, 3 (1991), 57. Google Scholar [16] K. M. Khanin and D. Khmelev, Renormalizations and Rigidity Theory for Circle Homeomorphisms with Singularities of the Break Type,, Commun. Math. Phys., 235 (2003), 69. doi: 10.1007/s00220-003-0809-5. Google Scholar [17] I. Liousse, PL Homeomorphisms of the circle which are piecewise $C^1$ conjugate to irrational rotations,, Bull. Braz. Math. Soc., 35 (2004), 269. doi: 10.1007/s00574-004-0014-y. Google Scholar [18] J. Stark, Smooth conjugacy and renormalization for diffeomorfisms of the circle,, Nonlinearity, 1 (1988), 541. doi: 10.1088/0951-7715/1/4/004. Google Scholar [19] G. Swiatek, Rational rotation number for maps of the circle,, Commun. Math. Phys., (1988), 109. doi: 10.1007/BF01218263. Google Scholar [20] M. Stein, Groups of piecewise linear homeomorphisms,, Trans. A.M.S., 332 (1992), 477. doi: 10.1090/S0002-9947-1992-1094555-4. Google Scholar [21] A. Yu. Teplinskii and K. M. Khanin, Robust rigidity for circle diffeomorphisms with singularities,, Inventiones mathematicae, 169 (2007), 193. doi: 10.1007/s00222-007-0047-0. Google Scholar [22] J. C. Yoccoz, Il n'y a pas de contre-exemple de Denjoy analytique,, C. R. Acad. Sci. Paris, 298 (1984), 141. Google Scholar
https://solvedlib.com/the-amount-of-fill-weight-of-contents-put-into-a,340715 | # The amount of fill (weight of contents) put into a glass jar of spaghetti sauce is...
###### Question:
The amount of fill (weight of contents) put into a glass jar of spaghetti sauce is normally distributed with mean μ = 841 grams and standard deviation σ = 13 grams.
(e) Find the probability that a random sample of 24 jars has a mean weight between 841 and 862 grams. (Give your answer correct to four decimal places.)
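One way to work part (e), sketched below as my own illustration rather than the textbook's solution: the sample mean of n = 24 jars is normal with mean 841 g and standard error 13/√24 ≈ 2.65 g, so standardize both endpoints and subtract the normal CDF values.

```python
import math

mu, sigma, n = 841, 13, 24
se = sigma / math.sqrt(n)            # standard error of the mean, ~2.6536 g

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# P(841 < xbar < 862) = Phi(z_hi) - Phi(z_lo)
z_lo = (841 - mu) / se               # 0
z_hi = (862 - mu) / se               # ~7.91, essentially the far right tail
p = phi(z_hi) - phi(z_lo)
print(round(p, 4))                   # 0.5  (i.e., 0.5000 to four decimal places)
```

Because 862 g is nearly 8 standard errors above the mean, the upper tail contributes essentially nothing and the answer is 0.5000.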
#### Similar Solved Questions
##### In your own words, describe inequity theory. In your opinion, what is an example of a...
In your own words, describe inequity theory. In your opinion, what is an example of a way to correct inequity tension? Use an example....
A spring is initially compressed a distance of 0.15 m. A 2.5 kg mass is held against the compressed spring. The spring constant is 530 N/m. There is no friction. The mass is released and moves to the right on the track. What is the velocity of the mass at point A? What is the velocity of the mass at point B, when at a height of 0.10 m? What is the maximum vertical height that the mass will reach (point C)?
A spring is initially compressed a distance of 0.15 m. A 2.5 kg mass is held against the compressed spring. The spring constant is 530 N/m. There is no friction. The mass is released and moves to the right on the track. What is the velocity of the mass at point A? What is the velocity of the m...
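Assuming the standard energy-conservation approach with g = 9.8 m/s² (my own sketch; points A, B, and C refer to the figure that did not survive extraction, with A taken at the bottom of the track):

```python
import math

k, x, m, g = 530.0, 0.15, 2.5, 9.8     # N/m, m, kg, m/s^2
E = 0.5 * k * x**2                     # spring PE released = 5.9625 J

v_A = math.sqrt(2 * E / m)                    # all spring PE -> KE: ~2.18 m/s
v_B = math.sqrt(2 * (E - m * g * 0.10) / m)   # at height 0.10 m: ~1.68 m/s
h_C = E / (m * g)                             # all PE -> gravity: ~0.244 m
print(round(v_A, 2), round(v_B, 2), round(h_C, 3))
```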
Let X1, ..., Xn be a random sample of size n from an infinite population, and assume Xi = a + bUi with the constants a > 0 and b ≠ 0 unknown and U a standard uniform distributed random variable with cdf F_U(x) := P(U ≤ x) = 0 if x < 0, x if 0 ≤ x < 1, 1 if x ≥ 1. 1. Compute the cdf of the random variable X1. 2. Compute E(X1) and Var(X1). 3. Give the method of moments estimators of the unknown parameters a and b. Explain how you construct these estimators!
Let X1, ..., Xn be a random sample of size n from an infinite population, and assume Xi = a + bUi with the constants a > 0 and b ≠ 0 unknown and U a standard uniform distributed random variable with cdf F_U(x) := P(U ≤ x) = 0 if x < 0, x if 0 ≤ x < 1, 1 if x ≥ 1. 1. Compute the cdf of the random variabl...
##### 1. If a gas under 5 atmospheres of presssure has a volume of 2 liters and...
1. If a gas under 5 atmospheres of presssure has a volume of 2 liters and a temperature of 10 degree celsius, What will be the new volume if the pressure increases to 7 atmospheres and the temperature rises to 50 degrees celsius? A. 1.6 L b. 3.7 L C neither A nor B 2. 300 g of ice at zero degree cen...
In a study to determine the frequency and dependency of color-blindness relative to females and males, 1,000 people were chosen at random, and the following results were recorded: [table of counts (female/male vs. color-blind/normal) garbled in extraction] (a) What is the probability that a person is a woman, given that the person is color-blind? (b) What is the probability that a person is color-blind, given that the person is male? (c) Are the events color-blindness and male independent? Prove your answer!
In a study to determine the frequency and dependency of color-blindness relative to females and males, 1,000 people were chosen at random, and the following results were recorded: [table garbled in extraction] (a) What is the probability that a person is a woman, given that the person is color-blind? ...
According to the historical data, the life expectancy in the United Kingdom [differs from] the life expectancy in the United States. A new study has been made to see whether this has changed. Records of 283 individuals from the United Kingdom who died recently are selected at random. The 283 individuals lived a mean of 78 years with a standard deviation of [value garbled] years. Records of 300 individuals from the United States who died recently are selected at random and independently. The 300 individuals lived a mean of 76.8 years with a stan
According to the historical data, the life expectancy in the United Kingdom [differs from] the life expectancy in the United States. A new study has been made to see whether this has changed. Records of 283 individuals from the United Kingdom who died recently are selected at random. The 283 individuals lived a mean of 78 years ...
1. A 75.0-kg skier rides a 2830-m-long lift to the top of the mountain. The lift makes an angle of 14.6° with the horizontal. What is the change in the skier's gravitational potential energy? 2. A person starts from rest, with the rope held in the horizontal position, swings downward, and then lets go of the rope. Three forces act on them: the weight, the tension in the rope, and the force of air resistance. Can the principle of conservation of energy be used to calculate his final speed?
1. A 75.0-kg skier rides a 2830-m-long lift to the top of the mountain. The lift makes an angle of 14.6° with the horizontal. What is the change in the skier's gravitational potential energy? 2. A person starts from rest, with the rope held in the horizontal position, swings downward, and then ...
##### 5. Let y E C2([0, T]; R), T > 0 satisfy y"(t) = 피t, y(0) = y'(0) = 0 e R. Use Picard-Lindelöf 1+...
5. Let y E C2([0, T]; R), T > 0 satisfy y"(t) = 피t, y(0) = y'(0) = 0 e R. Use Picard-Lindelöf 1+t' to prove that a unique solution to the IVP exists for short time, as follows: (a) Let b E R2, A E M2 (R) . Show that any function g : R2 -R2.9(x) = Ax+b is Lipschitz. 1 mark ...
##### 2. Solve (x^2 + xy - y) dx + xy dy = 0. (17 pts)
2. Solve (x^2 + xy - y) dx + xy dy = 0. (17 pts)...
##### What is the answer from 1to 8 plz MULTIPLE CHOICE. Choose the one alternative that best...
what is the answer from 1to 8 plz MULTIPLE CHOICE. Choose the one alternative that best completes the statement or answer the question 1) Which of the following costs change in total in direct proportion to a change in volume? A) period costs B) variable costs C) fixed costs D) mixed costs 2) The...
##### What is the electron-pair geometry for & central atom in molecule that is sp' &-hybridized?
What is the electron-pair geometry for & central atom in molecule that is sp' &-hybridized?...
##### Graph the function Kx)DalincWalcDetermine the intervalls) for which f(x}(Enter your answer using Interval notation. Enter EMPTYfor the emptv set )
Graph the function Kx) DalincWalc Determine the intervalls) for which f(x} (Enter your answer using Interval notation. Enter EMPTY for the emptv set )...
##### Which conic section has the polar equation r = 1/(1 - cos θ)?
Which conic section has the polar equation r = 1/(1 - cos θ)?...
##### Question 4: A researcher wishes to investigate the relationship between calcium intake and knowledge about calcium in sports science students. The sample data are given in the table below: [table of respondents' knowledge scores (out of 100) and calcium intakes garbled in extraction] (a) Plot the data in a scatter diagram and interpret the relationship you can observe. (5 marks) (b) Calculate the correlation between knowledge score and calcium intake. How do you interpret the correlation between the factors? (marks) (c) Calculate the regression equation (use knowledge score
Question 4: A researcher wishes to investigate the relationship between calcium intake and knowledge about calcium in sports science students. The sample data are given in the table below: [table garbled in extraction] (a) Plot the data in a scatter diagram and interpret the relatio...
##### Discuss how important it is to employers that managed care plan demonstrate that they offer quality...
Discuss how important it is to employers that managed care plan demonstrate that they offer quality care?...
##### A) Do the limit exist? lim(x,y)->(1,1) (sqrt(x+y) -sqrt(2))^(3/2)b) sketch atleast 2 level curves to the functionf(x,y)=x^2+y^2
a) Do the limit exist? lim(x,y)->(1,1) (sqrt(x+y) - sqrt(2))^(3/2) b) sketch atleast 2 level curves to the function f(x,y)=x^2+y^2...
##### The manufacturer of hardness testing equipment uses steel-ball indenters to penetrate metal that is being tested....
The manufacturer of hardness testing equipment uses steel-ball indenters to penetrate metal that is being tested. However, the manufacturer thinks it would be better to use a diamond indenter so that all types of metal can be tested. Because of differences between the two types of indenters, it is s...
##### What pressure gradient in Pa is required to pump 2.80 L of blood in 45.0 s...
What pressure gradient in Pa is required to pump 2.80 L of blood in 45.0 s through an artery of length L = 0.594 m with an artery radius R = 3.79 mm? Useful information: 1,000 L = 1 m³; η_blood = 4.00 × 10⁻³ Pa·s....
##### Question VII: Two electric service utilities (ESUs) X and Y have charged their service areas country-wide at competitive tariff-rates (expressed in $) that change randomly over the ten years 2009-2018 (indexed as r = 1 through 10 for X and Y), as summarized below in the table. The relevant tariff-rates (T_X) and (T_Y) shown depict RVs having corresponding frequencies of occurrence, (n)_X and (n)_Y respectively. Hence determine the following: the statistical divergence between the sample-spaces of the RVs in
Question VII: Two electric service utilities (ESUs) X and Y have charged their service areas country-wide at competitive tariff-rates (expressed in $) that change randomly over the ten years 2009-2018 (indexed as r = 1 through 10 for X and Y), as summarized below in the table. The relevant tariff-rates (T_X) ...
##### The magnetic field produced by an MRI solenoid2.8 mm long and 1.6 mm in diameter is1.4 T . You may want to review (Pages 810 -811) .Find the magnitude of the magnetic flux through the core of thissolenoid.Express your answer using two significant figures.
The magnetic field produced by an MRI solenoid 2.8 mm long and 1.6 mm in diameter is 1.4 T . You may want to review (Pages 810 - 811) . Find the magnitude of the magnetic flux through the core of this solenoid. Express your answer using two significant figures....
##### At what temperature in Kelvin will a reaction have ΔG = 0? ΔH = -24.2 kJ/mol and ΔS = -55.5 J K⁻¹ mol⁻¹; assume both do not vary with temperature. Options: 2.29, 436, 2293, 298, 0.436
At what temperature in Kelvin will a reaction have ΔG = 0? ΔH = -24.2 kJ/mol and ΔS = -55.5 J K⁻¹ mol⁻¹; assume both do not vary with temperature. Options: 2.29, 436, 2293, 298, 0.436...
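Setting ΔG = ΔH - TΔS = 0 gives T = ΔH/ΔS; a quick check of the arithmetic (my own worked example):

```python
dH = -24.2e3   # J/mol
dS = -55.5     # J/(K*mol)

T = dH / dS    # Delta G = dH - T*dS = 0  =>  T = dH/dS
print(round(T))  # 436 (K)
```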
##### Identify and discuss 3-5 topics regarding global efforts in the information systems industry. Include: clearly identify
Identify and discuss 3-5 topics regarding global efforts in the information systems industry. Include: clearly identify the global topics, how each is being used in global firms, and what effect the topic has on the information systems industry....
##### Solve each compound inequality. Use graphs to show the solution set to each of the two given inequalities, as well as a third graph that shows the solution set of the compound inequality. Express the solution set in interval notation. $2 x-5 \leq-11 \text { or } 5 x+1 \geq 6$
Solve each compound inequality. Use graphs to show the solution set to each of the two given inequalities, as well as a third graph that shows the solution set of the compound inequality. Express the solution set in interval notation. $2 x-5 \leq-11 \text { or } 5 x+1 \geq 6$...
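For the compound inequality just above: 2x - 5 <= -11 gives x <= -3, and 5x + 1 >= 6 gives x >= 1, so the "or" solution set is (-inf, -3] U [1, inf). A small spot-check of boundary and gap points (my own illustration):

```python
def in_solution(x):
    # "or" compound: x satisfies either inequality
    return (2 * x - 5 <= -11) or (5 * x + 1 >= 6)

# Boundary and interior points of (-inf, -3] U [1, inf) satisfy it
print(in_solution(-3), in_solution(-10), in_solution(1), in_solution(5))
# Points in the gap (-3, 1) fail both inequalities
print(in_solution(-2), in_solution(0), in_solution(0.99))
```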
##### Use the given K. values to predict which of the following salts is the most soluble, in terms of moles per liter, in pu...
Use the given K. values to predict which of the following salts is the most soluble, in terms of moles per liter, in pure water. (Hint: The size of Ksp tells us about solubility in general. The tutorial will explain the math but comparing the values for the general solubility is sufficient for these...
##### Consider the following data set: 58 42.5 67.8 58 46.2 60.8 47.7 55.8 23.5 43 47.7...
Consider the following data set: 58 42.5 67.8 58 46.2 60.8 47.7 55.8 23.5 43 47.7 43 58 41.3 43 43.3 69.8 46.7 50.6 56.1 33.7 44.5 70.9 31.3 53.7 53.6 27.7 46.4 77.5 74.8 Using the IQR definition of outliers, how many outliers are in this data set? Find the lower fence Find the upper fence Number of ...
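A sketch of the IQR computation with Python's statistics module (my own addition; textbooks differ on quartile conventions, but for this particular data set both common methods give Q1 = 43 and Q3 = 58):

```python
import statistics

data = [58, 42.5, 67.8, 58, 46.2, 60.8, 47.7, 55.8, 23.5, 43, 47.7, 43,
        58, 41.3, 43, 43.3, 69.8, 46.7, 50.6, 56.1, 33.7, 44.5, 70.9,
        31.3, 53.7, 53.6, 27.7, 46.4, 77.5, 74.8]

q1, _, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1                       # 58 - 43 = 15
lower_fence = q1 - 1.5 * iqr        # 20.5
upper_fence = q3 + 1.5 * iqr        # 80.5
outliers = [x for x in data if x < lower_fence or x > upper_fence]
print(lower_fence, upper_fence, len(outliers))   # 20.5 80.5 0
```

Every value falls between the fences, so under this convention the data set has no outliers.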
https://math.stackexchange.com/questions/1700093/combinations-of-letters-with-restrictions | Combinations of letters with restrictions
Create a string of five letters using the letters: A, B, C, D, E, F, G, H, I, J, K, L, M.
a) How many words contain at least one A?
b) How many words contain exactly two A's?
For a), my understanding is that it's the total number of strings minus the total number of strings without any A, which I believe to be $13^5 - 12^5$; however, it feels as though it is too high/wrong.
As for b), my train of thought is along the lines of:
$$\frac{13!}{2!\cdot 11!} \cdot \frac{12!}{3!\cdot 9!} \cdot \tfrac12$$
The first fraction is for the two A's, the second fraction is for the letters in the other three positions and the half is to get rid of the doubles in the set. ie AABCD and AABCD where the A's are swapped.
The (a) part is correct. For the (b) part, think that indeed you must choose the places for putting the two $A$'s: $\binom{5}{2}$. Then, each of the other three places can be occupied by any of the remaining 12 letters: $12^3$. So, the (b) answer is $\binom{5}{2}12^3$.
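Both counts are small enough to verify by brute force; a quick Python check enumerating all $13^5$ strings:

```python
from itertools import product
from math import comb

letters = "ABCDEFGHIJKLM"        # the 13 allowed letters

at_least_one_a = 0
exactly_two_a = 0
for word in product(letters, repeat=5):   # all 13^5 = 371293 strings
    n_a = word.count("A")
    if n_a >= 1:
        at_least_one_a += 1
    if n_a == 2:
        exactly_two_a += 1

assert at_least_one_a == 13**5 - 12**5      # (a): 122461
assert exactly_two_a == comb(5, 2) * 12**3  # (b): 17280
```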
http://www.icmp.lviv.ua/en/preprints/1997/97-21u | 97-21U
Posted on 08 February 2012
UDC:
548:573.611.4
PACS:
75.10.Jm
Thermodynamics of XXZ--model within two-particle cluster approximation
R.R. Levitskii
S.I. Sorokov
O.R. Baran
I.M. Pyndsyn
The XXZ--model was investigated within two--particle cluster approximation with two variational parameters $\varphi^z$, $\varphi^x$. The long--range interaction $J$ was taken into account in the framework of mean--field approximation. At various values of the anisotropy parameter ${\alpha}/{\gamma}$ ($K^{zz}=\gamma K$, $K^{xx}=K^{yy}=\alpha K$, $J^{zz}=\gamma J$, $J^{xx}=J^{yy}=\alpha J$, $K$ -- short--range interaction) and long--range interaction $J$, phase diagrams were constructed and temperature dependences of $\langle S^z \rangle$, $\langle S^x \rangle$, entropy, specific heat, and static susceptibility were calculated. It was shown that at ${\alpha}/{\gamma} \geq 1$ two--particle cluster approximation gave qualitatively incorrect results in a low--temperature region.
Year:
1997
Pages:
31 | 2023-02-03 07:09:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8679060935974121, "perplexity": 5506.64971293258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500044.16/warc/CC-MAIN-20230203055519-20230203085519-00297.warc.gz"} |
http://mathhelpforum.com/calculus/64719-euler-approximation.html | # Math Help - Euler Approximation
1. ## Euler Approximation
Use Euler’s method to estimate y (1.2), the solution to the IVP (initial value problem)
y' = 2t - y, y (1) = -1, and step size h = 0.1.
2. Originally Posted by Oblivionwarrior
Use Euler’s method to estimate y (1.2), the solution to the IVP (initial value problem)
y' = 2t - y, y (1) = -1, and step size h = 0.1.
Where is the trouble?
The approximate solution to the initial value problem
$\frac{dy}{dt} = f(t, y)$ where $y(t_0) = y_0$
is given by $y_{n+1} = y_n + h f(t_n, y_n)$.
$y(1.1) = y_1 = y_0 + h (2 t_0 - y_0) = -1 + (0.1) (2 - -1) = -1 + 0.3 = -0.7$.
$y(1.2) = y_2 = y_1 + h (2 t_1 - y_1) = -0.7 + (0.1) (2.2 - -0.7) = -0.7 + 0.29 = -0.41$. | 2015-08-03 18:27:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8657266497612, "perplexity": 1437.8674092815265}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990112.50/warc/CC-MAIN-20150728002310-00280-ip-10-236-191-2.ec2.internal.warc.gz"} |
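The two steps above are easy to script; a minimal Euler sketch in Python:

```python
def euler(f, t0, y0, h, steps):
    """Euler's method: y_{n+1} = y_n + h*f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
    return y

# y' = 2t - y, y(1) = -1, h = 0.1, two steps to reach t = 1.2
y_12 = euler(lambda t, y: 2*t - y, t0=1.0, y0=-1.0, h=0.1, steps=2)
print(y_12)  # ≈ -0.41
```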
https://jmservera.com/simplify-4v3-u3/ | # Simplify ((4v^3)/u)^3
(4v^3/u)^3
Use the power rule (a/b)^n = a^n/b^n to distribute the exponent over the quotient.
(4v^3)^3/u^3
Apply the product rule (ab)^n = a^n b^n to 4v^3.
4^3(v^3)^3/u^3
Simplify the numerator.
Raise 4 to the power of 3.
64(v^3)^3/u^3
Multiply the exponents in (v^3)^3.
Apply the power rule and multiply exponents, (a^m)^n = a^(mn).
64v^(3*3)/u^3
Multiply 3 by 3.
64v^9/u^3
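The result 64v^9/u^3 can be spot-checked with exact rational arithmetic; a small Python sketch:

```python
from fractions import Fraction

# Check ((4*v^3)/u)^3 == 64*v^9/u^3 exactly for a few sample values.
for u in (2, 3, 7):
    for v in (1, 2, 5):
        lhs = Fraction(4 * v**3, u) ** 3
        rhs = Fraction(64 * v**9, u**3)
        assert lhs == rhs
print("identity holds for all sampled (u, v)")
```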
https://math.stackexchange.com/questions/764604/approximate-sqrte-by-hand/764705 | # Approximate $\sqrt{e}$ by hand
I have seen this question many times as an example of provoking creativity. I wonder how many ways there are to approximate $\sqrt{e}$ by hand as accurately as possible.
The obvious way I can think of is to use Taylor expansion.
Thanks
• The nice thing about the Taylor expansion is that (i) it converges very quickly, and (ii) you can easily derive upper and lower bounds so you know how close you are to the right answer. – Rahul Apr 22 '14 at 16:16
• Must it be a decimal approximation? – Brad Apr 22 '14 at 16:25
• One consideration often neglected in these tasks: to a computer, decimal or binary division is usually an expensive operation, with integer multiplication a lot less expensive, and addition less expensive still. The accepted answer needs only five terms from the series to get within $10^{-9}$, but the fifth term alone involves $11!$ which is ten multiplications. The full computing complexity of that approach is more expensive than just counting five terms in the series, and ends up with a denominator that has 11 decimal digits. (continued) – alex.jordan Apr 24 '14 at 2:20
• (resumed) Compare this to the continued fraction approach. If you calculated the convergent down to the $17$, you get an approximation of $\frac{34361}{20841}$, already within $10^{-9}$ of $e^{1/2}$ and the denominator is much smaller. To get to this point costs a total of 28 multiplications, all involving at most a two-digit number by at most a five-digit number. (And there are also 28 additions to get to this point, but those are less expensive than the multiplications.) My humble opinion: series are overrated but stay in people's hearts because they are prominent in calculus class. – alex.jordan Apr 24 '14 at 2:24
• "but the fifth term alone involves 11! which is ten multiplications" And if you write a recursive subroutine for it, the cost of the storage of the program state word at every single call will dwarf all the arithmetic together. However, all it means is that not everything should be done in the most stupid way available ;) This is not to say that I do not admire continued fractions, on the contrary. So, for this particular case I voted for your approach, but if $1/2$ were $4/7$, say... – fedja Jun 15 '14 at 2:48
I found this series representation of $e$ on Wolfram Mathworld: $$e=\left(\sum_{k=0}^\infty\frac{4k+3}{2^{2k+1}(2k+1)!}\right)^2.$$ Hence $$\sqrt{e}=\sum_{k=0}^\infty\frac{4k+3}{2^{2k+1}(2k+1)!}.$$ Also from Maclaurin series for exponential function $$e^{\large\frac{1}{2}}=\sum_{n=0}^\infty\frac{1}{2^n n!}.$$
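The middle series converges quickly; a short Python check of the five-term partial sum:

```python
from math import exp, factorial

def sqrt_e_partial(terms):
    """Partial sum of sqrt(e) = sum_k (4k+3) / (2^(2k+1) * (2k+1)!)."""
    return sum((4*k + 3) / (2**(2*k + 1) * factorial(2*k + 1)) for k in range(terms))

approx = sqrt_e_partial(5)
print(approx, abs(approx - exp(0.5)))  # five terms already agree to about 1e-9
```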
• Wow!? I have just found the middle series is extremely powerful! It only needs $5$ terms to yield accuracy $10^{-9}$!! – Tunk-Fey Apr 22 '14 at 16:43
• @RandomVariable I've also realized your answer is the same as mine since the begining but I don't wanna claim this is only mine. Feel free for everyone to use it and +1 from me for your answer. :) – Tunk-Fey Apr 22 '14 at 18:41
• Nice identity to start with @Tunk-Fey – Jeff Faraci Apr 23 '14 at 4:08
The rapidly-converging series representation of $\sqrt{e}$ in Tunk-Fey's answer can be derived from simply expressing the Maclaurin series of $e^{x}$ as the sum of its even terms plus the sum of its odd terms.
\begin{align} e^{x}&= \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!} + \sum_{n=0}^{\infty} \frac{x^{2n+1}}{(2n+1)!} = \sum_{n=0}^{\infty} \frac{x^{2n}(2n+1) + x^{2n+1}}{(2n+1)!} \\ &= \sum_{n=0}^{\infty} \frac{x^{2n}(2n+1+x)}{(2n+1)!} = \sum_{n=0}^{\infty} \frac{x^{2n}(4n+2+2x)}{2(2n+1)!} \end{align}
• Very nice solution. – Jeff Faraci Apr 23 '14 at 4:07
If you apply the standard series expansion of $e^x$ to the case $x=-1/2$ and then find the reciprocal, it will converge faster than if you use $x=1/2$.
On a pocket calculator enter $2048$, ${1\over x}$, $+$, $1$, $=$, $x^2$ ($10$ times).
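Simulating those keystrokes: the sequence computes $(1+1/2048)^{2^{10}} = (1+1/2048)^{1024}$, which gives about three correct digits of $\sqrt{e}$.

```python
x = 1 + 1/2048       # enter 2048, press 1/x, +, 1, =
for _ in range(10):  # press x^2 ten times
    x = x * x
print(x)             # ≈ 1.6485201, versus sqrt(e) = 1.6487213...
```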
How accurately do you need it? One option is to use binomial expansion: $$e^{\frac{1}{2}} \approx \Big(1+\frac{1}{n}\Big)^{\frac{n}{2}}=\sum_{k=0}^{\frac{n}{2}}\binom{\frac{n}{2}}{k}\frac{1}{n^k}$$ which you can make arbitrarily close to $e^{\frac{1}{2}}$ for various values of $n$.
• Thanks Alex, I also thought of the limiting definition of $e$, I didn't know the binomial expansion. Good job, I think this is very doable by hand. I don't want to accept the right answer so soon. Let's see if someone else proposes other creative methods. – fast tooth Apr 22 '14 at 16:06
• Even if you set $n = 1000$ you still only have four digits of accuracy. $(1+1/1000)^{500} = 1.6483...$, $\sqrt{e} = 1.6487...$ – Brad Apr 22 '14 at 16:10
• this limit expression for $e$ converges very slowly. Very very slowly. – Ittay Weiss Apr 22 '14 at 16:23
• @IttayWeiss With $n=1024=2^{10}$, we have $(1+(1/2)/2^{10})^{2^{10}}=1.64852008854402$ which has three exact digits. This requires only computing $1+1/2048$ (ten divisions by $2$ of $0.5$) and ten squarings. I's say it's pretty good if one has a simple calculator with the “square” key. – egreg Apr 22 '14 at 16:30
• @egreg you have extraneous $1/2$ inside parentheses and too high power outside them. It should rather be $\left(1+2^{-10}\right)^{2^9}\approx 1.64831906$. – Ruslan Apr 22 '14 at 18:07
I would use the fact that $e \approx 2.7182818284$ and use Wikipedia on computing square roots. The digit by digit method will get you five decimals fairly quickly
• +1 It should be totally fair game to have $e$ memorized to $2.718281828$. That initial pattern is too easy to remember. – alex.jordan Apr 24 '14 at 23:34
• Plus computing square roots by hand is a lost art. Everyone should know how to do it. – Duncan Jun 17 '14 at 22:55
And here is another answer. There is a known continued fraction expansion for $e^{1/n}$. Continued fraction sequences converge quickly (although with so many 1s, this particular continued fraction converges on the slower end of things). The downside is that you can't use the $n$th convergent to quickly find the $n+1$st convergent, so you have to make a choice right away how deep to go. As @spin notes in the comments, you can refine your convergent using the previous two convergents and the next integer in the continued fraction expression.
$$e^{1/2}=1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{5+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{9+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{13+\cfrac{1}{1+\cfrac{1}{1+\cdots}}}}}}}}}}}}$$
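The convergents can be evaluated exactly by folding the nested fraction from the bottom up; a Python sketch using the partial quotients displayed above. Truncating after the two 1s that follow the 13 recovers the convergent $34361/20841$ quoted in the comments.

```python
from fractions import Fraction
from math import exp

# Partial quotients of e^(1/2) = [1; 1, 1, 1, 5, 1, 1, 9, 1, 1, 13, 1, 1, 17, ...],
# truncated after the two 1s following the 13.
a = [1, 1, 1, 1, 5, 1, 1, 9, 1, 1, 13, 1, 1]

value = Fraction(a[-1])
for q in reversed(a[:-1]):   # fold the continued fraction from the bottom up
    value = q + 1 / value

print(value, abs(float(value) - exp(0.5)))  # 34361/20841, within about 1e-9
```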
As an alternative to series-based methods, there are differential equation based methods you can use.
We recognize that $y=e^x$ is the solution to $y'=y$ with $y(0)=1$, and use Runge-Kutta with a small step size to approximate $y(1/2)$.
In this case, with just one step (using $h=1/2$ in the link), we obtain $e^{1/2}\approx1.6484375\ldots$, compared to the actual value of $e^{1/2}=1.64872127\ldots$.
With two steps, using $h=1/4$, we obtain $1.648699\ldots$.
With three steps, using $h=1/6$, we obtain $1.648716\ldots$.
With four steps, using $h=1/8$, we obtain $1.648719\ldots$.
In fairness, each "one" step in Runga-Kutta applied to this situation does require about seven multiplications. And since the step size needs to be decided from the start, you don't have the ability to refine your result further like you can with series by adding more terms. On the other hand a differential equation based method can give more accuracy in exchange for less computation in many cases.
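A sketch of the classical fourth-order Runge-Kutta scheme applied to $y'=y$, reproducing the one-step value above:

```python
from math import exp

def rk4(f, t0, y0, h, steps):
    """Classical fourth-order Runge-Kutta."""
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + (h/6) * (k1 + 2*k2 + 2*k3 + k4)
        t = t + h
    return y

f = lambda t, y: y                 # y' = y with y(0) = 1, so y(1/2) = sqrt(e)
print(rk4(f, 0.0, 1.0, 0.5, 1))    # one step:   ≈ 1.6484375
print(rk4(f, 0.0, 1.0, 0.125, 4))  # four steps: ≈ 1.648719 (exact: 1.6487212...)
```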
Use the Power series to compute $x_n := \exp\left(-2^{-n}\right)$ for some $n \geqslant 1$ with high accuracy, and then compute
$$\sqrt{e} = \left(\frac{1}{x_n}\right)^{2^{n-1}}.$$
Using the negative exponent, and an exponent of smaller absolute value, gives you (much) faster convergence of the series, and the few operations of squaring and inverting then don't lose much precision. For e.g. $n = 3$, you get pretty good results for the first $10$ terms of the power series already.
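For $n=3$, a quick Python version of this recipe: sum ten terms of the series for $e^{-1/8}$, then invert and raise to the power $2^{n-1}=4$.

```python
from math import exp, factorial

n = 3
x = -(2.0 ** -n)                                    # x = -1/8
x_n = sum(x**k / factorial(k) for k in range(10))   # 10 terms of the series for e^x
approx = (1 / x_n) ** (2 ** (n - 1))                # (1/x_n)^(2^(n-1)) = e^(1/2)
print(approx)                                       # ≈ 1.6487212707
```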
With $n = 3$ (three steps of the Heron/Newton iteration $x_{k+1} = \tfrac{1}{2}\left(x_k + e/x_k\right)$ starting from $x_0 = 2$): \begin{align}&\sqrt{e}\approx \frac{1}{2} \left\{\frac{1}{2} \left[\frac{1}{2} \left(2+\frac{e}{2}\right)+\frac{2 e}{2+\frac{e}{2}}\right]+\frac{2 e}{\frac{1}{2} \left(2+\frac{e}{2}\right)+\frac{2 e}{2+\frac{e}{2}}}\right\} = x_{3} \approx 1.648721295 \end{align}
$$\text{Relative Error} = \left|{x_{3} \over \sqrt{e}} - 1\right|\times 100\ \% = 1.48\times 10^{-6}\ \%$$
You can use the following identities:
• $e=\lim_n(1+1/n)^n$,
• $e=\lim_n\frac{n}{\sqrt[n]{n!}}$,
Put a large value of $n$ and it should do. Of course you have to square-root the results.
• hmm, then i have to find another method to take the square root by hand. – fast tooth Apr 22 '14 at 16:18
• these converge quite slowly. – Ittay Weiss Apr 22 '14 at 16:22 | 2019-10-21 21:16:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8807092308998108, "perplexity": 484.52396714817695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987787444.85/warc/CC-MAIN-20191021194506-20191021222006-00163.warc.gz"} |
http://cran.itam.mx/web/packages/pbo/vignettes/pbo.html | # Probability of Backtest Overfitting
The package pbo provides convenient functions for analyzing a matrix of backtest trials to compute the probability of backtest overfitting, the performance degradation, and the stochastic dominance of the fitted models. The approach follows that described by Bailey et al. in their paper “The Probability of Backtest Overfitting” (reference provided below).
First, we assemble the trials into an NxT matrix where each column represents a trial and each trial has the same length T. This example is random data so the backtest should be overfit.
set.seed(765)
n <- 100
t <- 2400
m <- data.frame(matrix(rnorm(n*t),nrow=t,ncol=n,dimnames=list(1:t,1:n)),
check.names=FALSE)
sr_base <- 0
mu_base <- sr_base/(252.0)
sigma_base <- 1.00/(252.0)**0.5
for ( i in 1:n ) {
m[,i] = m[,i] * sigma_base / sd(m[,i]) # re-scale
m[,i] = m[,i] + mu_base - mean(m[,i]) # re-center
}
We can use any performance evaluation function that can work with the reassembled sub-matrices during the cross validation iterations. Following the original paper we can use the Sharpe ratio as
sharpe <- function(x,rf=0.03/252) {
sr <- apply(x,2,function(col) {
er = col - rf
return(mean(er)/sd(er))
})
return(sr)
}
Now that we have the trials matrix we can pass it to the pbo function for analysis. The analysis returns an object of class pbo that contains a list of the interesting results. For the Sharpe ratio the interesting performance threshold is 0 (also the default), so we pass threshold=0 in the pbo call's argument list.
require(pbo)
## Loading required package: pbo
my_pbo <- pbo(m,s=8,f=sharpe,threshold=0)
The my_pbo object is a list we can summarize with the summary function.
summary(my_pbo)
## Performance function sharpe with threshold 0
## p_bo slope ar^2 p_loss
## 1.000000 -0.003046 0.970000 1.000000
We see that the probability of backtest overfitting is 1, as expected, because all of the trials have the same performance. We can view the results with the package's preconfigured lattice plots. The xyplot function has several variations for the plotType parameter value. See the ?xyplot.pbo help page for the details.
require(lattice)
require(latticeExtra)
## Loading required package: latticeExtra
require(grid)
## Loading required package: grid
histogram(my_pbo,type="density")
xyplot(my_pbo,plotType="degradation")
xyplot(my_pbo,plotType="dominance",increment=0.001)
xyplot(my_pbo,plotType="pairs")
xyplot(my_pbo,plotType="ranks",ylim=c(0,20))
dotplot(my_pbo)
The package also supports parallel execution on multicore hardware, providing a potentially significant reduction in pbo analysis time. The pbo package uses the foreach package to manage parallel workers, so we can use any package that supports parallelism using foreach.
For example, using the doParallel package we can establish a multicore cluster and enable multiple workers by passing the above m and s values along with the argument allow_parallel=TRUE to pbo as follows:
require(doParallel)
## Loading required package: doParallel
cluster <- makeCluster(2) # or use detectCores()
registerDoParallel(cluster)
p_pbo <- pbo(m,s=8,f=sharpe,allow_parallel=TRUE)
stopCluster(cluster)
summary(p_pbo)
## Performance function sharpe with threshold 0
## p_bo slope ar^2 p_loss
## 1.000000 -0.003046 0.970000 1.000000
## Reference
Bailey, David H. and Borwein, Jonathan M. and Lopez de Prado, Marcos and Zhu, Qiji Jim, “The Probability of Back-Test Overfitting” (September 1, 2013). Available at SSRN. | 2019-08-21 22:41:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5292546153068542, "perplexity": 6712.329716613829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316549.78/warc/CC-MAIN-20190821220456-20190822002456-00365.warc.gz"} |
http://www.researchgate.net/publication/26841923_Direct_Solution_of_nth-Order_IVPs_by_Homotopy_Analysis_Method | Article
# Direct Solution of nth-Order IVPs by Homotopy Analysis Method
Differential Equations and Nonlinear Mechanics 01/2009;
Source: DOAJ
ABSTRACT Direct solution of a class of nth-order initial value problems (IVPs) is considered based on the homotopy analysis method (HAM). The HAM solutions contain an auxiliary parameter which provides a convenient way of controlling the convergence region of the series solutions. The HAM gives approximate analytical solutions which are of comparable accuracy to the seventh- and eighth-order Runge-Kutta method (RK78).
### Keywords
approximate analytical solutions
auxiliary parameter
comparable accuracy
Direct solution
HAM solutions
homotopy analysis method
nth-order initial value problems
series solutions
http://ncatlab.org/nlab/show/Parametrized+Homotopy+Theory | nLab Parametrized Homotopy Theory
This entry collects links related to the book
• May, Sigurdsson, Parametrized Homotopy Theory, AMS Mathematical Surveys and Monographs, vol. 132, 2006 (webpage, pdf)
on parameterized stable homotopy theory, hence on stable homotopy theory in slice (infinity,1)-toposes Top$/X$ for given topological base spaces $X$: the homotopy theory of parametrized spectra.
A survey is in the slides
One application is twisted cohomology: instead of cocycles given by maps $X \to A$, twisted cocycles are given by sections $X \to P$ of a bundle $P \to X$ of spectra over $X$.
A discussion of some of these issues using tools from (infinity,1)-category theory is in
https://www.acmicpc.net/problem/4830 | 시간 제한 메모리 제한 제출 정답 맞은 사람 정답 비율
1 초 128 MB 0 0 0 0.000%
## Problem
An investor invests a certain percentage of his assets into NINSTRUMENTS financial instruments. After each term, these instruments deduct a certain fixed administrative cost, followed by a fee that is a percentage of the amount that was invested at the beginning of the term, and then add a return, which is a (positive or negative) percentage of the amount invested at the beginning of the term. If any account drops to zero or below after such a transaction, it is considered closed (no fees are charged against it, and is treated as simply zero) until a rebalancing occurs.
Rebalancing occurs after every NREBALANCE terms, where the total assets of the investor are redistributed according to the original ratios for the instruments. Without rebalancing, the investor's assets would become dominated by the higher return instruments, which would expose them to more risk compared to a balanced investment plan. Note that it is possible that all instruments drop to zero, in which case they all remain closed for the remaining terms.
You are to model the value of such an investment strategy and report the ending value in each instrument (before rebalancing, if it happens to land on a term when a rebalance is due). Compute your results using double precision (do not round intermediate values to pennies), but round your final answers to pennies.
## Input
The first line of the input contains the three positive integers:
NINSTRUMENTS NTERMS NREBALANCE
There are no more than 10 instruments, and the number of terms is at most 20. This is followed by 3 lines of floating-point numbers separated by spaces, in the following format:
FIXED_FEE(1) .. FIXED_FEE(NINSTRUMENTS)
PERCENTAGE_FEE(1) .. PERCENTAGE_FEE(NINSTRUMENTS)
PRINCIPAL_START(1) .. PRINCIPAL_START(NINSTRUMENTS)
Finally, there are NTERMS lines each containing NINSTRUMENTS floating-point numbers indicating the percentage return of each instrument in each term:
RETURN(1,1) .. RETURN(1,NINSTRUMENTS)
RETURN(2,1) .. RETURN(2,NINSTRUMENTS)
.
.
RETURN(NTERMS,1) .. RETURN(NTERMS,NINSTRUMENTS)
All percentages (PERCENTAGE_FEE and RETURN) are given as ratios, up to 4 decimal places. For example, a fee of 0.0002 means 0.02% of the investment in this instrument is deducted as a fee each term. FIXED_FEE and PRINCIPAL_START are non-negative floating-point numbers that are specified to 2 decimal places. At least one of the PRINCIPAL_START values is positive.
## Output
Write on a single line the principal of each investment (separated by a space) at the end of NTERMS terms. Round each principal to the nearest penny.
PRINCIPAL_END(1) .. PRINCIPAL_END(NINSTRUMENTS)
## Sample Input
4 10 5
5.00 10.00 20.00 50.00
0.002 0.001 0.0008 0.0005
150000.00 100000.00 75000.00 50000.00
0.10 0.05 -0.05 -0.85
0.10 0.05 -0.10 -0.85
0.10 0.05 -0.20 -0.85
0.10 0.05 -0.40 -0.85
0.10 0.05 -0.80 -0.85
0.10 0.05 -0.05 -0.90
0.10 0.05 -0.05 -0.90
0.10 0.05 -0.05 -0.90
0.10 0.05 -0.05 -0.85
0.10 0.05 -0.05 -0.85
## Sample Output
237698.69 126086.01 57298.74 0.00 | 2018-01-19 06:11:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3550356328487396, "perplexity": 3156.3158998321856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887746.35/warc/CC-MAIN-20180119045937-20180119065937-00640.warc.gz"} |
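A reference implementation must pin down one ambiguity in the statement: here an account's closure is checked once per term, after the fixed cost, the percentage fee, and the return have all been applied to the beginning-of-term value. With that reading, a Python sketch reproduces the sample:

```python
def solve(text):
    tok = iter(text.split())
    n_instr, n_terms, n_rebal = int(next(tok)), int(next(tok)), int(next(tok))
    row = lambda: [float(next(tok)) for _ in range(n_instr)]
    fixed, pct, value = row(), row(), row()
    returns = [row() for _ in range(n_terms)]

    total0 = sum(value)
    ratios = [v / total0 for v in value]          # original allocation ratios

    for t in range(1, n_terms + 1):
        for i in range(n_instr):
            if value[i] <= 0:                     # closed: untouched until a rebalance
                continue
            v = value[i] * (1 - pct[i] + returns[t - 1][i]) - fixed[i]
            value[i] = v if v > 0 else 0.0        # zero or below -> account closes
        if t % n_rebal == 0 and t < n_terms:      # ending values reported pre-rebalance
            total = sum(value)
            value = [r * total for r in ratios]

    return " ".join(f"{v:.2f}" for v in value)

sample = """4 10 5
5.00 10.00 20.00 50.00
0.002 0.001 0.0008 0.0005
150000.00 100000.00 75000.00 50000.00
0.10 0.05 -0.05 -0.85
0.10 0.05 -0.10 -0.85
0.10 0.05 -0.20 -0.85
0.10 0.05 -0.40 -0.85
0.10 0.05 -0.80 -0.85
0.10 0.05 -0.05 -0.90
0.10 0.05 -0.05 -0.90
0.10 0.05 -0.05 -0.90
0.10 0.05 -0.05 -0.85
0.10 0.05 -0.05 -0.85"""

print(solve(sample))  # 237698.69 126086.01 57298.74 0.00
```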
http://mathhelpforum.com/math-topics/165868-questions-physics-help-print.html | # questions in physics help
• December 10th 2010, 05:36 AM
r-soy
questions in physics help
Attachment 20050
1 ) A traffic light weighing 100 N hangs from a vertical cable tied to two other cables that are fastened to a support.
The upper cables make angles of 37° and 53° with the horizontal. Find the tension in each of the three cables.
-------------------------------------------
Here I first must find the x component and the y component.
How do I find the x and y components? In every question I make a mistake.
And why do you say -T cos 37? Why the minus ( - ) here??
Please help me
• December 10th 2010, 05:58 AM
here is my solution
for the equilibrium of the traffic light,
$T_3 = 100N$
considering the equilibrium of the joining point of cables,
$T_2cos37 = T_1cos53$
$T_1 = \frac{T_2cos37}{cos53}$
on Y direction,
$T_1sin37+T_2sin53=T_3$
you have three equations
I cannot see where you got a minus sign
• December 10th 2010, 06:11 AM
r-soy
How do you get the point where the cables join?
• December 10th 2010, 06:34 AM
since all the points in the system are in static equilibrium, you can apply $\sum F = 0$
so applying that to the point where three cables meet you can get an equation with three tensions.
• December 10th 2010, 06:41 AM
r-soy
ok, the system is at equilibrium
then
F = 0
Fx = T2 cos 53 + T1 cos 37 = 0
Fy = T2 sin 35+ T1 sin 37 = 0
After that what I do ?
• December 10th 2010, 06:43 AM
e^(i*pi)
Since you have two equations and two unknowns solve simultaneously
• December 10th 2010, 06:47 AM
Fy is incorrect; refer to the post above.
• December 10th 2010, 06:59 AM
r-soy
Fx = T2 cos 53 + T1 cos 37 = 0
Fy = T2 sin 53+ T1 sin 37 - 100 = 0
Sorry, but what do I do now?
• December 10th 2010, 07:13 AM
In fact, Fx => T2 cos 53 - T1 cos 37 = 0.
Take T2 (or T1) from the first equation and substitute it into the second equation.
• December 10th 2010, 07:28 AM
r-soy
OK, now I take T2:
T2 = T1 cos37/cos53 = 1.327 T1
Then I substitute this value (1.327 T1) into the second equation:
T1 sin37 + (1.327 T1) sin53 - 100 = 0
T1 [sin37 + (1.327) sin53] = 100
T1 = 100/sin37 + (1.327) sin53 = 167
• December 10th 2010, 07:39 AM
correct...
• December 23rd 2010, 02:42 AM
r-soy
...
but the answer is wrong; it does not match my book.
• December 23rd 2010, 02:56 AM
CaptainBlack
Quote:
Originally Posted by r-soy
Ok Now I take T2
T2 = T1cos37/cos53 = 1.327T1
Then I substitute this value ( 1.327 T1 ) second equation
T1 sin37 +(1.327T1)sin53 - 100 = 0
T1[sin37 +(1.327)sin53 ] = 100
T1 = 100/sin37 +(1.327)sin53 = 167
T1 = 100 / [sin(37) +(1.327) sin(53)]
Redo your arithmetic, and make sure the calculator is in the correct mode and that you put the brackets in the right place.
CB
• December 23rd 2010, 03:03 AM
r-soy
Yes, now it is OK.
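As a quick numerical check of the corrected arithmetic (this sketch is an editor's addition, not part of the original thread), the two equilibrium equations can be solved in a few lines of Python. With the brackets in the right place, the tensions come out near 60 N and 80 N rather than 167 N:

```python
import math

# Angles of the two upper cables with the horizontal (degrees -> radians)
a1, a2 = math.radians(37), math.radians(53)
W = 100.0                    # weight of the traffic light, in newtons

T3 = W                       # the vertical cable simply carries the weight

# Equilibrium of the point where the three cables meet:
#   horizontal: T1*cos(37) = T2*cos(53)
#   vertical:   T1*sin(37) + T2*sin(53) = T3
T1 = T3 / (math.sin(a1) + (math.cos(a1) / math.cos(a2)) * math.sin(a2))
T2 = T1 * math.cos(a1) / math.cos(a2)

print(round(T1, 1), round(T2, 1), T3)   # 60.2 79.9 100.0
```

Substituting the two tensions back into the vertical equation recovers the full 100 N weight, which confirms the algebra in the thread.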
http://matroidunion.org/?tag=clutters | Clutters I
I have been trying to firm up my feeling for the theory of clutters. To that end, I have been working through proofs of some elementary lemmas. For my future use, as much as anything else, I will post some of that material here.
A clutter is a pair $H=(S,\mathcal{A})$, where $S$ is a finite set, and $\mathcal{A}$ is a collection of subsets of $S$ satisfying the constraint that $A,A'\in\mathcal{A}$ implies $A\not\subset A'$. In other words, a clutter is a hypergraph satisfying the constraint that no edge is properly contained in another. For this reason we will say that the members of $\mathcal{A}$ are edges of $H$. Clutters are also known as Sperner families, because of Sperner's result establishing that if $|S|=n$, then
$|\mathcal{A}|\leq \binom{n}{\lfloor n/2\rfloor}.$
Clutters abound in ‘nature’: the circuits, bases, or hyperplanes in a matroid; the edge-sets of Hamilton cycles, spanning trees, or $s$-$t$ paths in a graph. Even a simple (loopless with no parallel edges) graph may be considered as a clutter: just consider each edge of the graph to be a set of two vertices, and in this way an edge of the clutter. There is one example that is particularly important for this audience: let $M$ be a matroid on the ground set $E$ with $\mathcal{C}(M)$ as its family of circuits, and let $e$ be an element of $E$. We define $\operatorname{Port}(M,e)$ to be the clutter
$(E-e,\{C-e\colon e\in C\in \mathcal{C}(M)\})$
and such a clutter is said to be a matroid port.
If $H=(S,\mathcal{A})$ is a clutter, then we define the blocker of $H$ (denoted by $b(H)$) as follows: $b(H)$ is a clutter on the set $S$, and the edges of $b(H)$ are the minimal members of the collection $\{X\subseteq S\colon |X\cap A|\geq 1,\ \forall A\in\mathcal{A}\}$. Thus a subset of $S$ is an edge of $b(H)$ if and only if it is a minimal subset that has non-empty intersection with every edge of $H$. Note that if $\mathcal{A}=\{\}$, then vacuously, $|X\cap A|\geq 1$ for all $A\in \mathcal{A}$, no matter what $X$ is. The minimal $X\subseteq S$ is the empty set, so $b((S,\{\}))$ should be $(S,\{\emptyset\})$. Similarly, if $\mathcal{A}=\{\emptyset\}$, then the collection $\{X\subseteq S\colon |X\cap A|\geq 1,\ \forall A\in\mathcal{A}\}$ is empty, so $b((S,\{\emptyset\}))$ should be $(S,\{\})$. The clutter with no edges and the clutter with only the empty edge are known as trivial clutters.
Our first lemma was noted by Edmonds and Fulkerson in 1970.
Lemma. Let $H=(S,\mathcal{A})$ be a clutter. Then $b(b(H))=H$.
Proof. If $H$ is trivial, the result follows by the discussion above. Therefore we will assume that $H$ has at least one edge and that the empty set is not an edge. This implies that $b(H)$ and $b(b(H))$ are also non-trivial. Let $A$ be an edge of $H$. Now every edge of $b(H)$ has non-empty intersection with $A$, by the definition of $b(H)$. Since $A$ is a set intersecting every edge of $b(H)$, it contains a minimal such set. Thus $A$ contains an edge of $b(b(H))$.
Now let $A'$ be an edge of $b(b(H))$. Assume that $A'$ contains no edge of $H$: in other words, assume that every edge of $H$ has non-empty intersection with $S-A'$. Then $S-A'$ contains a minimal subset that has non-empty intersection with every edge of $H$; that is, $S-A'$ contains an edge of $b(H)$. This edge contains no element in common with $A'$. As $A'$ is an edge of $b(b(H))$, this contradicts the definition of a blocker. Hence $A'$ contains an edge of $H$.
Let $A$ be an edge of $H$. By the previous paragraphs, $A$ contains $A'$, an edge of $b(b(H))$, and $A'$ contains $A^{\prime\prime}$, an edge of $H$. Now $A^{\prime\prime}\subseteq A'\subseteq A$ implies $A^{\prime\prime}=A$, and hence $A=A'$. Thus $A$ is also an edge of $b(b(H))$. Similarly, if $A'$ is an edge of $b(b(H))$, then $A^{\prime\prime}\subseteq A\subseteq A'$, where $A'$ and $A^{\prime\prime}$ are edges of $b(b(H))$, and $A$ is an edge of $H$. This implies $A'=A^{\prime\prime}=A$, so $A'$ is an edge of $H$. As $H$ and $b(b(H))$ have identical edges, they are the same clutter. $\square$
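For small ground sets the blocker can be computed by brute force, which makes the lemma easy to sanity-check. The following Python sketch is an editor's illustration (not from the original post; the function names are ad hoc). The triangle $K_3$, viewed as a clutter on its vertex set, happens to be its own blocker:

```python
from itertools import combinations

def blocker(S, edges):
    """Brute-force blocker of the clutter (S, edges): the minimal
    subsets of S meeting every edge. Exponential in |S|."""
    edges = [frozenset(e) for e in edges]
    if not edges:                  # b((S, {})) = (S, {empty set})
        return {frozenset()}
    if frozenset() in edges:       # b((S, {empty set})) = (S, {})
        return set()
    hitting = [frozenset(c) for r in range(len(S) + 1)
               for c in combinations(sorted(S), r)
               if all(frozenset(c) & e for e in edges)]
    return {x for x in hitting if not any(y < x for y in hitting)}

# The edge set of the triangle K3 as a clutter on its vertices.
S = {1, 2, 3}
H = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}

assert blocker(S, H) == H                # minimal vertex covers of K3
assert blocker(S, blocker(S, H)) == H    # b(b(H)) = H
```

The first assertion reflects the remark below that, for a graph viewed as a clutter, the blocker's edges are the minimal vertex covers; for $K_3$ these are exactly the three pairs of vertices.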
If $H=(S,\mathcal{A})$ is a simple graph (so that each edge has cardinality two), then the edges of $b(H)$ are the minimal vertex covers. In the case of matroid ports, the blocker operation behaves exactly as we would expect an involution to do$\ldots$
Lemma. Let $M$ be a matroid and let $e$ be an element of $E(M)$. Then $b(\operatorname{Port}(M,e))=\operatorname{Port}(M^{*},e).$
Proof. Note that if $e$ is a coloop of $M$, then $\operatorname{Port}(M,e)$ has no edges, and if $e$ is a loop, then $\operatorname{Port}(M,e)$ contains only the empty edge. In these cases, the result follows from earlier discussion. Now we can assume that $e$ is neither a loop nor a coloop of $M$. Let $A$ be an edge in $\operatorname{Port}(M^{*},e)$, so that $A\cup e$ is a cocircuit of $M$. Since a circuit and a cocircuit cannot meet in the set $\{e\}$, it follows that $A$ has non-empty intersection with every circuit of $M$ that contains $e$, and hence with every edge of $\operatorname{Port}(M,e)$. Now $A$ contains a minimal set with this property, so $A$ contains an edge of $b(\operatorname{Port}(M,e))$.
Conversely, let $A'$ be an edge of $b(\operatorname{Port}(M,e))$. Assume that $e$ is not in the coclosure of $A'$. By a standard matroid exercise this means that $e$ is in the closure of $E(M)-(A'\cup e)$. Let $C$ be a circuit contained in $E(M)-A'$ that contains $e$. Then $C-e$ is an edge of $\operatorname{Port}(M,e)$ that is disjoint from $A'$. This contradicts the fact that $A'$ is an edge of the blocker. Therefore $e$ is in the coclosure of $A'$, so there is a cocircuit $C^{*}$ contained in $A'\cup e$ that contains $e$. Therefore $A'$ contains the edge, $C^{*}-e$, of $\operatorname{Port}(M^{*},e)$.
In exactly the same way as the previous proof, we can demonstrate that $b(\operatorname{Port}(M,e))$ and $\operatorname{Port}(M^{*},e)$ have identical edges. $\square$
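This lemma can also be checked mechanically on a tiny matroid. Below is an editor's Python sketch (not from the post) using the cycle matroid of a triangle with edge set $\{a,b,c\}$: its only circuit is $\{a,b,c\}$, and its cocircuits (the minimal edge cuts) are the three pairs, so both ports and the blocker are easy to list.

```python
from itertools import combinations

def blocker(S, edges):
    """Brute-force blocker helper: minimal subsets of S meeting every edge."""
    edges = [frozenset(e) for e in edges]
    hitting = [frozenset(c) for r in range(len(S) + 1)
               for c in combinations(sorted(S), r)
               if all(frozenset(c) & e for e in edges)]
    return {x for x in hitting if not any(y < x for y in hitting)}

# Cycle matroid of a triangle with edge set {a, b, c}:
# one circuit {a, b, c}; the cocircuits are the three pairs.
E = {'a', 'b', 'c'}
circuits = [frozenset(E)]
cocircuits = [frozenset(p) for p in combinations(sorted(E), 2)]

e = 'a'
port = {C - {e} for C in circuits if e in C}          # Port(M, e)  = {{b, c}}
dual_port = {C - {e} for C in cocircuits if e in C}   # Port(M*, e) = {{b}, {c}}

assert blocker(E - {e}, port) == dual_port            # b(Port(M, e)) = Port(M*, e)
assert blocker(E - {e}, dual_port) == port
```

Here the helper omits the trivial-clutter conventions, which is safe because both ports in this example are non-trivial.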
This last fact should be attractive to matroid theorists: clutters have a notion of duality that coincides with matroid duality. There is also a notion of minors. Let $H=(S,\mathcal{A})$ be a clutter and let $s$ be an element of $S$. Define $H\backslash s$, known as $H$ delete $s$, to be
$(S-s,\{A\colon A\in \mathcal{A},\ s\notin A\})$
and define $H/s$, called $H$ contract $s$, to be
$(S-s,\{A-s\colon A\in \mathcal{A},\ A'\in \mathcal{A}\Rightarrow A'-s\not\subset A-s\}).$
It is very clear that $H\backslash s$ and $H/s$ are indeed clutters. Any clutter produced from $H$ by a (possibly empty) sequence of deletions and contractions is a minor of $H$.
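The two minor operations translate directly into code. This small Python sketch (an editor's illustration with ad-hoc names, not from the post) applies them to the triangle clutter $(\{1,2,3\},\{\{1,2\},\{2,3\},\{1,3\}\})$:

```python
def delete(S, edges, s):
    """H \\ s: remove s from the ground set and keep the edges avoiding s."""
    return S - {s}, {e for e in edges if s not in e}

def contract(S, edges, s):
    """H / s: remove s from every edge and keep only the minimal results."""
    shrunk = {e - {s} for e in edges}
    return S - {s}, {e for e in shrunk if not any(f < e for f in shrunk)}

S = {1, 2, 3}
H = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}

# Deleting 3 keeps only the edge {1, 2}; contracting 3 shrinks
# {2, 3} and {1, 3} to the singletons, which then absorb {1, 2}.
assert delete(S, H, 3) == ({1, 2}, {frozenset({1, 2})})
assert contract(S, H, 3) == ({1, 2}, {frozenset({1}), frozenset({2})})
```

The minimality filter in `contract` is exactly the condition $A'\in\mathcal{A}\Rightarrow A'-s\not\subset A-s$ from the definition above.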
We will finish with one more elementary lemma.
Lemma. Let $H=(S,\mathcal{A})$ be a clutter, and let $s$ be an element in $S$. Then
1. $b(H\backslash s) = b(H)/s$, and
2. $b(H/s) = b(H)\backslash s$.
Proof. We note that it suffices to prove the first statement: imagine that the first statement holds. Then
$b(b(H)\backslash s)=b(b(H))/s=H/s$
which implies that
$b(H)\backslash s=b(b(b(H)\backslash s))=b(H/s)$
and that therefore the second statement holds.
If $H$ has no edge, then neither does $H\backslash s$, so $b(H\backslash s)$ has only the empty edge. Also, $b(H)$ and $b(H)/s$ have only the empty edge, so the result holds. Now assume $H$ has only the empty edge. Then $H\backslash s$ has only the empty edge, so $b(H\backslash s)$ has no edges. Also, $b(H)$ and $b(H)/s$ have no edges. Hence we can assume that $H$ is nontrivial, and therefore so is $b(H)$.
If $s$ is in every edge of $H$, then $H\backslash s$ has no edges, so $b(H\backslash s)$ has only the empty edge. Also, $\{s\}$ is an edge of $b(H)$, so $b(H)/s$ has only the empty edge. Therefore we can now assume that some edge of $H$ does not contain $s$, and that therefore $H\backslash s$ is non-trivial and $\{s\}$ is not an edge of $b(H)$.
As $b(H)$ has at least one edge we can let $A$ be an arbitrary edge of $b(H)/s$, and as $\{s\}$ is not an edge of $b(H)$, it follows that $A$ is non-empty. Since $s$ is not in every edge of $H$, we can let $A'$ be an arbitrary edge of $H\backslash s$. Hence $A'$ is an edge of $H$. As $H$ is non-trivial, $A'$ is non-empty. If $A$ is an edge of $b(H)$, then certainly $A$ and $A'$ have non-empty intersection. Otherwise, $A\cup s$ is an edge of $b(H)$, so $A\cup s$ and $A'$ have non-empty intersection. As $A'$ does not contain $s$, it follows that $A$ and $A'$ have non-empty intersection in any case. This shows that every edge of $b(H)/s$ intersects every edge of $H\backslash s$, and thus every edge of $b(H)/s$ contains an edge of $b(H\backslash s)$.
As $H\backslash s$ is non-trivial, so is $b(H\backslash s)$. We let $A'$ be an arbitrary edge of $b(H\backslash s)$ and note that $A'$ is non-empty. Let $A$ be an arbitrary edge of $H$, so that $A$ is non-empty. If $s\notin A$, then $A$ is an edge of $H\backslash s$, so $A'\cap A\ne\emptyset$. If $s$ is in $A$, then $(A'\cup s)\cap A\ne\emptyset$. This means that $A'\cup s$ intersects every edge of $H$, so it contains an edge of $b(H)$ and therefore $A'=(A'\cup s)-s$ contains an edge of $b(H)/s$. We have shown that every edge of $b(H\backslash s)$ contains an edge of $b(H)/s$ and now the rest is easy. $\square$
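On a three-element ground set there are only 20 clutters, so the last lemma can be verified exhaustively. The following self-contained Python sketch (an editor's addition; the brute-force helpers are ad hoc) checks both identities $b(H\backslash s)=b(H)/s$ and $b(H/s)=b(H)\backslash s$ for every clutter and every element:

```python
from itertools import combinations

def blocker(S, edges):
    """Brute-force blocker, with the trivial-clutter conventions."""
    if not edges:
        return {frozenset()}
    if frozenset() in edges:
        return set()
    hitting = [frozenset(c) for r in range(len(S) + 1)
               for c in combinations(sorted(S), r)
               if all(frozenset(c) & e for e in edges)]
    return {x for x in hitting if not any(y < x for y in hitting)}

def delete(edges, s):
    return {e for e in edges if s not in e}

def contract(edges, s):
    shrunk = {e - {s} for e in edges}
    return {e for e in shrunk if not any(f < e for f in shrunk)}

S = {0, 1, 2}
subsets = [frozenset(c) for r in range(4) for c in combinations(S, r)]

checked = 0
for bits in range(1 << len(subsets)):
    fam = {subsets[i] for i in range(len(subsets)) if bits >> i & 1}
    if any(a < b for a in fam for b in fam):
        continue                       # not an antichain, hence not a clutter
    checked += 1
    for s in S:
        assert blocker(S - {s}, delete(fam, s)) == contract(blocker(S, fam), s)
        assert blocker(S - {s}, contract(fam, s)) == delete(blocker(S, fam), s)

print(checked, "clutters checked")
```

The count of clutters found is the Dedekind number $M(3)=20$, since clutters on $S$ are exactly the antichains in the power set of $S$.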
https://wayrealestate.com.au/renlig-dishwasher-uph/d76b74-quantum-theory-formula | # quantum theory formula
The smallest amount of energy that can be emitted or absorbed in the form of electromagnetic radiation is known as a quantum. In 1900, Planck made the assumption that energy was made of individual units, or quanta, and postulated a direct proportionality between the frequency of radiation and the quantum of energy at that frequency:

$E = h\nu$

where $h$ is Planck's constant ($6.62606957(29)\times 10^{-34}$ J s) and $\nu$ is the frequency. Planck reasoned that this formula covered all electromagnetic radiation, so different atoms and molecules can emit or absorb energy in discrete quantities only. It's a little bit like having a tube of smarties: you can split the tube so you have fewer smarties, or get another tube and have more, but you always have a whole number of smarties. The classical theory, by contrast, is like buying wine from a tap rather than in bottles; the energy flows continuously. (When dealing with the wavelengths of radiation, the nanometre, 1 nm = $10^{-9}$ m, is the convenient unit.)

In 1923 de Broglie proposed that wave-particle duality applied not only to photons but to electrons and every other physical system, relating momentum and wavelength by

$p = hf/c = h/\lambda.$

At a fundamental level, both radiation and matter have characteristics of particles and waves, and a correct theory must explain both. By the late 19th century, many physicists thought their discipline was well on the way to explaining most natural phenomena; these developments were phenomenological and challenged the theoretical physics of the time. The first successful attempt at the quantization of atomic spectra was the so-called Sommerfeld-Wilson-Ishiwara quantization, an attempt to deduce the Bohr model from first principles. Although the Bohr model of the hydrogen atom could be explained in this way, the spectrum of the helium atom (classically an unsolvable 3-body problem) could not be predicted.

For a system of particles, each with position $\mathbf{r}_n$, a separable stationary state has the form

$\Psi = e^{-iEt/\hbar}\prod_{n=1}^{N}\psi(\mathbf{r}_n)\,,\quad V(\mathbf{r}_1,\ldots,\mathbf{r}_N)=\sum_{n=1}^{N}V(\mathbf{r}_n),$

and in the Schrödinger picture the state evolves as $i\hbar\frac{\partial}{\partial t}\Psi={\hat{H}}\Psi$; in the time-independent case the solutions are the energy eigenstates. In the Heisenberg picture the operators carry the time dependence instead:

$i\hbar {\frac{d}{dt}}A(t)=[A(t),H_{0}],$

which is true for time-dependent $A = A(t)$. This picture is the closest to classical Hamiltonian mechanics, since the commutators appearing in the equation translate directly into classical Poisson brackets. The Dyson series, in which $\mathcal{T}$ is the time-ordering symbol, is used in perturbation theory. In interacting quantum field theories, Haag's theorem states that the interaction picture does not exist, because the Hamiltonian cannot be split into a free and an interacting part within a superselection sector.

The values of the conserved quantities of a quantum system are given by quantum numbers. Total angular momentum combines orbital and spin parts, $\mathbf{J}=\mathbf{L}+\mathbf{S}$, with magnitude $|\mathbf{J}|=\hbar{\sqrt{j(j+1)}}$ and z-components $L_{z}=m_{\ell}\hbar$ and $S_{z}=m_{s}\hbar$; quantum mechanical spin has no correspondence in classical physics. Uncertainty relations such as $\sigma(E)\sigma(t)\geq{\frac{\hbar}{2}}$ and $\sigma(n)\sigma(\phi)\geq{\frac{\hbar}{2}}$ set a definite theoretical limit to the values that can be simultaneously measured.

The first complete mathematical formulation of this approach is generally credited to John von Neumann's 1932 book Mathematical Foundations of Quantum Mechanics, although Hermann Weyl had already referred to Hilbert spaces (which he called unitary spaces) in 1927. The Stone-von Neumann theorem dictates that all irreducible representations of the Heisenberg canonical commutation relations are unitarily equivalent. In von Neumann's approach, the state transformation due to measurement is distinct from that due to time evolution: measurement is non-deterministic and non-unitary, which motivated research into so-called hidden-variable theories and into Everett's relative state interpretation, later dubbed the "many-worlds interpretation". The application of quantum theory to electromagnetism resulted in quantum field theory; quantum chromodynamics was formulated beginning in the early 1960s, and the theory as we know it today was formulated by Politzer, Gross and Wilczek in 1975. Fujita, Ho and Okamura (Fujita et al., 1989) developed a quantum theory of the Seebeck coefficient.
https://math.stackexchange.com/questions/846434/sums-of-power-law-random-variables | # Sums of Power Law random variables
Suppose $F$ is a Pareto distribution with scale parameter $x_m$ and shape parameter $\alpha$. Assume $X_1, X_2 , \dots, X_n$ are iid random variables drawn from $F$.
Let $S_n(k) = X_1 ^k + X_2 ^k + \dots + X_n ^k$.
Can we say anything about $\frac{S_n(k)}{S_n(1)}$ as $n \rightarrow \infty$ ?
Will it be easier to solve if the distribution $F$ is a power law but bounded (that is, $(\forall i)[1 \leq X_i \leq n]$)?
• My intuition is that the ratio will be constant or a non-degenerate random variable. I know that the sum of power-law random variables behaves the same as the maximum (in the asymptotic sense), therefore the sum of the $k^{th}$ powers would be even more skewed towards the maximum. Hence, the two terms would be of the same order. Jun 25, 2014 at 14:35
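A quick Monte Carlo check of this intuition (my own sketch, not part of the original thread; it uses the standard library's `random.paretovariate`, which samples a Pareto law with scale $x_m = 1$). For $\alpha > k$ both moments are finite, so by the law of large numbers $S_n(k)/S_n(1) \to E[X^k]/E[X]$:

```python
import random

def power_sum_ratio(n, alpha, k, seed=0):
    """Simulate S_n(k) / S_n(1) for n iid Pareto(x_m=1, alpha) draws."""
    rng = random.Random(seed)
    xs = [rng.paretovariate(alpha) for _ in range(n)]
    return sum(x ** k for x in xs) / sum(xs)

# alpha = 5 > k = 2: the ratio should concentrate near
# E[X^2] / E[X] = (5/3) / (5/4) = 4/3
print(power_sum_ratio(200_000, alpha=5.0, k=2))
```

For $1 < \alpha < k$, by contrast, the $k$-th moment is infinite and the numerator is dominated by the sample maximum, so the ratio grows with $n$ instead of settling.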
https://math.stackexchange.com/questions/1035960/finding-equation-of-circle-from-tangent-line-slope | # Finding Equation of Circle from Tangent Line Slope?
Two circles of radius 4 are tangent to the graph of y^2 = 4x at the point (1, 2). Find equations of these two circles. (Enter your answers as a comma-separated list.)
Ok, so I found the slope of y^2 = 4x at (1, 2) by differentiating implicitly: dy/dx = 2/y, which evaluated at (1, 2) gives a slope of 1, so the tangent line is y = x + 1.
How would I proceed from here? I know how to find the tangent line from a circle and a given point, but how would I do the opposite?
The distance from the center of each circle to the point of tangency must equal 4, and the radius to that point must be perpendicular to the tangent line.
The equation of the perpendicular line will be:
$$g(x)=-x+b$$ $$g(1)=-1+b=2$$ $$b=3$$ $$g(x)=-x+3$$
The distance between the center of the circle and the point (1,2) is exactly 4 (the radius). Using the distance formula:
$$(1-x_c)^2+(2-y_c)^2=4^2$$
Where $x_c$ and $y_c$ are the coordinates of the center of the circle. Since the slope of the perpendicular line is $-1$, the two differences are equal in magnitude but opposite in sign: $2-y_c=-(1-x_c)$. Let $a$ be $1-x_c$, so that $2-y_c=-a$; then $$a^2+a^2=4^2$$ $$2a^2=4^2$$ $$a^2=8$$ $$a=\pm2\surd2$$
Then $1-x_c=\pm2\surd2$ and $2-y_c=\mp2\surd2$, with opposite signs. $$x_c=1\mp2\surd2$$ $$y_c=2\pm2\surd2$$
The final equations are: $$Circle1: (x-(1+2\surd2))^2+(y-(2-2\surd2))^2=4^2$$ $$Circle2: (x-(1-2\surd2))^2+(y-(2+2\surd2))^2=4^2$$
• Wouldn't it be (x-[1+2sqrt(2)])^2 + (y-[2-2sqrt(2)])^2 and (x-[1-2sqrt(2)])^2 + (y-[2+2sqrt(2)])^2 due to the negative slope? Otherwise, this is the correct answer. Thanks! – Sentient Nov 24 '14 at 2:37
• Yes, sorry my bad. – RandomGuy Nov 24 '14 at 2:40
Hint. The radius of each circle connecting its center to point $(1,2)$ is perpendicular to the tangent line that you found, so these radii have slope the negative reciprocal of $1$, that is $-1$. So the two centers belong to the line $(y-2)=-(x-1)$, and are distance $4$ from $(1,2)$, so you could find these centers, and from there the equations of the circles.
suppose the center is $(a,b)$. as the radius is $4$ the circle must be: $$(x-a)^2+(y-b)^2 = 16$$ considering the slope of the tangent at $(1,2)$ gives: $$(1-a) = -(2-b)$$ and now also using the fact that $(1,2)$ is on the circle gives: $$2(1-a)^2 = 16 \\ a= 1 \pm 2\sqrt{2}$$ and the corresponding values of $b$ are obtained from $b=3-a$
Or you can use geometry:
Since the tangent and normal line are perpendicular by definition, construct two right triangles from the point $(1,2)$, where the normal line is acting as the hypotenuse. Because the slope of the normal line has an absolute value of one, in essence you construct an isosceles right triangle $(45-45-90)$. Given that the hypotenuse is a length of $4$, that means each leg has a length of $2\sqrt{2}$. Just move accordingly along the normal line.
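As a quick numerical check of the two centers found above (my own addition, standard library only): both candidate centers should lie at distance $4$ from the tangency point $(1,2)$, in the direction of slope $-1$.

```python
import math

s = 2 * math.sqrt(2)
centers = [(1 + s, 2 - s), (1 - s, 2 + s)]

for a, b in centers:
    dist = math.hypot(1 - a, 2 - b)   # should equal the radius, 4
    slope = (b - 2) / (a - 1)         # should be -1 (the normal direction)
    print(round(dist, 9), round(slope, 9))
```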
https://nigerianscholars.com/past-questions/mathematics/question/364382/ | Home » » Evaluate $$3.0\times 10^1 - 2.8\times 10^{-1}$$leaving the answer in standard f...
# Evaluate $$3.0\times 10^1 - 2.8\times 10^{-1}$$ leaving the answer in standard form
### Question
Evaluate $$3.0\times 10^1 - 2.8\times 10^{-1}$$ leaving the answer in standard form
### Options
A) $$2\times 10^{-1}$$
B) $$2\times 10^{2}$$
C) $$2.972 \times 10^{1}$$
D) $$2.972 \times 10^{2}$$
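A worked check of the arithmetic (my addition; the original page lists only the options): $$3.0\times 10^1 - 2.8\times 10^{-1} = 30 - 0.28 = 29.72 = 2.972\times 10^{1}$$ which is option C. In code:

```python
import math

value = 3.0e1 - 2.8e-1             # 30 - 0.28 = 29.72
n = math.floor(math.log10(value))  # exponent for standard form
a = value / 10 ** n                # mantissa, with 1 <= a < 10
print(f"{a:.3f} x 10^{n}")         # 2.972 x 10^1 -> option C
```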
http://axiv.cmsilsole.it/keras-image-regression-example.html | ## Keras Image Regression Example
As the name implies they use L1 and L2 norms respectively which are added to your loss function by multiplying it with a parameter lambda. Evaluate model on test data. Training the neural network model requires the following steps: Feed the training data to the model — in this example, the train_images and train_labels arrays. All three of them require data generator but not all generators are created equally. Represent each integer value as a binary vector that is all zero values except the index of the integer. To make such data amenable to softmax regression and MLPs, we first flattened each image from a $$28\times28$$ matrix into a fixed-length $$784$$ -dimensional vector. keras import layers Introduction. Example of using. mnist, a keras script which sets up a neural network to classify the MNIST digit image data. While PyTorch has a somewhat higher level of community support, it is a particularly verbose language and I personally prefer Keras for greater simplicity and ease of use in building. here the problem i am facing is when i predicting the angle using model. 1 Least-squares estimation To calibrate the linear regression model, we estimate the weight vector from the training data. Here are the steps for building your first CNN using Keras: Set up your environment. models import Sequential: from keras. Fit model on training data. layers import Dense import numpy as np. However, the linear regression model with the reciprocal terms also produces p-values for the predictors (all significant) and an R-squared (99. As usual, we’ll start by creating a folder, say keras-mlp-regression, and we create a model file named model. There are many test criteria to compare the models. 
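The one-hot scheme described above — a binary vector that is all zeros except at the index of the integer — can be sketched without any framework (my own illustration, plain Python; Keras itself provides `to_categorical` for this):

```python
def one_hot(index, num_classes):
    """Binary vector: all zeros except a 1 at the given index."""
    vec = [0] * num_classes
    vec[index] = 1
    return vec

labels = [2, 0, 1]
encoded = [one_hot(lbl, 3) for lbl in labels]
print(encoded)   # [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
```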
You’ll apply popular machine learning and deep learning libraries such as SciPy, ScikitLearn, Keras, PyTorch, and Tensorflow to industry problems involving object recognition, computer vision, image and video processing, text analytics, natural language processing (NLP), recommender systems, and other types of classifiers. Whether or not you should use an Activation as the last layer, and what kind of activation, depends on the range of the values you want to output (for instance: if you want to output negative and positive values, don't use ReLU, etc. This function requires the Deep Learning Toolbox™ Converter for ONNX Model Format support package. Preprocess class labels for Keras. Prediction is the final step and our expected outcome of the model generation. But that’s it for now. Easily explore Cloud AI model results The What-If Tool can be easily configured to analyze AI Platform Prediction-hosted classification or regression models. Epoch 3/10500/500 [=====] - 1257s 3s/step - loss: 0. Example how to train embedding layer using Word2Vec. This, I will do here. Keras: ResNet-50 trained on Oxford VGG Flower 17 dataset. , (32, 32, 3), (28, 28, 1). Neural Network in kero 6. linspace ( - 1 , 1 , 200 ) np. Delphi, C#, Python, Machine Learning, Deep Learning, TensorFlow, Keras Naresh Kumar http://www. Part 2: Regression with Keras and CNNs — training a CNN to predict house prices from image data (today's tutorial). Defaults to None. keras module defines save_model() and log_model() functions that you can use to save Keras models in MLflow Model format in Python. Linear regression model is trained to have weight w: 3. Statistics Solutions can assist with your quantitative analysis by assisting you to develop your methodology and results chapters. reshape (N, self. DNN Regressor in tensorflow (pre-processed using kero) 1. keras module defines save_model() and log_model() functions that you can use to save Keras models in MLflow Model format in Python. 
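The point about choosing the output activation can be made concrete (my own sketch, plain Python): ReLU clips negative values to zero and therefore cannot represent negative regression targets, whereas a linear (identity) output preserves the full range.

```python
def relu(x):
    # rectified linear unit: outputs are always >= 0
    return max(0.0, x)

def linear(x):
    # identity activation: full real-valued range
    return x

targets = [-2.5, 0.0, 3.1]
print([relu(v) for v in targets])    # negatives are lost
print([linear(v) for v in targets])  # full range kept
```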
Training the neural network model requires the following steps: Feed the training data to the model — in this example, the train_images and train_labels arrays. img_to_array(test_img) img_test = np. You can find a complete example of this strategy on applied on a specific example on GitHub where codes of data generation as well as the Keras script are available. While PyTorch has a somewhat higher level of community support, it is a particularly verbose language and I personally prefer Keras for greater simplicity and ease of use in building. VGG16 is a built-in neural network in Keras that is pre-trained for image recognition. Shaumik shows how to detect faces in images using the MTCNN model in Keras and use the VGGFace2 algorithm to extract facial features and match them in different images. The Fashion MNIST dataset is a part of the available datasets present in the tf. Once the model is fully defined, we have to compile it before fitting its parameters or using it for prediction. Generating images with Keras and TensorFlow eager execution. 2 regularization. 8 Train MSE 0. 0 yo 20 YO 20 yo Figure 2: Baseline linear model with and without 1. Example of using. To refresh your memory let’s put it all together in an single example. Building Logistic Regression Using TensorFlow 2. It is a binary classification task where the output of the model is a single number range from 0~1 where the lower value indicates the image is more "Cat" like, and higher value if the model thing the image is more "Dog" like. Fit model on training data. The first line of code below calls for the Sequential constructor. We ask the model to make predictions about a test set — in this example, the test_images array. Compare this with actual results for the first 4 images in the test set: y_test[:4] The output shows that the ground truth for the. Then we read the training data images. image import img_to_array, load_img img_path = 'img_56. 
Here we will focus on how to build data generators for loading and processing images in Keras. The model learns to associate images and labels. We have created a best model to identify the handwriting digits. Confidently practice, discuss and understand Deep Learning concepts; Have a clear understanding of Advanced Image Recognition models such as LeNet, GoogleNet, VGG16 etc. Once we execute the above code, Keras will build a TensorFlow model behind the scenes. This guide covers training, evaluation, and prediction (inference) models when using built-in APIs for training & validation (such as model. Keras Metrics Example. Then we are ready to build our very own image classifier model from scratch. The regression + Keras script is contained in mlp_regression. Prediction is the final step and our expected outcome of the model generation. Kaggle is the leading data science competition platform and provides a lot of datasets you can use to improve your skills. Keras allows you to run the same code on different back-ends. models import Sequential: from keras. You ask the model to make predictions about a test set—in this example, the test_images array. Ok, so you’ve gone a long way and learned a bunch. When I build a deep learning model, I always start with Keras so that I can quickly experiment with different architectures and parameters. Regression algorithms are mostly used to make predictions on numbers i. The test accuracy is 98. tanh, shared variables, basic arithmetic ops, T. We could use stochastic gradient descent (sgd) as well. If you never set it, then it will be "channels_last". Then we read the training data images. 4974 - classification_loss: 0. We show how to code them using Keras and TensorFlow eager execution. Example #4: Image Captioning with Attention In this example, we train our model to predict a caption for an image. Statistics Solutions can assist with your quantitative analysis by assisting you to develop your methodology and results chapters. 
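A batch data generator of the kind discussed here can be sketched framework-free (my own illustration; real Keras code would typically use `ImageDataGenerator` or a `Sequence` subclass instead). It yields `(image_batch, label_batch)` pairs indefinitely, which is the contract generator-based training loops expect:

```python
def batch_generator(images, labels, batch_size):
    """Yield (image_batch, label_batch) tuples, cycling over the data."""
    i = 0
    n = len(images)
    while True:
        idx = [(i + j) % n for j in range(batch_size)]
        yield ([images[k] for k in idx], [labels[k] for k in idx])
        i = (i + batch_size) % n

gen = batch_generator(["img0", "img1", "img2"], [0, 1, 2], batch_size=2)
print(next(gen))   # (['img0', 'img1'], [0, 1])
print(next(gen))   # (['img2', 'img0'], [2, 0])
```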
fit_image_data_generator: Fit image data generator internal statistics to some sample fit. Identify the Image Recognition problems which can be solved using CNN Models. img_to_array(test_img) test_img = np. Download it once and read it on your Kindle device, PC, phones or tablets. You can compute your gradient on just one example image and update the weights and biases immediately, but doing so on a batch of, for example, 128 images gives a gradient that better represents the constraints imposed by different example images and is therefore likely to converge towards the solution faster. Keras offers the very nice model. Prediction is the final step and our expected outcome of the model generation. What is the functionality of the data generator. h, 1) pairs = [test_image, support_set] targets = np. Linear regression is the simplest form of regression. 5 * X + 2 + np. However, the linear regression model with the reciprocal terms also produces p-values for the predictors (all significant) and an R-squared (99. Train the model with train dataset, evaluate the trained model with the validate dataset. its a regression problem to predict the angle of steering by providing image of camera installed front side of car. The resulting text, Deep Learning with TensorFlow 2 and Keras, Second Edition, is an obvious example of what happens when you enlist talented people to write a quality learning resource. To make such data amenable to softmax regression and MLPs, we first flattened each image from a $$28\times28$$ matrix into a fixed-length $$784$$ -dimensional vector. We can easily fit the regression data with Keras sequential model and predict the test data. Synthetic Regression. What is the functionality of the data generator. image-classification fine-tuning A Deep Learning Model that has been trained to recognize 1000 different objects. Keras models. for extracting features from an image then use the output from the Extractor to feed your SVM Model. Compile model. 
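The mini-batch idea mentioned here — a batch gives a gradient that better represents the constraints than a single example — is easy to demonstrate on the simplest model of all: fitting a line by mini-batch stochastic gradient descent (my own sketch, plain Python, on synthetic data y ≈ 0.5x + 2).

```python
import random

rng = random.Random(0)
# synthetic data: y = 0.5 * x + 2 plus a little noise
data = [(x, 0.5 * x + 2 + rng.gauss(0, 0.05))
        for x in [i / 50 - 1 for i in range(100)]]

w, b, lr, batch = 0.0, 0.0, 0.1, 16
for _ in range(500):
    sample = rng.sample(data, batch)  # one mini-batch
    gw = sum(2 * (w * x + b - y) * x for x, y in sample) / batch
    gb = sum(2 * (w * x + b - y) for x, y in sample) / batch
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # should land close to 0.5 and 2.0
```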
However, the linear regression model with the reciprocal terms also produces p-values for the predictors (all significant) and an R-squared (99. fit_image_data_generator: Fit image data generator internal statistics to some sample fit. Each image is a matrix with shape (28, 28). Neural Network with keras: Remainder Problem 2. The model learns to associate images and labels. It is believed to be the future of making neural networks. Keras allows you to run the same code on different back-ends. predict() , i get a constant value for all input. In this tutorial, we will present a simple method to take a Keras model and deploy it as a REST API. Train the model with train dataset, evaluate the trained model with the validate dataset. The model learns to associate images and labels. We will generate some (mostly) random data and then fit a line to it using stochastic gradient descent. >>> model = load_model() >>> print model Since, the VGG model is trained on all the image resized to 224x224 pixels, so for any new image that the model will make predictions upon has to be resized to these pixel values. We show how to code them using Keras and TensorFlow eager execution. Defaults to None. Download it once and read it on your Kindle device, PC, phones or tablets. Xval [true_category, ex2] support_set = support_set. For this example, these extra statistics can be handy for reporting, even though the nonlinear results are equally valid. predictions = model. hourly_wages, a keras script which uses a neural network to create a multivariable regression model from a set of hourly wage data. Then we read the training data images. It enables training highly accurate dense object detectors with an imbalance between foreground and background classes at 1:1000 scale. For example, the model focuses near the surfboard in the image when it predicts the word “surfboard”. 
Before we can train our Keras regression model we first need to load the numerical and categorical data for the houses dataset. The model learns to associate images and labels. Contrast this with a classification problem, where we aim to predict a discrete label (for example, where a picture contains an apple or an orange). Here we will focus on how to build data generators for loading and processing images in Keras. In fact, if you are working on a machine learning projects in general or preparing to become a data scientist, it’s kind of must for you to know the top evaluation metrics. Finally, here’s a tip every beginner should know: Don’t be discouraged is your algorithm is not as fast or fancy as those in existing packages. Feed the model. In a regression problem, we aim to predict the output of a continuous value, like a price or a probability. Session 05. All three of them require data generator but not all generators are created equally. 2 regularization. Next Steps : Try to put more effort on processing the dataset; Try other types of neural networks; Try to tweak the hyperparameters of the two models that we used; If you really want to get better at regression problems, follow this. AutoKeras accepts numpy. The number of output dimensions. Example code for this article can be found in this gist. See full list on hub. See full list on pyimagesearch. determine , which has a physical interpretation: an image of a 2D slice of a body in MRI, the spectrum of multisinusoidal signal in spectral super-resolution, re ection coe cients of strata in seismography, etc. These are stripped down versions compared to the inference model and only contains the layers necessary for training (regression and classification values). The main competitor to Keras at this point in time is PyTorch, developed by Facebook. 7$on the leaderboard. Logistic regression is used to predict the class (or category) of individuals based on one or multiple predictor variables (x). 
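Logistic regression, mentioned above, can be written from scratch in a few lines (my own sketch, plain Python, one predictor): pass a weighted input through the sigmoid, update by the gradient of the log-loss, and threshold at 0.5.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# toy 1-D data: class 1 whenever x > 0
data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    for x, y in data:
        p = sigmoid(w * x + b)
        w -= lr * (p - y) * x  # gradient of the log-loss w.r.t. w
        b -= lr * (p - y)      # gradient of the log-loss w.r.t. b

preds = [int(sigmoid(w * x + b) >= 0.5) for x, _ in data]
print(preds)   # [0, 0, 0, 1, 1, 1]
```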
Example how to train embedding layer using Word2Vec. If you never set it, then it will be "channels_last". Keras example image regression, extract texture height param Raw. com/profile/03334034022779238705 [email protected] But that’s it for now. Alibi is an open source Python library aimed at machine learning model inspection and interpretation. Those with less filters actually performed the best. For example, table 4 (regression of engineer income), table 6 (million song year regression), table 8 (letter recognition), table 9 (taxi time regression). This document contains a first look at an example of a convolutional neural network. imdb_bidirectional_lstm. Preprocess input data for Keras. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models. The code below is a snippet of how to do this, where the comparison is against the predicted model output and the training data set (the same can be done with the test_data data). Basic techniques of Computer Vision using OpenCV, such as thresholding, edge detection, etc. models import load_model from keras. Performing simple linear regression by hand Suppose you are using the following simple linear regression model to investigate the effect of studying on exam scores: score = Be + By hours + where SCORE number of points earned (out of 100) hours = number of hours spent studying or term You plan to calculate, by hand, a simple OLS regression of score on hours (score Be + B hours). jpg' img = load_img(img_path) # this is a PIL image x = img_to_array(img) Source. Model: Train a Keras model; fit_text_tokenizer: Update tokenizer internal vocabulary based on a list of texts flow_images_from_data: Generates batches. preprocessing. In keras you can load an image with: from keras. In this post we will learn a step by step approach to build a neural network using keras library for Regression. 
For this one also we will build the model and try to Improve Performance of model With Data Preparation technique like standardization and also by changing the topology of the neural network. For this example, we use a linear activation function within the keras library to create a regression-based neural network. Map categorical values to integer values. Training the neural network model requires the following steps: Feed the training data to the model — in this example, the train_images and train_labels arrays. As it falls under Supervised Learning, it works with trained data to predict new test data. In this part, I will cover linear regression with a single-layer network. utils import preprocess_input test_img = image. Synthetic Regression 2. If you use an appropriate method to choose the threshold, this should give you a score around$0. Training a model in Keras literally consists only of calling fit() and specifying some parameters. There are many test criteria to compare the models. These examples are extracted from open source projects. You can train the imported layers on a new data set or assemble the layers into a network ready for prediction. If you wish to do inference on a model (perform object detection on an. fit and pass in the training data and the expected output. These are keras models which do not use TensorFlow examples as an input format. scikit_learn. What I did not show in that post was how to use the model for making predictions. The number of output dimensions. eager_styletransfer: Neural style transfer with eager execution. These are just a few of many examples of how image classification will ultimately shape the future of the world we live in. Contrast this with a classification problem, where we aim to select a class from a list of classes (for example, where a picture contains an apple or an orange, recognizing which fruit is in. We show how to code them using Keras and TensorFlow eager execution. 
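Standardization, one of the data-preparation techniques mentioned above, rescales each feature to zero mean and unit variance; a minimal plain-Python sketch of my own (in practice one would usually reach for scikit-learn's `StandardScaler`):

```python
import math

def standardize(values):
    """Rescale to zero mean and unit (population) standard deviation."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / math.sqrt(var) for v in values]

scaled = standardize([2.0, 4.0, 6.0, 8.0])
print(scaled)  # symmetric around 0, with unit variance
```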
py # -*- coding: utf-8 -*-import numpy as np: import os: import cv2: import pandas as pd: from sklearn. In a regression problem, we aim to predict the output of a continuous value, like a price or a probability. We can easily fit the regression data with Keras sequential model and predict the test data. Written by Keras creator and Google AI researcher François Chollet, this book builds your understanding through intuitive explanations and practical examples. When applied to neural networks, this involves both discovering the model architecture and the hyperparameters used to train the model, generally referred to as neural archi. So, let’s take a look at an example of how we can build our own image classifier. fit(X_train, Y_train) # Plot outputs. We ask the model to make predictions about a test set — in this example, the test_images array. Compared to traditional point load forecasting, probabilistic load forecasting (PLF) has great significance in advanced system scheduling and planning with higher reliability. The number of epochs (iterations over the entire dataset) to train for. So first we need some new data as our test data that we’re going to use for predictions. , (32, 32, 3), (28, 28, 1). py which we'll be reviewing it as well. (28 sequences of 28 elements). Compiling a model can be done with the method compile, but some optional arguments to it can cause trouble when converting from R types so we provide a custom wrapper keras_compile. flow(x, y):. Regression is a process where a model learns to predict a continuous value output for a given input data, e. Is your Machine Learning project on a budget, and does it only need CPU power? Luckily, we have got you covered in this article, where we show you the necessary steps to deploy a model in a simple and cheap way (requiring no huge time investment). fit_generator: Fits the model on data yielded batch-by-batch by a generator. 
This is a jupyter notebook for regression model using Keras for predicting the House prices using multi-modal input (Numerical Data + Images). The output will show probabilities for digits 0-9, for each of the 4 images. Keras data types (dtypes) are the same as TensorFlow Python data types, as shown in the following table:Python typeDescriptiontf. Additionally, it uses the following new Theano functions and concepts: T. This is useful to annotate TensorBoard graphs with semantically meaningful names. There are plenty of deep learning toolkits that work on top of it like Slim, TFLearn, Sonnet, Keras. We ask the model to make predictions about a test set — in this example, the test_images array. Brief introduction to Multi-layer Perceptron and Convolutional Neural Networks. output_dim Optional[int]: Int. Once the model is fully defined, we have to compile it before fitting its parameters or using it for prediction. This function requires the Deep Learning Toolbox™ Converter for ONNX Model Format support package. It keeps track of the evolutions applied to the original blurred. To accomplish this, we first have to create a function that returns a compiled neural network. keras/keras. def who_is_it(image_path, database, model): “”” Implements face recognition for the happy house by finding who is the person on the image_path image. Deep Learning with TensorFlow 2 and Keras: Regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API, 2nd Edition - Kindle edition by Gulli, Antonio, Kapoor, Amita, Pal, Sujit. To do that you can use pip install keras==0. We also compared different architectures. The images in the MNIST dataset do not have the channel dimension. Start Writing . This will give you a tensor of shape (channels, height, width), where channels is typically 3 for an RGB image. mnist, a keras script which sets up a neural network to classify the MNIST digit image data. 
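The "probabilities for digits 0-9" described above come from a softmax output layer; numerically it looks like this (my own sketch, plain Python):

```python
import math

def softmax(logits):
    """Exponentiate (shifted for numerical stability) and normalize to sum to 1."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 5.0, 0.5, 1.5, 0.0, 0.2, 0.1, 3.0, 1.0])
print(max(range(10), key=lambda i: probs[i]))  # index of the largest logit: 2
```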
For example, suppose we have two IVs, one categorical and once continuous, and we are looking at an ATI design. Install Keras. Tweet Share Share AutoML refers to techniques for automatically discovering the best-performing model for a given dataset. Convolutional Neural Network with keras: MNIST. Once the model is fully defined, we have to compile it before fitting its parameters or using it for prediction. validation_split: Float. Alibi is an open source Python library aimed at machine learning model inspection and interpretation. Getting deeper with Keras Tensorflow is a powerful and flexible tool, but coding large neural architectures with it is tedious. Image classification - over 1000 types of general objects. We compute the gradient of output category with respect to input image. We will concentrate on a Supervised Learning Classification problem and learn how to implement a Deep Neural Network in code using Keras. Work your way from a bag-of-words model with logistic regression to more advanced methods leading to convolutional neural networks. models import Sequential: from keras. com Blogger. Berkeley Electronic Press Selected Works. The Fashion MNIST dataset is a part of the available datasets present in the tf. MNIST Example. Define model architecture. There's ten classes, one for each digit between 0 and 9, and the input is grayscale images of handwritten digits of size 28x28. Example of using. Synthetic Regression. A machine learning algorithm should decide how to utilize the difference between the predicted value and actual value to adjust the weights so that the model converges. keras/keras. When applied to neural networks, this involves both. The network is based on ResNet blocks. 050, the value is considered significant. I have added some code to visualize the confusion matrix of the trained model on unseen test data splitted using scikit-learn and. 0732 Inference. 
If you are using REST APIs or Python Client, retrain the model using the latest modeler and save the model in Watson Machine Learning repository with the model type ‘spss-modeler-18. Contrast this with a classification problem, where we aim to select a class from a list of classes (for example, where a picture contains an apple or an orange, recognizing which fruit is in. Training a model in Keras literally consists only of calling fit() and specifying some parameters. I have copied the data to my…. The way we are going to achieve it is by training an artificial neural network on few thousand images of cats and dogs and make the NN(Neural Network) learn to predict which class the image belongs to, next time it sees an image having a cat or dog in it. Importing the basic libraries and reading the dataset. What is the functionality of the data generator. In this tutorial we are going to do a quick and dirty estimation of house prices based on a dataset from a Kaggle competition. Before you go, check out these stories! 0. As a pre-requisite, I have posted some Python Tutorial Series (both are in progress and ongoing series) and tons more Here are some slides:. eager_image_captioning: Generating image captions with Keras and eager execution. Keras models. Using Keras and Deep Q-Network to Play FlappyBird. Basic techniques of Computer Vision using OpenCV, such as thresholding, edge detection, etc. 5705 - regression_loss: 0. The generator aims at reproducing sharp images. cross_validation import train_test_split: from keras. Batch size refers to the number of training examples utilized in one iteration. Is your Machine Learning project on a budget, and does it only need CPU power? Luckily, we have got you covered in this article, where we show you the necessary steps to deploy a model in a simple and cheap way (requiring no huge time investment). flow(x, y):. Problem Definition. 
For an example of the workflow of assembling a network, see Assemble Network from Pretrained Keras Layers. On the positive side, we can still scope to improve our model. We will generate some (mostly) random data and then fit a line to it using stochastic gradient descent. Arguments: image_path — path to an image database — database containing image encodings along with the name of the person on the image model — your Inception model instance in Keras. Loading the House Prices Dataset Figure 4: We'll use Python and pandas to read a CSV file in this blog post. We then add our imports: # Load dependencies from keras. com/profile/03334034022779238705 [email protected] If None, it will be inferred from the data. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. dtype: Dtype to use for the generated arrays. Regression is a process where a model learns to predict a continuous value output for a given input data, e. h, 1) pairs = [test_image, support_set] targets = np. So Keras is high-level API wrapper for the low-level API, capable of running on top of TensorFlow, CNTK, or Theano. Image Retrieval by Similarity using Tensorflow and Keras This tutorial will cover all the details (resources, tools, languages etc) that are necessary for image retrieval. One can setup an experiment with 100 people in data-set. As it falls under Supervised Learning, it works with trained data to predict new test data. Posted by: Chengwei 1 year, 8 months ago () The focal loss was proposed for dense object detection task early this year. Written by Keras creator and Google AI researcher François Chollet, this book builds your understanding through intuitive explanations and practical examples. 9%), none of which you can get for a nonlinear regression model. 0 CNN Model Architecture. What I did not show in that post was how to use the model for making predictions. 
Using Keras and Deep Q-Network to Play FlappyBird. Performing simple linear regression by hand Suppose you are using the following simple linear regression model to investigate the effect of studying on exam scores: score = Be + By hours + where SCORE number of points earned (out of 100) hours = number of hours spent studying or term You plan to calculate, by hand, a simple OLS regression of score on hours (score Be + B hours). The resulting text, Deep Learning with TensorFlow 2 and Keras, Second Edition, is an obvious example of what happens when you enlist talented people to write a quality learning resource. To do that you can use pip install keras==0. Fraction of images reserved for validation (strictly between 0 and 1). Logistic Regression model is created to train these features and labels. fit and pass in the training data and the expected output. reshape() method to perform this action. This guide uses tf. July 10, 2016 200 lines of python code to demonstrate DQN with Keras. We ask the model to make predictions about a test set — in this example, the test_images array. cross_validation import train_test_split: from keras. float1616-bit floating pointtf. We conduct our experiments using the Boston house prices dataset as a small suitable dataset which facilitates the experimental settings. Note that we would be using the Sequential model because our network consists of a linear stack of layers. We model our system with a linear combination of features to produce one output. Linear Regression in 2D: example 21. Synthetic Regression. Compare this with actual results for the first 4 images in the test set: y_test[:4] The output shows that the ground truth for the. Posted by: Chengwei 1 year, 8 months ago () The focal loss was proposed for dense object detection task early this year. 8 Train MSE 0. layers import Dense import matplotlib. Batch size refers to the number of training examples utilized in one iteration. Image Classification. 
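To make the regression setting concrete before any Keras code, here is a hypothetical NumPy-only stand-in for the numerical branch of the house-price problem: a continuous target is predicted from a few numerical features by closed-form least squares. All feature names, weights, and values below are invented for illustration.

```python
import numpy as np

# Hypothetical stand-in for the numerical branch of the house-price problem:
# predict a continuous target ("price") from a few numerical features using
# the closed-form least-squares solution. Weights and values are invented.
rng = np.random.default_rng(0)

n_samples, n_features = 200, 3
X = rng.normal(size=(n_samples, n_features))        # e.g. area, rooms, age
true_w = np.array([50.0, 30.0, -5.0])               # "ground truth" weights
y = X @ true_w + 100.0 + rng.normal(scale=0.1, size=n_samples)

# Append a bias column and solve the normal equations via lstsq.
Xb = np.hstack([X, np.ones((n_samples, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

pred = Xb @ w                                       # continuous-valued output
mse = float(np.mean((pred - y) ** 2))
```

A neural network replaces the fixed linear map with a learned nonlinear one, but the input/output contract (numerical features in, one continuous value out) is the same.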
Training a model in Keras consists of calling `fit()` and specifying some parameters: the training data (images and labels, commonly known as X and Y respectively), the number of epochs (iterations over the entire dataset) to train for, and the batch size (the number of training examples utilized in one iteration). An optional `validation_split`, a float strictly between 0 and 1, reserves a fraction of the images for validation, and `fit_generator` fits the model on data yielded batch-by-batch by a generator. Keras runs the training process and prints the progress to the console.

Training the neural network model then requires the following steps: feed the training data (in this example, the `train_images` and `train_labels` arrays) to the model; the model learns to associate images and labels; finally, ask the model to make predictions about a test set (in this example, the `test_images` array) and verify that the predictions match the labels from `test_labels`.

For the classification examples, the MNIST dataset consists of 28 x 28 grayscale images of handwritten digits. There are ten classes, one for each digit between 0 and 9, and the dataset includes a label for each image telling us which digit it is. The related Fashion MNIST dataset, part of the datasets available in `tf.keras`, contains 70 thousand images of fashion objects spread across 10 categories such as shoes, bags, and T-shirts.
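The fit-then-evaluate-then-predict workflow described above can be sketched without any deep-learning framework at all. Below is a NumPy-only logistic-regression classifier trained on synthetic data; the variable names mirror the Keras-style workflow, but nothing here is Keras-specific and the dataset is invented for illustration.

```python
import numpy as np

# NumPy-only sketch of the fit / evaluate / predict workflow: split the data,
# train a logistic-regression classifier by gradient descent, then score it
# on the held-out set. The dataset is two synthetic Gaussian blobs.
rng = np.random.default_rng(1)

X = np.vstack([rng.normal(loc=-2.0, size=(100, 2)),   # class 0
               rng.normal(loc=+2.0, size=(100, 2))])  # class 1
y = np.array([0] * 100 + [1] * 100)

idx = rng.permutation(len(X))                         # shuffle, split 80/20
X, y = X[idx], y[idx]
X_train, X_test = X[:160], X[160:]
y_train, y_test = y[:160], y[160:]

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(200):                              # full-batch "epochs"
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))      # sigmoid probabilities
    w -= lr * X_train.T @ (p - y_train) / len(X_train)
    b -= lr * float(np.mean(p - y_train))

p_test = 1.0 / (1.0 + np.exp(-(X_test @ w + b)))      # like model.predict(...)
y_pred = (p_test > 0.5).astype(int)
accuracy = float(np.mean(y_pred == y_test))           # like model.evaluate(...)
```

The held-out split is the point: the loop only ever sees `X_train`, and `accuracy` is measured on data the model never touched during fitting.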
Prediction is the final step and our expected outcome of the model generation: `model.predict(test_images)` returns, for each test image, the probabilities for the digits 0 to 9, and the predictions for the first four images can be compared with the ground truth in `y_test[:4]`. As a synthetic-regression warm-up, we also generate some (mostly) random data and fit a line to it using stochastic gradient descent.

Next steps: put more effort into processing the dataset, try other types of neural networks, and tweak the hyperparameters of the two models used. Conclusion: the training data is used to find the optimal model, but the model should ultimately work for the test data.
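As mentioned above, one of the simplest regression exercises is to generate some (mostly) random data and fit a line to it with stochastic gradient descent. A NumPy-only sketch (the true slope/intercept, learning rate, and epoch count are arbitrary choices for illustration):

```python
import numpy as np

# Generate (mostly) random data around the line y = 3x + 2, then recover the
# slope and intercept with plain stochastic gradient descent: one sample per
# parameter update.
rng = np.random.default_rng(42)

x = rng.uniform(-1.0, 1.0, size=500)
y = 3.0 * x + 2.0 + rng.normal(scale=0.05, size=500)

a, b, lr = 0.0, 0.0, 0.1
for epoch in range(20):
    for i in rng.permutation(len(x)):      # visit samples in random order
        err = (a * x[i] + b) - y[i]        # residual for this one sample
        a -= lr * err * x[i]               # gradient step on the slope
        b -= lr * err                      # gradient step on the intercept
```

This is exactly what an optimizer does inside `fit()`, just written out by hand for a one-parameter-per-coefficient model.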
https://www.semanticscholar.org/paper/Singularities-in-the-Andreev-spectrum-of-a-junction-Yokoyama-Nazarov/5e71351755bac5f3497931cd7ea214892d2feae8

# Singularities in the Andreev spectrum of a multiterminal Josephson junction
@article{Yokoyama2015SingularitiesIT,
title={Singularities in the Andreev spectrum of a multiterminal Josephson junction},
author={Tomohiro Yokoyama and Yuli V. Nazarov},
journal={Physical Review B},
year={2015},
volume={92},
pages={155437}
}
• Published 1 August 2015
• Physics
• Physical Review B
The energies of Andreev bound states (ABS) forming in a $N$-terminal junction are affected by $N - 1$ independent macroscopic phase differences between superconducting leads and can be regarded as energy bands in $N - 1$ periodic solid owing to the $2\pi$ periodicity in all phases. We investigate the singularities and peculiarities of the resulting ABS spectrum combining phenomenological and analytical methods and illustrating with the numerical results. We pay special attention on spin-orbit…
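For orientation, the standard two-terminal short-junction result (textbook background, not a result of this paper) already shows the $2\pi$ periodicity in phase that the abstract generalizes to $N - 1$ independent phases:

```python
import numpy as np

# Textbook background, not taken from this paper: for a single channel of
# transparency tau between two superconducting terminals, the short-junction
# Andreev bound state sits at E(phi) = +/- Delta*sqrt(1 - tau*sin^2(phi/2)),
# which is 2*pi-periodic in the phase difference phi. With N terminals, the
# abstract's point is that E depends on N-1 such phases, giving "bands".
Delta, tau = 1.0, 0.8

def abs_energy(phi):
    return Delta * np.sqrt(1.0 - tau * np.sin(phi / 2.0) ** 2)

phi = np.linspace(0.0, 2.0 * np.pi, 201)
E = abs_energy(phi)                       # the positive-energy branch

gap_at_pi = abs_energy(np.pi)             # band minimum: sqrt(1 - tau)*Delta
periodic = bool(np.isclose(abs_energy(0.3), abs_energy(0.3 + 2.0 * np.pi)))
```

For transparency $\tau < 1$ this band never reaches zero energy; the follow-up works cited below discuss when protected zero-energy (Weyl) crossings can occur once more terminals and phases are available.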
26 Citations
Josephson junctions of multiple superconducting wires (Physics, 2018)
We study the spectrum of Andreev bound states and Josephson currents across a junction of $N$ superconducting wires which may have $s$- or $p$-wave pairing symmetries and develop a scattering matrix…

Order, disorder, and tunable gaps in the spectrum of Andreev bound states in a multiterminal superconducting device (Physics, 2017)
We consider the spectrum of Andreev bound states (ABSs) in an exemplary four-terminal superconducting structure where four chaotic cavities are connected by quantum point contacts to the terminals…

Topological phase diagram of a three-terminal Josephson junction: From the conventional to the Majorana regime (Physical Review B, 2019)
We study the evolution of averaged transconductances in three-terminal Josephson junctions when the superconducting leads are led throughout a topological phase transition from an $s$-wave to a…

Nodal Andreev spectra in multi-Majorana three-terminal Josephson junctions
We investigate the Andreev-bound-state (ABS) spectra of three-terminal Josephson junctions which consist of 1D topological superconductors (TSCs) harboring multiple zero-energy edge Majorana bound…

Topological transconductance quantization in a four-terminal Josephson junction (Physics, 2017)
Recently we predicted that the Andreev bound-state spectrum of four-terminal Josephson junctions may possess topologically protected zero-energy Weyl singularities, which manifest themselves in a…

Drastic effect of weak interaction near special points in semiclassical multiterminal superconducting nanostructures (Physics, 2021)
A generic semiclassical superconducting nanostructure connected to multiple superconducting terminals hosts a quasi-continuous spectrum of Andreev states. The spectrum is sensitive to the…

Majorana-Weyl crossings in topological multiterminal junctions (Physical Review B, 2019)
We analyze the Andreev spectrum in a four-terminal Josephson junction between one-dimensional topological superconductors in class D. We find that a topologically protected crossing in the space of…

Weyl nodes in Andreev spectra of multiterminal Josephson junctions: Chern numbers, conductances and supercurrents (Physics, 2017)
We consider mesoscopic four-terminal Josephson junctions and study emergent topological properties of the Andreev subgap bands. We use symmetry-constrained analysis for Wigner-Dyson classes of…

Conductance quantization in topological Josephson trijunctions (Physics, 2019)
The Josephson current flowing in a junction between two superconductors is a striking manifestation of macroscopic quantum coherence, with applications in metrology and quantum information. This…

Multiterminal Josephson Effect
Establishment of phase-coherence and a non-dissipative (super)current between two weakly coupled superconductors, known as the Josephson effect, plays a foundational role in basic physics and…
https://www.nature.com/articles/s41598-018-32036-7?error=cookies_not_supported&code=1a8bef66-5602-41d5-a4d2-008b1fcf2094 | Article | Open | Published:
# Alterations of blood pulsations parameters in carotid basin due to body position change
## Abstract
The velocity of the pulse wave (PWV) propagating through the vascular tree is an essential parameter for diagnosing the state of the cardiovascular system, especially when it is measured in the pool of the carotid arteries. In this research, we showed for the first time that the time of blood-pressure-wave propagation from the heart to the face is a function of body position. Significant asymmetry and asynchronicity of blood pulsations in the facial area were found in the recumbent position. Parameters of blood pulsations were measured by an advanced camera-based photoplethysmography system in 73 apparently healthy subjects. Most likely, the observed changes of the blood-pulsation parameters are caused by variations of arterial blood pressure due to hydrostatic pressure changes, together with a secondary reaction of blood vessels in response to these variations. The demonstrated feasibility of PWV measurements in the pool of carotid arteries provides considerable advantages over other technologies. Moreover, the ability of the method to assess physiological regulation of the peripheral blood flow (particularly as a response to gravitational changes) has been demonstrated. The proposed concept allows the development of non-invasive medical equipment capable of solving a wide range of scientific and practical problems related to vascular physiology.
## Introduction
Pulse wave velocity (PWV) is a parameter which measures how fast a pulse of blood pressure propagates through the vascular system. Propagation of the pressure wave from the heart to the periphery might be affected by the state of the cardiovascular system, including effects of vascular diseases1,2, blood pressure3,4, loss of compliance with age5,6 and low-frequency autonomic control7. An established methodology to assess PWV is to estimate the pulse transit time (PTT), i.e., the difference in pulse-wave arrival times at two different sites (usually in the vicinity of the carotid and femoral arteries), and to measure the distance between these sites5,8. Arterial applanation tonometry, achieved by pressing mechanotransducers or piezoelectric sensors against the skin at the carotid, radial, or femoral arteries, is typically used for noninvasive PWV estimation9. In an alternative technique, the pulse-arrival time in a finger (or toe) is measured by photoplethysmography (PPG) to estimate the delay time with respect to the R-wave of a simultaneously recorded electrocardiogram (ECG)2,4,10,11. It was shown that PWV depends on the blood pressure level: the higher the pressure, the faster the wave travels, and the shorter the PTT2,5,8. Repeatability of PWV measurements is usually considered to be very good12,13. In the past few years, several research groups have proposed cuff-less blood pressure (BP) monitoring based on the relationship between systolic BP and PTT, the latter measured by PPG14,15,16,17,18. However, only a few experimental studies have examined the dependence of pulse-wave propagation on body position using single-point, contact-type PPG sensors19,20,21,22,23. It thus remains unclear what factors affect the propagation of the pressure pulse generated by ventricular ejection through the vascular system.
Currently, most studies are devoted to PWV estimation for pulse waves propagating in vessels situated below the heart level (in the upper and lower limbs). However, variations of PWV are more informative in the pool of the carotid arteries: it is the vessels of this particular pool that are associated with a worsening of the prognosis of patients with essential hypertension24. At the same time, in recent years, with the development of the PPG technique, it has become possible to assess PWV in these most important regions. Recent advances of the camera-based (imaging) PPG technique (a modality in which blood pulsations are visualized simultaneously over a large region by means of a digital camera25,26) allowed accurate measurements of the PTT and its spatial distribution over large areas of the body, including the head27. The aims of this research were to demonstrate the feasibility of a camera-based PPG system for measuring PTT in the pool of carotid arteries and to analyse the influence of gravitation on this parameter. By using this technique, we measured the spatial distribution of both the blood pulsations amplitude (BPA) and PTT on the subject’s face. It was revealed that both maps are significantly altered after the subject changes from the sedentary position to a recumbent one or vice versa.
## Results
### Mapping Pulse Transit Time and Blood Pulsations Amplitude
Examples of PTT and blood pulsations amplitude (BPA) maps calculated for two representative subjects are shown in Figs 1 and 2, respectively. These maps are overlaid on one of the image frames of the subject’s face. The transit time and pulsations amplitude are coded in pseudo-colour with the scale shown on the right side of each map. It is seen that the spatial distribution of both PTT and BPA is heterogeneous in the facial area. Such heterogeneity was found for all studied subjects. It is worth noting that the PTT and BPA patterns differ significantly between the sedentary and recumbent positions. It usually takes longer for the pulse wave to reach the facial area in the sedentary position than in any recumbent one: the maps are more bluish in the sedentary position (Fig. 1b,d), whereas they are greenish in the recumbent positions (Fig. 1a,c,e,f). Note that the PTT scale remains the same for different positions of each subject. The mean value of PTT in the sedentary position measured for the whole cohort was 160 ± 21 ms, whereas it was 124 ± 24 ms in the recumbent position, P < 0.001. Here and below the values are presented as Mean ± Standard Deviation.
Similarly, the amplitude of blood pulsations also differs between the sedentary and recumbent positions. The mean BPA for the whole cohort measured in the sedentary position was 1.59 ± 0.75% versus 1.34 ± 0.58% in the recumbent positions, P < 0.001. Nevertheless, the spatial pattern of the BPA distribution may vary significantly: see Fig. 2a,c. For quantitative estimation of the PTT/BPA dependence on body position, we manually selected larger ROIs of 42 × 42 pixels on the right and left cheeks (shown in Figs 1, 2 by red and blue squares, respectively) and averaged the parameters within these ROIs.
Interestingly, no significant differences in blood pulsation parameters were found between the younger and older groups of subjects, divided with respect to the median age (24 years) of the whole cohort. The parameter PTT averaged over both cheeks and all positions in the younger group (131 ± 16 ms) was almost the same as in the older group (134 ± 18 ms), P = 0.57, and the mean BPA was 1.435 ± 0.39% and 1.431 ± 0.54%, P = 0.97, for the younger and older groups, respectively. Moreover, no correlation between PTT and the subject’s age (r = −0.03, P = 0.79), or between BPA and age (r = 0.22, P = 0.2), was observed.
### Dependences of PTT and BPA on the body position
Figure 3a shows the histogram of the PTT-difference distribution among the studied participants. This difference between PTT measured in the sedentary and recumbent positions was calculated separately for left ($${\rm{\Delta }}PT{T}_{L}$$) and right ($$\Delta PT{T}_{R}$$) decubitus as
$$\begin{array}{c}{\rm{\Delta }}PT{T}_{L}=0.5(PT{T}_{S}^{R}+PT{T}_{S}^{L}-PT{T}_{L}^{R}-PT{T}_{L}^{L}),\\ {\rm{\Delta }}PT{T}_{R}=0.5(PT{T}_{S}^{R}+PT{T}_{S}^{L}-PT{T}_{R}^{R}-PT{T}_{R}^{L}),\end{array}$$
(1)
where $$PT{T}_{S}^{R}$$ and $$PT{T}_{S}^{L}$$ are the transit times measured in the sedentary position within the big ROIs at the right and left cheeks, respectively; $$PT{T}_{L}^{R}$$, $$PT{T}_{L}^{L}$$ and $$PT{T}_{R}^{R}$$, $$PT{T}_{R}^{L}$$ are the similar parameters measured in the left and right decubitus, respectively. The histogram in Fig. 3a includes both ΔPTTL and ΔPTTR. It is clearly seen that only one subject in one recumbent position has a PTT slightly longer than in the sedentary position. For most participants the situation was reversed: the pulse wave propagated faster in the recumbent position. The median decrease of PTT in the recumbent position was 38.2 ms, and a ΔPTT larger than 14 ms was observed in 94% of subjects. Note that a ΔPTT of 38 ms corresponds to about a 25% decrease of the PTT measured in the sedentary position.
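Equation (1) amounts to comparing the cheek-averaged PTT between the two positions. A minimal sketch with hypothetical readings chosen near the cohort means quoted in the text (the numbers are invented for illustration):

```python
def delta_ptt(ptt_sed_r, ptt_sed_l, ptt_rec_r, ptt_rec_l):
    """Eq. (1): PTT change between the sedentary and one recumbent
    position, averaged over the right and left cheek ROIs (values in ms)."""
    return 0.5 * (ptt_sed_r + ptt_sed_l - ptt_rec_r - ptt_rec_l)

# Hypothetical readings close to the reported cohort means:
d = delta_ptt(162, 158, 125, 119)  # positive: faster arrival when recumbent
print(d)  # → 38.0 ms, close to the reported median of 38.2 ms
```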
The difference of the pulsation amplitude, ΔBPA, was also estimated for the left (ΔBPAL) and right (ΔBPAR) decubitus positions separately. In contrast to the estimation of ΔPTT, it was calculated in relative units:
$$\begin{array}{c}{\rm{\Delta }}BP{A}_{L}=\frac{BP{A}_{S}^{R}+BP{A}_{S}^{L}-BP{A}_{L}^{R}-BP{A}_{L}^{L}}{BP{A}_{S}^{R}+BP{A}_{S}^{L}}100 \% ,\\ {\rm{\Delta }}BP{A}_{R}=\frac{BP{A}_{S}^{R}+BP{A}_{S}^{L}-BP{A}_{R}^{R}-BP{A}_{R}^{L}}{BP{A}_{S}^{R}+BP{A}_{S}^{L}}100 \% ,\end{array}$$
(2)
Here $$BP{A}_{S}^{R}$$ and $$BP{A}_{S}^{L}$$ are the blood pulsation amplitudes measured in the sedentary position within the same big ROIs at the right and left cheeks, respectively; $$BP{A}_{L}^{R}$$, $$BP{A}_{L}^{L}$$ and $$BP{A}_{R}^{R}$$, $$BP{A}_{R}^{L}$$ are the similar parameters measured in the left and right decubitus positions, respectively. The histogram of the ΔBPA distribution among the studied participants is shown in Fig. 3b. One can see that the mean amplitude of blood pulsations in the cheeks decreases for the majority of studied participants (76.6%) when a subject changes from the sedentary position to a recumbent one. The median relative change of BPA observed in the studied cohort was 17.1%. Nevertheless, some subjects show an increase of mean BPA in a recumbent position as compared to the sedentary one.
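Unlike Eq. (1), Eq. (2) normalizes the change by the sedentary value. A sketch with invented readings near the reported cohort means:

```python
def delta_bpa_percent(bpa_sed_r, bpa_sed_l, bpa_rec_r, bpa_rec_l):
    """Eq. (2): relative BPA change between the sedentary and one
    recumbent position, in percent of the sedentary value."""
    sed = bpa_sed_r + bpa_sed_l
    return (sed - bpa_rec_r - bpa_rec_l) / sed * 100.0

# Hypothetical BPA readings (in %) near the reported cohort means:
d = delta_bpa_percent(1.6, 1.6, 1.3, 1.35)  # ≈ 17.2, near the 17.1% median
```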
### Asymmetry and asynchronicity of blood pulsations
In addition to changes of the mean BPA/PTT with body position, our experiments revealed significant variations in the spatial distribution of these parameters. The PTT and BPA maps of the facial area have an individual spatial distribution for each subject, as one can see in Figs 1 and 2. Sometimes these distributions are clearly asymmetric (for example, Figs 1d and 2a, or Fig. 2c). Moreover, the degree of asymmetry varies with the change of position. In the BPA map of Fig. 2a, the amplitude of pulsations in the left cheek (1.40%) is much higher than in the right cheek (0.86%). In contrast, for the same person in the other recumbent position (Fig. 2c), the BPA in the left cheek (1.09%) becomes smaller than in the right (1.63%). Asymmetry of the PTT maps means asynchronicity of blood pulsations on the right and left sides. For quantitative estimation of the degrees of asymmetry and asynchronicity, we calculated the respective coefficients SPTT and SBPA as
$$\begin{array}{c}{{S}}_{{PTT}}=\frac{2({PT}{{T}}^{{R}}-{PT}{{T}}^{{L}})}{{PT}{{T}}^{{R}}+{PT}{{T}}^{{L}}},\\ {{S}}_{{BPA}}=\frac{2({BP}{{A}}^{{R}}-{BP}{{A}}^{{L}})}{{BP}{{A}}^{{R}}+{BP}{{A}}^{{L}}},\end{array}$$
(3)
where PTTR and BPAR are measured in the big ROIs at the right cheek, whereas PTTL and BPAL are for the left cheek. The parameters SPTT and SBPA were measured for each subject in three positions: sedentary, right and left decubitus. Histograms of the results obtained for the whole cohort of the participants are shown in Fig. 4.
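Both coefficients in Eq. (3) are the same signed, normalized right-left difference. A minimal sketch with hypothetical left-decubitus PTT readings matching the upper/lower cheek means reported below (the specific numbers are invented for illustration):

```python
def asymmetry(right, left):
    """Eq. (3): signed relative right-left difference; the same formula
    gives S_PTT (from PTT values) and S_BPA (from BPA values)."""
    return 2.0 * (right - left) / (right + left)

# Left decubitus (right cheek up): hypothetical PTTs in ms.
s_ptt = asymmetry(133.0, 112.0)  # ≈ +0.17: the upper cheek lags
```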
As one can see, in the sedentary position both histograms (Fig. 4a for the degree of asynchronicity and Fig. 4c for the degree of asymmetry) are centred on zero. This means that blood pulsations are symmetrical and synchronous in the right and left cheeks for most healthy subjects, with equally probable deviations to the right or left side depending on the physiological features of the subject. In contrast, we observed a significant increase of both the degree of asynchronicity and the degree of asymmetry when the subject is in a recumbent position. It is clearly seen in Fig. 4b,d that the histograms for the left and right decubitus (shown by red and blue bins, respectively) are much wider than for the sedentary position, and their distributions are not centred at zero. The median degree of PTT asynchronicity in the right decubitus is −0.18, whereas in the left decubitus it is 0.15. Recalling that the left cheek is up in the right decubitus while the right cheek is up in the left decubitus, and using the SPTT definition in Eq. 3, we conclude that the PTT measured in the upper cheek is usually longer than that in the lower one for either recumbent position. The PTT asynchronicity can be clearly seen in Fig. 2d,f as well. For all subjects in recumbent positions, the mean PTT measured in the upper cheek was 133 ± 22 ms in contrast with 112 ± 21 ms in the lower cheek, P < 0.001.
The spread of the SBPA histogram (Fig. 4d) is even larger than that of SPTT, showing a median degree of asymmetry of −0.39 in the right decubitus and 0.35 in the left decubitus. Therefore, the amplitude of blood pulsations is with higher probability larger in the upper cheek in either recumbent position, as clearly seen in the BPA maps in Fig. 2a,c. In recumbent positions, the mean BPA in the upper cheek was 1.51 ± 0.57% versus 1.07 ± 0.43% in the lower cheek, P < 0.001.
## Discussion
According to the recent model proposed by our group28,29, interaction of polarized green light with the skin integument allows assessment of the erythrocyte speed in capillaries of the papillary layer. Recording the pulsatile components originating from the pulse wave arriving at the facial area allows us to map the spatial distribution of both the amplitude and the dynamic parameters of the capillary blood flow28. Due to the described technological advances, these maps were calculated with high spatial and temporal resolution. It is noteworthy that synchronization of the peripheral pulse wave with the ECG allowed us to measure PTT with high accuracy in regions which were previously inaccessible for such measurements. At the same time, blood flow changes in the pool of carotid arteries are the most informative in the prognosis estimation for patients with cardiovascular disease. Moreover, our approach is capable of carrying out dynamic observations of variations in vascular parameters, which makes it an indispensable tool in the study of the physiological mechanisms of blood-circulation regulation.
We assume that the PPG waveform under green illumination originates from the upper capillary level28,29. Considering the small length of capillaries (about 1 mm), the time delay of the pulse wave between the arteriole and the capillary is negligible compared to the time needed to reach the arteriole. Therefore, modulation of the erythrocyte speed in capillaries describes the shape of the pulse wave at the point of measurement accurately enough. In recent comparative measurements of erythrocyte speed and PPG waveforms, a high correlation between these waveforms was reported28, suggesting that the delay between them is small. However, one may assume the existence of an additional time delay between the erythrocyte speed and the light-intensity modulation due to the still-debated mechanism of light-intensity modulation in capillaries. Therefore, a more detailed study is needed to understand the mechanism of light modulation more deeply.
Our experiments have shown that the camera-based PPG technique is capable of measuring the PTT between the heart and the face. Therefore, we demonstrated the feasibility of assessing the velocity of the pulse wave propagating in the pool of carotid arteries. This provides considerable advantages over other technologies in estimating the PWV parameter. Moreover, the ability of the method to assess physiological regulation of the peripheral blood flow (particularly, as a response to gravitational changes) has been demonstrated.
The observed decrease of PTT in the recumbent position (Fig. 3a) corresponds to an increase of the pulse wave velocity, assuming that the distance of pulse-wave propagation from the heart to the face changes negligibly after the position change. The corresponding increase of PWV can be explained by an increase of the systolic BP in the recumbent position due to gravitational effects. The position of the head with respect to the heart is on average 20 cm higher in the sedentary position than in the recumbent one. This difference of hydrostatic pressure corresponds to 15 mmHg. Therefore, we may assume that the systolic BP in the sedentary position is smaller than in the recumbent one. This assumption is supported by our oscillometric measurements of BP carried out with 10 selected participants in the lateral decubitus and supine positions. In the decubitus, the cuff on the shoulder was above the heart level by 15 cm on average, whereas in the supine position the cuff was at the heart level. The mean systolic BP was 109 ± 17 mmHg in the decubitus and 121 ± 12 mmHg in the supine position, showing the smaller BP at the higher position with the expected difference of 12 mmHg. However, the relative change of BP (10–13%) was smaller than the measured change of PTT. According to the histogram of Fig. 3a, PTT decreases by more than 41 ms (which corresponds to more than a 25% relative change) in 44% of the studied subjects. Moreover, the dispersion of the hydrostatic pressure difference (related to the subjects’ height of 173.9 ± 7.4 cm, a relative dispersion of 4.3%) was much smaller than the observed dispersion of the PTT difference, 37.9 ± 16.2 ms, a relative dispersion of 42.7%.
The hydrostatic pressure difference can be estimated more accurately between the big ROIs in the right and left cheeks in the decubitus position, where the height difference is about 7.5 cm on average. It corresponds to a pressure difference of 5.7 mmHg, which is less than a 5% relative change of the systolic blood pressure. Lower BP in the upper cheek leads to a longer PTT, which is in accordance with the observed asynchronicity of blood pulsations, see Figs 1 and 4b. However, the mean degree of asynchronicity in the decubitus position is 0.181, corresponding to an 18.1% relative change of PTT. Moreover, in 22% of subjects the relative change of PTT in decubitus exceeded 31%, which is much higher than the expected hydrostatic pressure difference.
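The hydrostatic estimates above follow from Δp = ρgh, with 1 mmHg ≈ 133.3 Pa. A quick numerical check (the blood density is our assumption) reproduces the quoted ~15 mmHg and ~5.7 mmHg to within the uncertainty of ρ:

```python
RHO_BLOOD = 1060.0     # kg/m^3, typical blood density (assumed)
G = 9.81               # m/s^2
PA_PER_MMHG = 133.322  # Pa per mmHg

def hydrostatic_mmhg(height_m):
    """Hydrostatic pressure difference over a vertical blood column."""
    return RHO_BLOOD * G * height_m / PA_PER_MMHG

head_heart = hydrostatic_mmhg(0.20)    # ≈ 15.6 mmHg (text quotes ~15)
cheek_cheek = hydrostatic_mmhg(0.075)  # ≈ 5.8 mmHg (text quotes ~5.7)
```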
As commonly accepted in photoplethysmography30,31, the parameter BPA is the fraction of the pulsatile variations over the mean signal intensity (see the next section, “Methods”). Therefore, a change of this parameter in different body positions might be attributed to either pulsatile or mean-signal variations. In the current experiments, it was technically difficult to keep the same intensity of illumination on the cheeks when the subject changed position. Nevertheless, a recent study of BPA evolution during capsicum plaster application (with stable skin illumination by green light) revealed that the relative increase of the pulsatile component, due to the opening of precapillary sphincters caused by capsaicin, is 25 times higher than the concomitant decrease of the mean signal32. Therefore, we suppose that in our case, too, a change of the pulsatile component is the main reason for the BPA alteration.
In view of these findings, a possible explanation of the position-dependent dynamics of the PTT and BPA parameters by a change in the tone of the sympathetic nervous system does not seem convincing. First, the sympathetic vascular tone varies symmetrically on both sides of the body. Second, the tone diminishes in the decubitus position, resulting in a deceleration of the pulse wave33, whereas we observed a significant acceleration of PWV: 3.38 ± 0.83 m/s in the recumbent positions versus 2.54 ± 0.36 m/s in the sedentary, P < 0.001. Therefore, we hypothesize that the observed changes of PTT and BPA are caused by two different factors: (i) variations of arterial blood pressure due to hydrostatic pressure changes, and (ii) a secondary reaction of blood vessels in response to these variations. Both factors can affect the measured parameters nonlinearly.
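As a plausibility check, PWV = L/PTT applied to the cohort-mean values quoted above implies nearly the same effective heart-to-face path length in both positions. The path length itself is not reported in the text, so this back-calculation is our own:

```python
# Cohort means from the text: PWV in m/s, PTT in s
path_sedentary = 2.54 * 0.160  # implied heart-to-face path, ≈ 0.41 m
path_recumbent = 3.38 * 0.124  # implied heart-to-face path, ≈ 0.42 m
# The two implied path lengths agree to within ~1 cm, as expected if the
# propagation distance is essentially position-independent.
```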
Our experiments demonstrated that the camera-based PPG system is capable of dynamically observing how vascular parameters change under the influence of various physiological impacts. In particular, a gravitational dependence of the pulse waves reaching the head has been revealed. On the one hand, the observed variations of PTT with changes of body position should be taken into account while calibrating future noncontact systems of BP monitoring that use PTT. On the other hand, a camera-based PPG system could be used as a simple and robust device for assessment of the vasomotor regulation mechanisms. This non-invasive and simple-to-implement technique can be useful for (i) significantly facilitating population studies of vascular stiffness, (ii) clarifying the mechanisms of vascular tone regulation in the carotid basin, and (iii) assessing the regression of indicator deviations from the norm caused by therapeutic and surgical interventions. Moreover, the proposed concept will have important applications since it allows the development of non-invasive medical equipment capable of solving a wide range of scientific and practical problems related to vascular physiology.
## Methods and Subjects
### Participants
The study involved 73 apparently healthy subjects (33 females and 40 males) between 18 and 65 years of age (29.2 ± 12.0 years). Persons with any neurological, cardiovascular or skin diseases were not invited to participate in this study. The study was conducted in accordance with the ethical standards presented in the 2013 Declaration of Helsinki. The study plan was approved by the research ethics committee of the Almazov National Medical Research Centre prior to the experiments. All subjects provided informed consent in written form for participation in the experiment and for the publication of identifying information/images in an online open-access publication.
### Measurement system
A custom-made camera-based PPG system was used to collect data from the facial area of a subject. The system consisted of a digital monochrome CMOS camera (8-bit model GigE uEye UI-5220SE of Imaging Development Systems GmbH) and an illuminator with a polarization filter. A photograph of the system is shown in Fig. 5. To illuminate the subject’s face, we designed and built a matrix holding 8 light-emitting diodes (LEDs) operating at a wavelength of 530 nm with a spectral bandwidth of 40 nm (green light) and a power of one watt per LED. Each LED was placed inside a separate parabolic mirror, which diminished the initial divergence of the light flux emerging from the LED (see Fig. 5b). All LEDs were assembled around the camera lens. The polarization filter consists of an inner circle and an outer part with mutually orthogonal transmission axes. Such polarization filtration reduces the skin specular reflections and the influence of motion artefacts on the detected PPG waveform34. Light emitted from the LEDs passes through the outer polarizing part, whereas the inner circle serves as a filter for the light received by the camera lens. The positions of the LEDs with parabolic mirrors were adjusted to provide uniform illumination of an area centred on the axis of observation at a distance between 0.2 and 1.5 meters from the camera lens. The whole system was aligned so that the perpendiculars to the surfaces of both cheeks and the forehead were as close to the optical axis of the camera lens as possible. In this geometry the light was directed almost orthogonally to the cheeks and forehead, thus diminishing the negative influence of ballistocardiographic effects31.
All videos were recorded at 39 frames per second with a pixel resolution of 752 × 480 and saved frame-by-frame in PNG format on the hard disk of a personal computer. The distance between the camera lens and the subject’s face was about 0.7 m. Experiments were carried out in a blacked-out laboratory room, ensuring that the intensity of the LED illumination at the subject’s face was at least ten times higher than the intensity of the ambient light. The ECG was recorded simultaneously with the video frames by a digital electrocardiograph. The synchronization accuracy between the electrocardiograph and the camera was better than 1 ms. For the ECG recordings, disposable electrodes were attached to the left and right wrists, with the reference electrodes on the legs.
### Study design
The video recording of each subject’s face was carried out in three different positions: one sedentary and two recumbent (right and left decubitus). In the sedentary position, the subject was asked to sit comfortably and lean his head on a properly adjusted support. In the recumbent positions, his head rested on a pillow. Video was recorded for 30–40 s in each position. After changing position, a subject relaxed for 5 minutes before the recording in the next position was started. Before the video and ECG recordings in each position, we measured the blood pressure (BP) of the participant with a conventional oscillometric cuff-based BP device (A&D Medical).
### Data processing
All recorded video frames were processed off-line using custom software implemented on the MATLAB® platform. First, we manually designated a symmetry line in the recorded image of the subject’s face and selected an area for analysis on the right side of the face. This area was completely covered by small regions of interest (ROI) of 7 × 7 pixels, which approximately corresponds to 2 × 2 mm2 in the facial area. Each ROI was chosen to have a common border with adjacent ROIs without overlapping. The positions of the ROIs in the left side of the face image were chosen to be symmetrical to the ROIs in the right side with respect to the symmetry line, as shown in Fig. 6a. Second, we calculated the PPG waveform as the frame-by-frame evolution of the average pixel value in every chosen ROI. An example of a raw waveform (without any filtering) is shown in Fig. 6a. Typically, it consists of an alternating component (AC), which follows the heartbeats, and a slowly varying DC component. Both components are proportional to the incident light intensity26. To compensate for unevenness of illumination, we calculated the AC/DC ratio, subtracted unity from the calculated ratio, and inverted the sign. These transformations are typical in photoplethysmography and make the waveform correlate positively with variations of arterial blood pressure29,30.
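The normalization described above can be sketched as follows (in Python rather than MATLAB). The moving-average DC estimator and its window length are our assumptions; the paper does not specify how the DC component was obtained:

```python
import numpy as np

def ppg_waveform(roi_mean, fps=39.0, dc_window_s=2.0):
    """Normalize a frame-by-frame ROI mean: divide by a running DC
    estimate, subtract unity, and invert the sign so that the waveform
    correlates positively with arterial blood pressure."""
    x = np.asarray(roi_mean, dtype=float)
    n = max(1, int(dc_window_s * fps))
    dc = np.convolve(x, np.ones(n) / n, mode="same")  # running DC estimate
    return -(x / dc - 1.0)
```

Note that with `mode="same"` the running average is biased near the ends of the record, so the first and last couple of seconds of the output should be discarded or the signal padded.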
The abovementioned steps are similar to the algorithm described in our recent paper27 dealing with PTT calculations from camera-based PPG signals. To further increase the reliability of the algorithm, we added here a pre-processing step to compensate for involuntary face motions during the video recordings. Considering that different parts of the face are displaced stochastically and heterogeneously, we divided the whole image into segments of 64 × 30 pixels and compensated the motion of each segment independently. We assumed that the signal variations have two components: the PPG part and the motion-related part. The motion-related component is proportional to the image gradient and the lateral offset. The lateral offset was estimated in every segment by an optical-flow algorithm using the gradient method35,36, and then the motion-related signal component was reconstructed and subtracted from the original signal.
All PPG waveforms were filtered to remove noise and DC components by means of a band-pass filter (0.12–20 Hz), applied with the filtfilt function in MATLAB to perform zero-phase digital filtering of the waveforms. An example of a filtered signal is shown in Fig. 6c along with the simultaneously recorded ECG signal (Fig. 6d). As seen, each oscillation of the waveform follows an R-peak of the ECG signal. These oscillations are plotted together in Fig. 6e by thin coloured lines so that each R-peak is at the beginning of the time scale. The thick greenish line in Fig. 6e shows the mean waveform obtained by averaging the filtered one-cardiac-cycle oscillations over the 30 s. The transit time of the pulse wave, PTT, was calculated as the time delay between the R-peak (zero of the abscissa axis in Fig. 6e) and the minimum of the mean waveform (yellow circle in Fig. 6e), because the latter corresponds to the beginning of the anacrotic wave with its fast blood-pressure increase. The amplitude of blood pulsations, BPA, is estimated as the difference between the maximum and minimum values of the mean PPG waveform (see Fig. 6e). The parameters PTT and BPA were calculated for each selected ROI, thus allowing us to map them over the image of the whole face.
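A minimal sketch of this pipeline, in Python rather than MATLAB. The Butterworth order and the beat-window length are our choices, and the 20 Hz band edge is clipped here because at 39 fps the Nyquist frequency is only 19.5 Hz:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def mean_beat(ppg, r_peaks, fs=39.0, band=(0.12, 20.0), win_s=0.6):
    """Zero-phase band-pass filtering and R-peak-triggered averaging;
    PTT = delay from the R-peak to the minimum of the mean one-cycle
    waveform, BPA = its peak-to-peak amplitude."""
    nyq = fs / 2.0
    hi = min(band[1], 0.9 * nyq)  # keep the upper edge below Nyquist
    b, a = butter(3, [band[0] / nyq, hi / nyq], btype="band")
    x = filtfilt(b, a, np.asarray(ppg, dtype=float))  # zero-phase filtering
    n = int(win_s * fs)
    beats = [x[p:p + n] for p in r_peaks if p + n <= len(x)]
    m = np.mean(beats, axis=0)                        # mean one-cycle waveform
    return np.argmin(m) / fs, m.max() - m.min()       # (PTT in s, BPA)

# Synthetic demo: a 1 Hz "pulse" whose minimum lags each R-peak by 0.15 s
fs = 39.0
t = np.arange(int(10 * fs)) / fs
demo = np.sin(2 * np.pi * (t + 0.6))          # minima at t = 0.15 + k
r_peaks = [int(fs) * k for k in range(1, 9)]  # beats at t = 1, 2, ... s
ptt_s, bpa = mean_beat(demo, r_peaks, fs=fs)  # PTT ≈ 0.15 s
```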
Depending on the heart rate, from 30 to 50 cardiac cycles were used to estimate the PTT and BPA parameters in each ROI of 7 × 7 pixels. The standard deviation of the mean PTT in each ROI varied from 20 to 40 ms. Consequently, the standard error of the PTT estimate was smaller by a factor of the square root of the number of cycles and varied from 3 to 8 ms. Similarly, the relative error of the BPA estimates was between 3.6 and 9.7%.
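The quoted error range follows from the usual standard-error formula; a quick check with the extreme values given above:

```python
import math

def standard_error(sd_ms, n_cycles):
    """Standard error of the mean PTT: per-cycle SD divided by sqrt(N)."""
    return sd_ms / math.sqrt(n_cycles)

best = standard_error(20, 50)    # lowest SD, most cycles
worst = standard_error(40, 30)   # highest SD, fewest cycles
```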
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References
1. Blacher, J., Asmar, R., Djane, S., London, G. M. & Safar, M. E. Aortic pulse wave velocity as a marker of cardiovascular risk in hypertensive patients. Hypertension 33, 1111–1117 (1999).
2. Bortolotto, L. A., Blacher, J., Kondo, T., Takazawa, K. & Safar, M. E. Assessment of vascular aging and atherosclerosis in hypertensive subjects: second derivative of photoplethysmogram versus pulse wave velocity. Am. J. Hypertens. 13, 165–171 (2000).
3. Gribbin, B., Steptoe, A. & Sleight, P. Pulse wave velocity as a measure of blood pressure change. Psychophysiology 13, 86–90 (1976).
4. Loukogeorgakis, S., Dawson, R., Philips, N., Martyn, C. N. & Greenwald, S. E. Validation of a device to measure arterial pulse wave velocity by a photoplethysmographic method. Physiol. Meas. 23, 581–596 (2002).
5. Vaitkevicius, P. V. et al. Effects of age and aerobic capacity on arterial stiffness in healthy adults. Circulation 88, 1456–1462 (1993).
6. Vermeersch, S. J. et al. Age and gender related patterns in carotid-femoral PWV and carotid and femoral stiffness in a large healthy, middle-aged population. J. Hypertens. 26, 1411–1419 (2008).
7. Nitzan, M., Babchenko, A. & Khanokh, B. Very low frequency variability in arterial blood pressure and blood volume pulse. Med. Biol. Eng. Comput. 37, 54–58 (1999).
8. Asmar, R. et al. Assessment of arterial distensibility by automatic pulse wave velocity measurement. Hypertension 26, 485–490 (1995).
9. Pereira, T., Correia, C. & Cardoso, J. Novel methods for pulse wave velocity measurement. J. Med. Biol. Eng. 35, 555–565 (2015).
10. Payne, R. A., Symeonidis, C. N., Webb, D. J. & Maxwell, S. R. J. Pulse transit time measured from the ECG: an unreliable marker of beat-to-beat blood pressure. J. Appl. Physiol. 100, 136–141 (2006).
11. Pitson, D., Sandell, A., van den Hout, R. & Stradling, J. Use of pulse transit time as a measure of inspiratory effort in patients with obstructive sleep apnoea. Eur. Respir. J. 8, 1669–1674 (1995).
12. Liang, Y.-L. et al. Non-invasive measurements of arterial structure and function: repeatability, interrelationships and trial sample size. Clin. Sci. 95, 669–679 (1998).
13. Wilkinson, I. B. et al. Reproducibility of pulse wave velocity and augmentation index measured by pulse wave analysis. J. Hypertens. 16, 2079–2084 (1998).
14. Zheng, Y., Poon, C. C. Y., Yan, B. P. & Lau, J. Y. W. Pulse arrival time based cuff-less and 24-h wearable blood pressure monitoring and its diagnostic value in hypertension. J. Med. Syst. 40, 195 (2016).
15. Chen, Y., Wen, C., Tao, G. & Bi, M. Continuous and noninvasive measurement of systolic and diastolic blood pressure by one mathematical model with the same model parameters and two separate pulse wave velocities. Ann. Biomed. Eng. 40, 871–882 (2012).
16. Nakano, K., Ohnishi, T., Nishidate, I. & Haneishi, H. Noncontact sphygmomanometer based on pulse-wave transit time between the face and hand. In Optical Diagnostics and Sensing XVIII: Toward Point-of-Care Diagnostics (ed. Coté, G. L.). Proc. SPIE 10501, 1050110 (2018).
17. Mukkamala, R. et al. Towards ubiquitous blood pressure monitoring via pulse transit time: theory and practice. IEEE Trans. Biomed. Eng. 62, 1879–1910 (2015).
18. Sharma, M. et al. Cuff-less and continuous blood pressure monitoring: a methodological review. Technologies 5, 21 (2017).
19. Obata, Y. et al. Noninvasive assessment of the effect of position and exercise on pulse arrival to peripheral vascular beds in healthy volunteers. Front. Physiol. 8, 47 (2017).
20. Hickey, M., Phillips, J. P. & Kyriacou, P. A. Investigation of peripheral photoplethysmographic morphology changes induced during a hand-elevation study. J. Clin. Monit. Comput. 30, 727–736 (2016).
21. Tripathy, A. et al. A pulse wave velocity based method to assess the mean arterial blood pressure limits of autoregulation in peripheral arteries. Front. Physiol. 8, 855 (2017).
22. Erts, R., Kukulis, I., Spigulis, J. & Kere, L. Dual channel photoplethysmography studies of cardio-vascular response to the body position changes. In Photon Migration and Diffuse-Light Imaging II (eds Licha, K. & Cubeddu, R.). Proc. SPIE 5859, 58591K (2005).
23. Xin, S.-Z. et al. Investigation of blood pulse PPG signal regulation on toe effect of body posture and lower limb height. J. Zhejiang Univ. - Sci. A Appl. Phys. Eng. 8, 916–920 (2007).
24. van Sloten, T. T. et al. Carotid stiffness is associated with incident stroke: a systematic review and individual participant data meta-analysis. J. Am. Coll. Cardiol. 66, 2116–2125 (2015).
25. Verkruysse, W., Svaasand, L. O. & Nelson, J. S. Remote plethysmographic imaging using ambient light. Opt. Express 16, 21434–21445 (2008).
26. Kamshilin, A. A., Miridonov, S., Teplov, V., Saarenheimo, R. & Nippolainen, E. Photoplethysmographic imaging of high spatial resolution. Biomed. Opt. Express 2, 996–1006 (2011).
27. Kamshilin, A. A. et al. Accurate measurement of the pulse wave delay with imaging photoplethysmography. Biomed. Opt. Express 7, 5138–5147 (2016).
28. Volkov, M. V. et al. Video capillaroscopy clarifies mechanism of the photoplethysmographic waveform appearance. Sci. Rep. 7, 13298 (2017).
29. Kamshilin, A. A. et al. A new look at the essence of the imaging photoplethysmography. Sci. Rep. 5, 10494 (2015).
30. Allen, J. Photoplethysmography and its application in clinical physiological measurement. Physiol. Meas. 28, R1–R39 (2007).
31. Moço, A. V., Stuijk, S. & de Haan, G. Motion robust PPG-imaging through color channel mapping. Biomed. Opt. Express 7, 1737–1754 (2016).
32. Kamshilin, A. A. et al. Novel capsaicin-induced parameters of microcirculation in migraine patients revealed by imaging photoplethysmography. J. Headache Pain 19, 43 (2018).
33. Kalfon, R., Campbell, J., Alvarez-Alvarado, S. & Figueroa, A. Aortic hemodynamics and arterial stiffness responses to muscle metaboreflex activation with concurrent cold pressor test. Am. J. Hypertens. 28, 1332–1338 (2015).
34. Sidorov, I. S., Volynsky, M. A. & Kamshilin, A. A. Influence of polarization filtration on the information readout from pulsating blood vessels. Biomed. Opt. Express 7, 2469–2474 (2016).
35. Horn, B. K. P. & Schunck, B. G. Determining optical flow. Artif. Intell. 17, 185–201 (1981).
36. Kearney, J. K., Thompson, W. B. & Boley, D. L. Optical flow estimation: an error analysis of gradient-based methods with local optimization. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-9, 229–244 (1987).
## Acknowledgements
The Russian Science Foundation (grant 15-15-20012) provided the financial support of this research. SVM is grateful to CONACYT, Mexico, for support of his sabbatical stay at the ITMO University.
## Author information
### Affiliations
1. #### Department of Computer Photonics and Videomatics, ITMO University, 49 Kronverksky Pr., 197101, St. Petersburg, Russia
• Alexei A. Kamshilin
• , Tatiana V. Krasnikova
• , Maxim A. Volynsky
• & Oleg V. Mamontov
2. #### Department of Circulation Physiology, Almazov National Medical Research Centre, 2 Akkuratova St., 197341, St. Petersburg, Russia
• Tatiana V. Krasnikova
• & Oleg V. Mamontov
3. #### Optics Department, Centro de Investigación Científica y de Educación Superior de Ensenada, 3918 Carretera Tijuana-Ensenada, 22860, Ensenada, Baja California, Mexico
• Serguei V. Miridonov
### Contributions
Study design: A.A.K. System design and fabrication: M.A.V., A.A.K. Data processing software design and implementation: A.A.K., S.V.M. Data acquisition: T.V.K., O.V.M., A.A.K. Analysis and interpretation of data: O.V.M., T.V.K., A.A.K. Drafting and revising the manuscript: A.A.K., O.V.M., S.V.M. All authors read and approved the final version of the manuscript.
### Competing Interests
The authors declare no competing interests.
### Corresponding author
Correspondence to Alexei A. Kamshilin.
http://randomservices.org/random/apps/CauchyExperiment.html | ### Cauchy Experiment
#### Description
A light source is located $$b$$ units directly across from position $$a$$ on an infinite, straight wall. The random experiment consists of shining the light on the wall at an angle $$\Theta$$ with the perpendicular, that is uniformly distributed on the interval $$\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$$. The position $$X = a + b \tan(\Theta)$$ of the light beam on the wall has the Cauchy distribution with location parameter $$a$$ and scale parameter $$b$$. On each run of the experiment, the angle $$\Theta$$ and the position $$X$$ are recorded in the data table. The probability density function of $$X$$ is shown in blue in the distribution graph and is recorded in the distribution table. When the experiment runs, the empirical density function is shown in red in the distribution graph and is recorded in the distribution table. The parameters $$a$$ and $$b$$ can be varied with the input controls.
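The experiment is straightforward to simulate (the function and parameter names below are mine, not part of the app). With $$\Theta$$ uniform on $$\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$$, the sample median of $$X$$ estimates the location $$a$$ and the quartiles estimate $$a \pm b$$; the Cauchy distribution has no mean:

```python
import numpy as np

def cauchy_experiment(a, b, n, rng):
    """Run the light-on-a-wall experiment n times: draw a uniform angle Theta
    and return the beam position X = a + b * tan(Theta)."""
    theta = rng.uniform(-np.pi / 2, np.pi / 2, size=n)
    return a + b * np.tan(theta)

rng = np.random.default_rng(0)
x = cauchy_experiment(a=2.0, b=1.0, n=200_000, rng=rng)
# Robust statistics recover the parameters; the sample mean would not converge.
```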
http://mathhelpforum.com/math-software/131095-matlab-loops.html | # Math Help - matlab for loops?
1. ## matlab for loops?
So far i have this
VEC = input('Please type a value that is > 9 and < 21:')
if VEC < 10 || VEC > 21
error('Incorrect Input... Quitting!')
Vec1 = input('Type in a value for the first element:')
But not sure if I'm on the right track.. Could someone help me with this problem
2. Originally Posted by Mathhelpz
So far i have this
VEC = input('Please type a value that is > 9 and < 21:')
if VEC < 10 || VEC > 21
error('Incorrect Input... Quitting!')
Vec1 = input('Type in a value for the first element:')
But not sure if I'm on the right track.. Could someone help me with this problem
Have you tested it?
CB
3. yea,
this is my script so far
% initialize the vector
VEC = [];
n = 1;
VEC_length = input('Please type a value that is > 9 and < 21:');
if VEC_length < 10 || VEC_length > 21
error('Incorrect Input... Quitting!')
end
Vec1 = input('Type in a value for the first element:')
% Add first element to the vector
VEC(n)=Vec1;
% Use a for loop to add more number to the vector
for i=1:VEC_length
n=n+1;
VEC(n)= Vec1 +i;
end
I don't really know what to do on step 4? COuld you help me
4. Originally Posted by Mathhelpz
yea,
this is my script so far
% initialize the vector
VEC = [];
n = 1;
VEC_length = input('Please type a value that is > 9 and < 21:');
if VEC_length < 10 || VEC_length > 21
error('Incorrect Input... Quitting!')
end
Vec1 = input('Type in a value for the first element:')
% Add first element to the vector
VEC(n)=Vec1;
% Use a for loop to add more number to the vector
for i=1:VEC_length
n=n+1;
VEC(n)= Vec1 +i;
end
I don't really know what to do on step 4? COuld you help me
This is wrong:
Code:
VEC_length = input('Please type a value that is > 9 and < 21:');
if VEC_length < 10 || VEC_length > 21
error('Incorrect Input... Quitting!')
end
Try inputting a value of 21
CB
5. Originally Posted by Mathhelpz
I don't really know what to do on step 4? COuld you help me
Code:
if VEC(1)>=25
VEC(1)=0;
end
for k=2:VEC_LENGTH
VEC(k)=VEC(k-1)+1;
if VEC(k)>=25
VEC(k)=0;
end
end
CB
6. thanks for that, i almost finised it =D
https://math.stackexchange.com/questions/595482/is-an-equivalence-an-adjunction | # Is an equivalence an adjunction?
Let $C$ and $D$ be categories and $F:C\to D$, $G:D\to C$ two functors.
$F$ is left-adjoint to $G$, if there are natural transformations $\eta:id_C\to GF$ and $\epsilon:FG\to id_D$ such that \begin{eqnarray} F&\xrightarrow{F\eta}&FGF&\xrightarrow{\epsilon F}F\\ G&\xrightarrow{\eta G}&GFG&\xrightarrow{G\epsilon}G \end{eqnarray} are the identity (!) transformations.
$F$ is an equivalence of categories (with inverse $G$) if there are natural isomorphisms $\eta:id_C\to GF$ and $\epsilon:FG\to id_D$ without any further properties.
Is $F$ left-adjoint to $G$, if $F$ is an equivalence of categories (with inverse $G$)?
If not, suppose that $F$ is an equivalence of categories with inverse $G':D\to C$ and suppose further that $F$ is left-adjoint to $G$. Does it follow that there is a natural isomorphism $G\to G'$ or is there even an identity $G=G'$?
• The answer to the first question is "yes", $F$ is both left-adjoint and right-adjoint to $G$. The answer to the second question is "yes, there exists a natural isomorphism $G\to G'$, but it is not necessarily an identity". Dec 6, 2013 at 14:47
• Suppose we have already shown that the natural iso $\eta : 1_{\mathbf{C}} \to GF$ witnessing the equivalence is the unit of the desired adjunction. Why can't the counit just be the other iso $\epsilon : FG \to 1_{\mathbf{D}}$ of the equivalence in general? It satisfies the UMP of the counit since for each $g: FC \to D$, we have $\epsilon_D \circ (\epsilon_D^{-1} \circ g) = g$ and $\epsilon_D^{-1} \circ g = Ff$ for some unique $f : C \to GD$, since we can show that $F$ is fully faithful. What is wrong with this argument? Jan 28, 2016 at 7:35
https://arthurdouillard.com/post/efficient-graph-based-segmentation/ | # Efficient Graph-Based Segmentation
This post contains the notes taken from reading of the following paper:
I was also helped by the slides of Stanford’s CS231b.
Fast-RCNN was the state-of-the-art algorithm for object detection in 2015; its object proposals were generated by Selective Search, which itself relies on Efficient Graph-Based Segmentation.
The reason this segmentation method was still useful almost 10 years later is that the algorithm is fast while remaining effective. Its goal is to segment the objects in an image.
# A Graph-Based Algorithm
The algorithm sees an image as a graph, with every pixel as a vertex. Producing a good segmentation of an image is thus equivalent to finding communities in a graph.
What separates two communities of pixels is a boundary, placed where similarity ends and dissimilarity begins. A segmentation that is too fine separates communities with no real boundary between them; one that is too coarse leaves communities that should be split.
The authors of the paper argue that their algorithm always finds the right segmentation, neither too fine nor too coarse.
# Predicate Of A Boundary
The authors define their algorithm with a predicate $D$ that measures dissimilarity: the predicate takes two components and returns true if a boundary exists between them. A component is a set of one or more vertices.
With $C1$ and $C2$ two components:
$$D(C1, C2) = \begin{cases} true & \text{if } \text{Dif}(C1, C2) > \text{MInt}(C1, C2)\newline false & \text{otherwise} \end{cases}$$
With:
$$\text{Dif}(C1, C2) = \min_{\substack{v_i \in C1, v_j \in C2 \newline (v_i, v_j) \in E_{ij}}} w(v_i, v_j)$$
The function $Dif(C1, C2)$ returns the minimum weight $w(.)$ of an edge that connects a vertex $v_i$ to a vertex $v_j$, each of them lying in a different component. $E_{ij}$ is the set of edges connecting vertices of $C1$ to vertices of $C2$. This function $Dif$ measures the difference between two components.
And with:
$$\text{MInt}(C1, C2) = min (\text{Int}(C1) + \tau(C1), \text{Int}(C2) + \tau(C2))$$
$$\tau(C) = \frac{k}{|C|}$$
$$\text{Int}(C) = \max_{\substack{e \in \text{MST}(C, E)}} w(e)$$
The function $\text{Int}(C)$ returns the maximum weight of an edge connecting two vertices in the Minimum Spanning Tree (MST) of a single component. Looking only at the MST considerably reduces the number of edges to consider: a spanning tree has $n - 1$ edges instead of the $\frac{n(n - 1)}{2}$ total edges. Moreover, using the minimum spanning tree, and not just any spanning tree, allows components with high internal variability to grow while still growing progressively. This function $\text{Int}$ measures the internal difference of a component. A low $\text{Int}$ means that the component is homogeneous.
The function $\tau(C)$ is a threshold function, that imposes a stronger evidence of boundary for small components. A large $k$ creates a segmentation with large components. The authors set $k = 300$ for wide images, and $k = 150$ for detailed images.
Finally $\text{MInt}(C1, C2)$ is the minimum of internal difference of two components.
To summarize the predicate $D$: A large difference between two internally homogeneous components is evidence of a boundary between them. However, if the two components are internally heterogeneous it would be harder to prove a boundary. Therefore details are ignored in high-variability regions but are preserved in low-variability regions:
Notice how the highly-variable grass is correctly segmented while details like numbers on the back of the first player are preserved.
# Different Weight Functions
The predicate uses a function $w(v_i, v_j)$ that measures the edge’s weight between two vertices $v_i$ and $v_j$.
The authors provide two alternatives for this weight function:
## Grid Graph Weight
To correctly use this weight function, the authors smooth the image using a Gaussian filter with $\sigma = 0.8$.
The Grid Graph Weight function is:
$$w(v_j, v_i) = |I(p_i) - I(p_j)|$$
It is the intensity’s difference of the pixel neighbourhood. Indeed, the authors choose to not only use the pixel intensity, but also its 8 neighbours.
The intensity is the pixel-value of the central pixel $p_i$ and its 8 neighbours.
Using this weight function, they run the algorithm three times (once each for red, green, and blue) and take the intersection of the three segmentations as the result.
## Nearest Neighbours Graph Weight
The second weight function is based on the Approximate Nearest Neighbours Search.
It finds a good approximation of the nearest neighbours of each pixel. The feature space combines the spatial coordinates and the pixel's RGB values:
Feature space = $(x, y, r, g, b)$.
# The Actual Algorithm
Now that every sub-function of the algorithm has been defined, let’s see the actual algorithm:
For the Graph $G = (V, E)$ composed of the vertices $V$ and the edges $E$, and a segmentation $S = (C_1, C_2, …)$:
1. Sort E into $\pi$ = ($o_1$, …, $o_m$) by increasing edge weight order.
2. Each vertex starts alone in its own component. This is the initial segmentation $S^0$.
3. For $q = 1, …, m$:
• Current segmentation is $S^q$
• ($v_i$, $v_j$) $= o_q$
• If $v_i$ and $v_j$ are not in the same component, and the predicate $D(C_i^{q - 1}, C_j^{q - 1})$ is false then:
• Merge $C_i$ and $C_j$ into a single component.
4. Return $S^m$.
The superscript $q$ in $S^q$ or $C_x^q$ simply denotes the state of the segmentation or of the component at step $q$ of the algorithm.
Basically, the algorithm performs a bottom-up merging, starting from individual pixels, into larger and larger components. At the end, the segmentation $S^m$ will be neither too fine nor too coarse.
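The merging loop above is naturally implemented with a union-find (disjoint-set) structure. The sketch below is my own illustration (graph construction and the weight functions are omitted); it tracks $\text{Int}(C)$ incrementally, which works because edges are processed in increasing weight order:

```python
class DisjointSet:
    """Union-find tracking, for each component, its size and its internal
    difference Int(C), i.e. the largest edge weight used to build it."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n   # Int(C): max MST edge weight so far

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, a, b, w):
        a, b = self.find(a), self.find(b)
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.parent[b] = a
        self.size[a] += self.size[b]
        self.internal[a] = w        # valid: edges arrive in increasing order

def segment(n_vertices, edges, k):
    """edges: list of (w, i, j) tuples; returns the final union-find."""
    ds = DisjointSet(n_vertices)
    for w, i, j in sorted(edges):                 # step 1: sort by weight
        a, b = ds.find(i), ds.find(j)
        if a == b:
            continue
        tau = lambda c: k / ds.size[c]            # threshold function
        mint = min(ds.internal[a] + tau(a), ds.internal[b] + tau(b))
        if w <= mint:                             # predicate D is false
            ds.union(a, b, w)
    return ds
```

With a small $k$, a single heavy edge between two internally homogeneous groups is left as a boundary.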
# Conclusion
As you have seen, the algorithm of this paper is quite simple. What makes it efficient is the chosen metrics and the predicate defined beforehand.
If you have read until the bottom of the page, congrats! To thank you, here are some demonstrations by the authors:
http://www.mpim-bonn.mpg.de/de/node/9327 | # Pressure forms on pure imaginary directions
Speaker:
Andres Sambarino
Affiliation:
Sorbonne Université Paris/CNRS
Date:
Thu, 2019-05-16 16:30 - 17:30
Location:
MPIM Lecture Hall
Anosov groups are a class of discrete subgroups of semi-simple algebraic groups
analogous to what are known as *convex-co-compact groups* in negative curvature.
Thermodynamical constructions equip the (regular points of the) moduli space of
Anosov representations from $\Gamma$ to $G$ with natural positive semi-definite
bi-linear forms, known as pressure forms. Determining whether such a pressure form
is Riemannian requires non-trivial work.
The purpose of the lecture is to explain some geometrical meaning of these forms,
via a higher rank version of a celebrated result for quasi-Fuchsian space by
Bridgeman-Taylor and McMullen on the Hessian of Hausdorff dimension on pure bending
directions. This is work in collaboration with M. Bridgeman, B. Pozzetti and A.
Wienhard.
© MPI f. Mathematik, Bonn
https://astarmathsandphysics.com/university-maths-notes/elementary-calculus/3993-integral-considered-as-a-potential-function.html | ## Integral Considered as a Potential Function
To find
$\int_S \frac{\alpha}{\sqrt{(x-x_0)^2 +(y-y_0)^2 +(z-z_0)^2}}\,dS$
, the surface being any surface whatsoever with the point
$(x_0 , y_0, z_0 )$
inside it we can consider the integral as a potential function. If we treat the surface as a conducting surface, it will be an equipotential, and the potential inside it will be constant.
If the surface
$S$
is a sphere of radius
$r$
centred on the point
$(x_0 , y_0, z_0 )$
, then every point of the surface is at distance $r$ from that point and the surface element is $dS = r^2 \sin \theta \, d\theta \, d\phi$, so
$\int^{\pi}_0 \int^{2\pi}_0 \frac{\alpha}{r} r^2 \sin \theta \, d\phi \, d\theta = 2 \pi \alpha r \int^{\pi}_0 \sin \theta \, d\theta = 2 \pi \alpha r \left[- \cos \theta \right]^{\pi}_0 = 2\pi \alpha r (-\cos \pi - (-\cos 0)) = 4 \pi \alpha r$
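The value $4 \pi \alpha r$ can be checked numerically with a quick midpoint-rule sum (the function name is illustrative):

```python
import math

def potential_integral(alpha, r, n=2000):
    """Midpoint-rule approximation of the surface integral of alpha/distance
    over a sphere of radius r centred on the singular point: the integrand is
    the constant alpha/r and the area element is r^2 sin(theta) dtheta dphi."""
    dtheta = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta
        total += (alpha / r) * r ** 2 * math.sin(theta) * dtheta * 2 * math.pi
    return total
```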
https://socratic.org/questions/what-is-dynamic-equilibrium-1 | # What is dynamic equilibrium?
May 2, 2018
A state of equilibrium in which the forward and backward reactions are occurring at the same rate with no net change.
#### Explanation:
To illustrate dynamic equilibrium, let's take a look at this reaction:
${N}_{2} \left(g\right) + 3 {H}_{2} \left(g\right) \rightleftharpoons 2 N {H}_{3} \left(g\right)$
In this reaction, nitrogen gas and hydrogen gas are in a dynamic equilibrium with ammonia gas.
• When ${N}_{2}$ and ${H}_{2}$ are first placed into a reaction vessel, they will begin to react to form $N {H}_{3}$. The rate of the forward reaction, ${N}_{2} \left(g\right) + 3 {H}_{2} \left(g\right) \to 2 N {H}_{3} \left(g\right)$, is high.
• However, eventually, $N {H}_{3}$ will start to reform ${N}_{2}$ and ${H}_{2}$.
The rate of the backward reaction, $2 N {H}_{3} \left(g\right) \to {N}_{2} \left(g\right) + 3 {H}_{2} \left(g\right)$, begins to rise.
• Eventually, the rates of the two reactions will be the same. Equilibrium has been reached.
We should remember, though, that this is a dynamic equilibrium!
This means that, although it may seem like nothing is happening (because the concentrations of the reactants and products essentially stay constant), ${N}_{2}$ and ${H}_{2}$ are still constantly forming $N {H}_{3}$. $N {H}_{3}$ is also constantly reforming into ${N}_{2}$ and ${H}_{2}$.
It's just that the rates at which they do this are the same.
So, although reactions are occurring, there is no net change.
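This can be illustrated with a toy simulation (the rate constants, initial concentrations, and simple mass-action rate laws below are made up for illustration; real ammonia kinetics are more complicated):

```python
def simulate_equilibrium(kf, kr, n2, h2, nh3, dt=0.01, steps=20000):
    """Euler integration of toy mass-action kinetics for N2 + 3 H2 <=> 2 NH3.
    Returns the final concentrations and the final forward/backward rates."""
    for _ in range(steps):
        forward = kf * n2 * h2 ** 3    # rate of the forward reaction
        backward = kr * nh3 ** 2       # rate of the backward reaction
        net = forward - backward
        n2 -= net * dt
        h2 -= 3 * net * dt
        nh3 += 2 * net * dt
    return n2, h2, nh3, forward, backward

n2, h2, nh3, fwd, bwd = simulate_equilibrium(1.0, 1.0, n2=1.0, h2=1.0, nh3=0.0)
# At equilibrium both reactions still run at positive rates, but the rates
# are equal, so the concentrations no longer change.
```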
https://www.projecteuclid.org/euclid.aos/1017939241 | ## The Annals of Statistics
### Adaptive model selection using empirical complexities
#### Abstract
Given $n$ independent replicates of a jointly distributed pair $(X, Y) \in \mathscr{R}^d \times \mathscr{R}$, we wish to select from a fixed sequence of model classes $\mathscr{F}_1, \mathscr{F}_2,\dots$ a deterministic prediction rule $f: \mathscr{R}^d \to \mathscr{R}$ whose risk is small. We investigate the possibility of empirically assessing the complexity of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk.
The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover.
Finite sample performance bounds are established for the estimates, and these bounds are applied to several nonparametric estimation problems. The estimates are shown to achieve a favorable trade-off between approximation and estimation error and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension.
For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
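Schematically, the final selection step can be sketched as follows (illustrative Python only, not the authors' code; the risk and complexity numbers are made-up stand-ins for the empirical quantities defined in the paper):

```python
# Complexity-penalized selection: one candidate rule per model class,
# each paired with its empirical risk and its empirically estimated complexity.
candidates = [
    ("class_1", 0.40, 0.01),  # small class: high risk, tiny penalty
    ("class_2", 0.25, 0.05),
    ("class_3", 0.22, 0.30),  # rich class: low risk, large penalty
]

def select(candidates):
    """Return the candidate minimizing empirical risk + complexity penalty."""
    return min(candidates, key=lambda c: c[1] + c[2])

print(select(candidates)[0])  # class_2: the best risk/complexity trade-off
```

The point of the paper is that the penalty term can be read off the data (via the size of the empirical cover) rather than fixed in advance.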
#### Article information
Source
Ann. Statist., Volume 27, Number 6 (1999), 1830-1864.
Dates
First available in Project Euclid: 4 April 2002
https://projecteuclid.org/euclid.aos/1017939241
Digital Object Identifier
doi:10.1214/aos/1017939241
Mathematical Reviews number (MathSciNet)
MR1765619
Zentralblatt MATH identifier
0962.62034
#### Citation
Lugosi, Gábor; Nobel, Andrew B. Adaptive model selection using empirical complexities. Ann. Statist. 27 (1999), no. 6, 1830--1864. doi:10.1214/aos/1017939241. https://projecteuclid.org/euclid.aos/1017939241
https://mathematica.stackexchange.com/questions/91627/how-to-transform-an-image-into-a-probability-density-function

# How to transform an image into a probability density function?
I'd like to use the Metropolis algorithm to randomly generate points with density according to the brightness of an image. I just need to transform a binary image into a pdf function to use this answer.
First attempt:
img = ColorNegate@ColorConvert[ImageResize[lena[], 50], "Grayscale"];
dims = ImageDimensions[img];
data = Flatten[Table[{{i, j}, PixelValue[img, {i, j}]}, {i, dims[[1]]}, {j, dims[[2]]}], 1];
f = Interpolation[data]
ContourPlot[f[x, y], {x, 1, dims[[1]]}, {y, 1, dims[[2]]}]
Metropolis /:
RandomDistributionVector[
Metropolis[pdf_, u0_, s_: 1, n0_: 100, chains_: 200], n_Integer,
prec_?Positive] :=
Module[{u, du, p, p1, accept, cpdf},
cpdf = Compile @@ {{#, _Real} & /@ #, pdf @@ #,
RuntimeAttributes -> {Listable}, RuntimeOptions -> "Speed"} &[
Unique["x", Temporary] & /@ u0];
u = ConstantArray[u0, chains];
p = cpdf @@ Transpose[u];
(Join @@
Table[du =
RandomVariate[
NormalDistribution[0, s], {chains, Length[u0]}];
p1 = cpdf @@ Transpose[u + du];
accept = UnitStep[p1/p - RandomReal[{0, 1}, chains]];
p += (p1 - p) accept;
u += du accept, {Ceiling[(n0 + n)/chains]}])[[n0 + 1 ;;
n0 + n]]];
p = RandomVariate[Metropolis[f, {25, 25}], 30000];
ListPlot[p, AspectRatio -> Automatic]
• Why not use HistogramDistribution[] or SmoothKernelDistribution[]? Related to this, look up the docs for ImageHistogram[]. – J. M.'s ennui Aug 14 '15 at 15:35
• @Guesswhoitis. There are no examples in the Docs for a 3d histogram from a black and white image... – M.R. Aug 14 '15 at 15:58
• If the intensity represents the PDF magnitude and not the raw data then HistogramDistribution and similar are not what you want. – rhermans Aug 14 '15 at 16:03
If you don't care about the algorithm and only want to sample points with density according to image brightness, you could just use RandomChoice:
Using a test image that looks a little bit like a PDF:
img = Image[
Rescale[Array[
Sin[#1^2]*Cos[#2 + Sin[#1/5]] + Exp[-(#1^2 + #2^2)/2] &, {512,
512}, {{-2., 4.}, {-3., 3.}}]]];
I can then sample random pixel indices weighted by the corresponding pixel brightness:
weights = Flatten[ImageData[img]];
sample = RandomChoice[weights -> Range[Length[weights]], 10000];
And convert indices back to coordinates:
{w, h} = ImageDimensions[img];
pts = Transpose[{Mod[sample, w], h - Floor[sample/N[w]]}];
(this is blazingly fast. Sampling 10^6 points takes about 0.2 seconds.)
Show[img, Graphics[{Red, PointSize[Small], Opacity[.5], Point[pts]}]]
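The same weighted-sampling idea carries over directly to Python/NumPy; this is an illustrative sketch, not part of the original answer, and `img_data` stands in for a 2-D grayscale pixel array:

```python
import numpy as np

def sample_pixels(img_data, n, rng=None):
    """Draw n (row, col) indices with probability proportional to pixel brightness."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img_data.shape
    weights = img_data.ravel().astype(float)
    flat = rng.choice(h * w, size=n, p=weights / weights.sum())
    return np.column_stack(np.unravel_index(flat, (h, w)))

# Toy check: all the weight on one pixel means every sample lands there.
img_data = np.zeros((4, 4))
img_data[2, 3] = 1.0
pts = sample_pixels(img_data, 100)
```

As in the Mathematica version, the only real work is normalizing the brightness values into a probability vector and mapping flat indices back to 2-D coordinates.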
ADD: If you really want to use the MH algorithm, why not use ImageValue directly, instead of creating an interpolation function?
pdf = Function[{x, y}, ImageValue[img, {x, y}]];
ContourPlot[pdf[x, y], {x, 0, 511}, {y, 0, 511}]
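For readers who do want the Metropolis route outside Mathematica, the same random-walk update (propose `u + du`, accept with probability `p1/p`) can be sketched in Python. This is a minimal single-chain version added for illustration, not a translation of the compiled multi-chain code above:

```python
import numpy as np

def metropolis(pdf, x0, n, step=1.0, burn=100, rng=None):
    """Random-walk Metropolis: propose x + N(0, step), accept w.p. pdf(new)/pdf(old)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    p = pdf(x)
    samples = []
    for _ in range(burn + n):
        cand = x + rng.normal(0.0, step, size=x.shape)
        p_cand = pdf(cand)
        if p * rng.random() < p_cand:   # accept with probability p_cand / p
            x, p = cand, p_cand
        samples.append(x.copy())
    return np.asarray(samples[burn:])

# Example: sample an (unnormalized) standard 2-D Gaussian density.
pts = metropolis(lambda v: np.exp(-0.5 * (v ** 2).sum()),
                 x0=[0.0, 0.0], n=5000, rng=np.random.default_rng(0))
```

Note that the pdf only needs to be known up to a constant factor, which is exactly why an image's raw brightness works as a target density.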
http://clear-lines.com/blog/?tag=/.NET

Mathias Brandewinder on .NET, F#, VSTO and Excel development, and quantitative analysis / machine learning.
14. January 2012 14:16
I am putting together a demo VSTO add-in for my talk at the Excel Developer Conference. I wanted to play with charts a bit, and given that I am working off a .NET model, I figured it would be interesting to produce charts directly from the data, bypassing the step of putting data in a worksheet altogether.
In order to do that, we simply need to create a Chart in a workbook, add a Series to the SeriesCollection of the chart, and directly set the Series Values and XValues as an array, along these lines:
var excel = this.Application;
var workbook = excel.ActiveWorkbook;
var charts = workbook.Charts;
var chart = charts.Add();
chart.ChartType = Excel.XlChartType.xlLine;
chart.Location(XlChartLocation.xlLocationAsNewSheet, "Tab Name");
var seriesCollection = (SeriesCollection)chart.SeriesCollection();
var series = seriesCollection.NewSeries();
series.Values = new double[] {1d, 3d, 2d, 5d};
series.XValues = new string[] {"A", "B", "C", "D"};
series.Name = "Series Name";
This will create a simple Line chart in its own sheet – without any reference to a worksheet data range.
Now why would I be interested in this approach, when it’s so convenient to create a chart from data that is already in Excel?
Suppose for a moment that you are dealing with the market activity on a stock, which you can retrieve from an external data source as a collection of StockActivity .NET objects:
public class StockActivity
{
public DateTime Day { get; set; }
public decimal Open { get; set; }
public decimal Close { get; set; }
}
In this case, extracting the array for the X and Y values would be a trivial matter, making it very easy to produce a chart of, say, the Close values over time:
// Create a few fake datapoints
var day1 = new StockActivity()
{
Day = new DateTime(2010, 1, 1),
Open = 100m,
Close = 110m
};
var day2 = new StockActivity()
{
Day = new DateTime(2010, 1, 2),
Open = 110m,
Close = 130m
};
var day3 = new StockActivity()
{
Day = new DateTime(2010, 1, 3),
Open = 130m,
Close = 105m
};
var history = new List<StockActivity>() { day1, day2, day3 };
var excel = this.Application;
var workbook = excel.ActiveWorkbook;
var charts = workbook.Charts;
var chart = charts.Add();
chart.ChartType = Excel.XlChartType.xlLine;
chart.Location(XlChartLocation.xlLocationAsNewSheet, "Stock Chart");
var seriesCollection = (SeriesCollection)chart.SeriesCollection();
var series = seriesCollection.NewSeries();
series.Values = history.Select(it => (double)it.Close).ToArray();
series.XValues = history.Select(it => it.Day).ToArray();
series.Name = "Stock";
Using LINQ, we Select from the list the values we are interested in, and pass them into an array, ready for consumption into a chart, and boom! We are done.
If what you need to do is explore data and produce charts to figure out potentially interesting relationships, this type of approach isn’t very useful. On the other hand, if your problem is to produce on a regular basis the same set of charts, using data coming from an external data source, this is a very interesting option!
23. April 2010 09:24
I gave a lightning talk on Pex at the San Francisco .Net user group this Wednesday, and figured I might as well post the slide deck. Pex is a fascinating free add-in to Visual Studio, totally worth looking into: unleash Pex on a method, and it will identify interesting input values for you.
18. January 2010 09:01
.Net events in the North California area, Jan 18, 2010 – Jan 24, 2010.
## Tuesday, Jan 19, 2010
7:00 PM, O'Reilly Media, Sebastopol: Mathias Brandewinder (that’s me) will talk “For Those about to Mock”. Free, organized by the North Bay .Net user group.
## Wednesday, Jan 20, 2010
6:30 PM, Microsoft San Francisco Office, San Francisco: Peter Kellner will talk about “Using WCF RIA Services in Silverlight 4 to Build n-tier LOB application”. Free, organized by the San Francisco .Net user group.
If you know of upcoming .Net-related events in North California that I would have missed, please let me know, and I’ll add them to the thread!
18. September 2009 06:12
I found a bug in my code the other day. It happens to everybody - apparently I am not the only one to write bugs – but the bug itself surprised me. In my experience, once you know a piece of code is buggy, it’s usually not too difficult to figure out what the origin of the problem might be (fixing it might). This bug surprised me, because I knew exactly the 10 lines of code where it was taking place, and yet I had no idea what was going on – I just couldn’t see the bug, even though it was floating in plain sight (hint: the pun is intended).
Here is the context. The code reads a double and converts it into a year and a quarter, based on the following convention: the input is of the form yyyy.q, for instance, 2010.2 represents the second quarter of 2010. Anything after the 2nd decimal is ignored, 2010.0 is “rounded up” to 1st quarter, and 2010.5 and above rounded down to 4th quarter.
Here is my original code:
public class DateConverter
{
public static int ExtractYear(double dateAsDouble)
{
int year = (int)dateAsDouble;
return year;
}
public static int ExtractQuarter(double dateAsDouble)
{
int year = ExtractYear(dateAsDouble);
int quarter = (int)(10 * (Math.Round(dateAsDouble, 1) - (double)year));
if (quarter < 1)
{
quarter = 1;
}
if (quarter > 4)
{
quarter = 4;
}
return quarter;
}
}
Can you spot the bug?
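(Spoiler, if you'd rather not puzzle it out yourself: the hint is literal, and the issue is floating-point representation. A quick check in Python, where IEEE-754 doubles behave the same as in C#, makes it visible:)

```python
# 2010.3 has no exact binary representation; the stored double is slightly
# below 2010.3, so the fractional part comes out just under 0.3 ...
frac = 2010.3 - 2010
print(frac)              # 0.29999999999...

# ... and the (int) cast in ExtractQuarter truncates toward zero:
print(int(10 * frac))    # 2, so the third quarter is misreported as the second
```

Rounding to one decimal first (as the C# code does with `Math.Round`) doesn't help here, because the stored value is already the double nearest to 2010.3; the usual fix is to round the final product to the nearest integer instead of truncating it.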
27. May 2009 13:36
The iterative nature of writing code inevitably involves adding code which is good enough for now, but should be refactored later. The problem is that unless you have some system in place, later, you will just forget about it. Personally, I have been trying 3 approaches to address this: bug-tracking systems, the good old-fashioned text to-do list, and its variant, the task list built into Visual Studio, and finally, comments embedded in the code.
Each have their pros and cons. Bug tracking systems are great for systematically managing work items in a team (prioritization, assignment to various members...), but work best for items at the level of a feature: in my experience, smaller code changes don't fit well. I am a big fan of the bare-bones text file to-do list; I tried, but never took to the Visual Studio to-do list (no clear reason there). I hardly embed comments in code anymore (like 'To-do: change this later'): on the plus side, the comment is literally tacked to the code that needs changing, but the comments cannot be displayed all as one list, which makes them too easy to forget.
Today I found a cool alternative via Donn Felker’s blog: #warning. You use it essentially like a comment, but preface it with #warning, like this:
#warning The tag name should not be hardcoded
XmlNodeList atBatNodes = document.GetElementsByTagName("atbat");
Now, when you build, the message shows up as a compiler warning in Visual Studio's Error List.
It has all the benefits of the embedded comment – it’s close to the code that needs to be changed - but will also show up as a list which will be in-your-face every time you build. I’ll try that out, and see how that goes, and what stays in the todo.txt!
http://dergipark.gov.tr/ijot/issue/35770/316300
## Performance and Emission Characteristics Analysis of Dual Fuel Compression Ignition Engine Using Natural Gas and Diesel
#### Salman Abdu Ahmed, Song Zhou, Yuangqing Zhu (Harbin Engineering University, China)
The demand for higher output efficiencies, greater specific power output, increased reliability, and ever lower emissions has been rising. One promising alternative is the use of a gaseous fuel as a partial supplement to liquid fuel. In this study, the effects of diesel-natural gas substitution ratios on engine performance parameters such as brake specific fuel consumption (BSFC), and on gaseous emissions of nitrogen oxides (NOx), hydrocarbons (HC), carbon monoxide (CO) and carbon dioxide (CO2), were investigated for natural gas-diesel fuel operation and then compared with the original diesel operation. The engine was modeled with the GT-Power computational simulation tool. The diesel fuel was injected into the cylinder, while natural gas was injected into the air-intake pipe and then compressed together with air. The simulation was carried out at a constant engine speed of 1800 rpm for four different natural gas fractions (15%, 25%, 50% and 75%). NOx and CO2 emissions decreased sharply, by more than 45% and 50% respectively, in dual-fuel mode when compared to diesel-only mode. However, an increase was observed in CO and HC emissions in dual-fuel mode. The results also indicated higher BSFC and lower brake thermal efficiency (BTE) in dual-fuel mode when compared to the corresponding diesel engine.
Keywords: diesel, dual-fuel engine, natural gas, engine performance
http://dx.doi.org/10.5541/ijot.316300
http://clay6.com/qa/7737/find-the-angle-between-two-vectors-overrightarrow-and-overrightarrow-if-ove | # Find the angle between two vectors $\overrightarrow{a}$ and $\overrightarrow{b}$ if $|\overrightarrow{a} \times \overrightarrow{b}|=\overrightarrow{a}.\overrightarrow{b}$
Toolbox:
• $\overrightarrow a.\overrightarrow b=| \overrightarrow a|| \overrightarrow b| \cos \theta$ $\therefore \cos \theta = \large\frac{ \overrightarrow a. \overrightarrow b}{| \overrightarrow a|| \overrightarrow b|} \Rightarrow \theta = \cos^{-1} \large\frac{ \overrightarrow a. \overrightarrow b}{| \overrightarrow a|| \overrightarrow b|}$
• For two vectors $\overrightarrow a \: and \: \overrightarrow b$, the vector product $\overrightarrow a$ x $\overrightarrow b=|\overrightarrow a||\overrightarrow b| \sin \theta \hat n$ with $\hat n \perp$ to $\overrightarrow a \: and \: \overrightarrow b\: and \: \overrightarrow a, \overrightarrow b, \hat n$ forming a right handed system.
$| \overrightarrow a \times \overrightarrow b| = |\overrightarrow a||\overrightarrow b| \sin \theta$
$\overrightarrow a.\overrightarrow b = |\overrightarrow a||\overrightarrow b| \cos \theta$
$|\overrightarrow a||\overrightarrow b| \sin \theta = |\overrightarrow a||\overrightarrow b| \cos \theta$
$\Rightarrow \tan \theta = 1, \qquad 0 \leq \theta \leq \pi$
$\therefore \theta = \tan^{-1}1 = \large\frac{\pi}{4}$
The angle between the vectors is $\large\frac{\pi}{4}$
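As a quick numerical sanity check (an illustrative Python addition, not part of the original solution), a pair of vectors separated by $\large\frac{\pi}{4}$ does satisfy the given condition:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)   # unit vector 45 degrees from a

cross_mag = np.linalg.norm(np.cross(a, b))     # |a x b|
dot = float(np.dot(a, b))                      # a . b
theta = np.arctan2(cross_mag, dot)             # recover the angle

print(cross_mag, dot, theta)   # both magnitudes are ~0.7071, theta is ~pi/4
```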
https://www.physicsforums.com/threads/impossible-curl-of-a-vector-field.908518/

# A Impossible Curl of a Vector Field
1. Mar 21, 2017
### laplacianZero
Let's assume the vector field is NOT a gradient field.
Are there any restrictions on what the curl of this vector field can be?
If so, how can I determine a given curl of a vector field can NEVER be a particular vector function?
2. Mar 21, 2017
### Staff: Mentor
Can you give us a context here or some example that you're looking at?
3. Mar 21, 2017
### laplacianZero
No example in particular... but I guess I can come up with one.
Here
Curl of vector field F = <2x, 3yz, -xz^2>
Is this possible??
4. Mar 21, 2017
### zwierz
Sure. If a vector field v is the curl of some other vector field, then $\mathrm{div}\,v=0$. Locally the converse is also true.
5. Mar 21, 2017
### laplacianZero
Well, is the above post #3 a possibility?
6. Mar 21, 2017
### laplacianZero
????
7. Mar 30, 2017
### laplacianZero
Nvm. I got it.
8. Mar 31, 2017
### jostpuur
You can obtain some results concerning that question by examining the Fourier transforms. This approach suffers from the obvious shortcoming that not all functions have Fourier transforms, but anyway, it could be that Fourier transforms still give something.
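Tying the thread together for readers: applying the criterion from post #4 to the field in post #3 (an illustrative SymPy check, not from the thread itself):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
F = (2 * x, 3 * y * z, -x * z ** 2)   # the field from post #3

# Divergence: dFx/dx + dFy/dy + dFz/dz
div_F = sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)
print(sp.expand(div_F))   # -2*x*z + 3*z + 2: not identically zero,
                          # so F cannot be the curl of any vector field
```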
https://www.i4cy.com/maple/

# Colour Cycle of a Maple Tree
Discovering why leaves change from green in late summer to deep red in autumn
Written by Guy Fernando
A leaf appears green because it contains an abundance of chlorophyll, which is a green pigment. A green summer leaf contains so much chlorophyll that other coloured pigments in the leaf are masked out. Sunlight controls the amount of chlorophyll produced, so as autumn days grow shorter, less chlorophyll is produced. This gradual reduction of chlorophyll allows other pigments in the leaf to be revealed. During the autumn, chlorophyll levels decrease while sugar concentration increases, and the higher sugar concentration drives increased production of anthocyanin. Anthocyanin is a reddish pigment which gives leaves their red autumnal colour.
The spectrum graphs below show transmission plots, which is the measurement of light of varying wavelength that is able to pass through the sample. This is typically the inverse of absorption plots, which is the measurement of light that is absorbed by the sample. Also shown below are CIE colour space chromaticity diagrams which plot the actual perceived colour of the sample.
In plants there are two types of Chlorophyll, namely Chlorophyll-A and Chlorophyll-B. Primarily it is Chlorophyll-A that is responsible for performing photosynthesis, the role of converting light energy to chemical energy such as the production of sugars.
Two samples of leaves from the maple tree were taken: one set in early October when the leaves were still green, and the other set in mid November when the leaves were a deep red. Both leaf samples were crushed and placed in a solution of water and acetone, and optically tested using the PIC Optical Spectrum Analyser (POSA-1).
### Early-October Leaf Sample
Chlorophyll-A chemical formula is \begin{aligned} C _{55} H _{72} O _{5} N _{4} Mg \end{aligned}
Chlorophyll-B chemical formula is \begin{aligned} C _{55} H _{70} O _{6} N _{4} Mg \end{aligned}
The spectrum graph below shows that the combination of Chlorophyll-A and Chlorophyll-B has absorption wavelengths (the troughs) at 390nm, 440nm, 620nm and 680nm. The first two correspond to the blue part of the spectrum and the last two to the red part. This accounts for why plants require both red and blue light to grow healthily.
The CIE plot below shows the grey dot in the yellow/green area which is the characteristic green colour of the chlorophyll sample.
### Mid-November Leaf Sample
Anthocyanin chemical formula is \begin{aligned} C _{15} H _{11} O ^{+} \end{aligned}
The spectrum graph below shows the anthocyanin peak wavelength (the peak) at 610nm. This wavelength corresponds to the orange part of the spectrum. The actual colour of anthocyanin can vary between purple, blue, orange and red depending on the overall pH of the plant; in fact, paper dyed using anthocyanin can be used like litmus paper. In this case the anthocyanin is an orange/red colour.
The CIE plot below shows the grey dot in the orange area which is in this case the orange/red colour of the anthocyanin sample.
It is certainly possible through experiment to see two distinct spectra taken using samples from maple leaves in late summer and autumn. Both spectra characterise the distinctive chlorophyll and anthocyanin chemical signatures.
https://latex.org/forum/viewtopic.php?f=44&t=33240&p=111856 | ## LaTeX forum ⇒ Text Formatting ⇒ How to bold \tau Topic is solved
sheenshahid
Posts: 20
Joined: Fri Jan 24, 2020 10:31 am
### How to bold \tau
\mathbf{\tau} changes the shape of \tau when written in an equation in LaTeX with the book document class. Actually I want to bold \tau in an equation.
Code:
\begin{equation}\label{eq1.3}\mathbf{\tau}=-p\textbf{I}+(1+\frac{1}{\beta})\textbf{A}_{1}\end{equation}
Bartman
Posts: 32
Joined: Fri Jan 03, 2020 2:39 pm
The example is incomplete. You can use the command \boldsymbol of amsmath to make it bold.
You may also be interested in section 8 of voss-mathmode, which describes commands for parentheses.
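For completeness, here is a minimal corrected version of the snippet from the question, assuming the amsmath package is loaded: \boldsymbol replaces \mathbf for the Greek letter (which is why \mathbf{\tau} changed its shape), while \mathbf is kept for the upright Latin symbols.

```latex
% in the preamble:
\usepackage{amsmath}

\begin{equation}\label{eq1.3}
  \boldsymbol{\tau} = -p\,\mathbf{I}
    + \left(1 + \frac{1}{\beta}\right)\mathbf{A}_{1}
\end{equation}
```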
https://gravityloss.wordpress.com/tag/rocket/ | Feeds:
Posts
## RD-180 Engine Diagram
RD-180 flow diagram
This is a bit different from the NK-33 done previously, but it’s still a full flow oxidizer rich staged combustion lox-kerosene engine.
It has no gears and no flexible shaft coupling between the pumps like the NK-33, making it a real one axis engine – except that it has separate booster pumps at the engine inlet. The fuel booster pump is powered by the fuel tapoff after one main pump stage and the oxidizer booster pump by the turbine exhaust. The starting is also different, but I omitted the starter hardware from the already complicated diagram, as it’s connected to many places – the main chamber, the gas generator and the first main fuel pump inlet. Also various valves, controllers and the tank heat exchanger are left out. And naturally I left out the other chamber and nozzle as well.
Both the RD-180 and the NK-33 have the same number of pump stages – 3 for fuel entering the gas generator, 2 for fuel entering the main chamber and 2 for the oxidizer (all of which enters the gas generator).
Perhaps it can be thought, in simplified terms, that the boost pumps are only hydraulically, and not axis, coupled to the main shaft system, and hence both can be better optimized for their environment (like lower rpm for the boost pumps), so the system can reach higher pressures than the NK-33, where the two oxidizer pump stages are on the same shaft. Or then it's the later materials or more advanced pump design; after all, the engines have some ten to twenty years between them.
Source for the drawing and explanation is this patent no. 6226980. Also, lpre.de has awesome pictures of the hardware, including the shaft with all the pump stages included. I assume it’s machined from one solid piece. Also pictures of the pipe stack injector / mixer and more diagrams of the engine operations. I don’t know much Russian (having finished half a course back somewhen), but if you know most of the cyrillic alphabet (helps if you know math as it’s very close to Greek), it’s practically quite easy to read as there are so many loan words – gasogenerator should not be a mystery to anyone. 🙂
The workhorse Soyuz RD-107 and RD-108 engines are completely different as they use a hydrogen peroxide gas generator design – very old-fashioned – but the RD-0124 used on the more modern Russian upper stages is the third interesting kerosene staged combustion engine that might become even more relevant if Orbital are going to use it as a second stage engine on their Taurus II (currently they are moving on with solids). The fourth staged combustion engine is the RD-120 that's bigger than the RD-0124 and is used on the Zenit second stage. And then there's the often overlooked forefather of staged combustion, Proton's RD-253 / RD-275, that uses hypergolic propellants. The RD-0120 hydrogen engine of Buran / Energia is interesting as a comparison to the similar SSME. So there's still plenty of study subjects in the Russian engine families.
## The US Air Force Tries To Do Reusables
But is not a real RLV program. It’s just a narrow test for one technology. Hence I think naming it Reusable Booster System Pathfinder is misleading.
## Overspecification
They overspecify the problem by requiring a glide landing. Why is it superior to powered landing? At the moment, there’s no clear reason to believe it is! Both need to be developed further to understand their advantages and drawbacks. To my knowledge, there have been only six liquid rocket VTVL prototype manufacturers so far: McDonnell Douglas, JAXA (who was the contractor?), Armadillo Aerospace, Blue Origin, Masten Space Systems and Unreasonable Rocket. Only a few of those have flown to higher than a few hundred meters. The design and operations space is mostly totally unexplored.
Nevermind the large number of other alternatives to boostback. Jon Goff had a recent “lecture series” about these.
I understand that this is just one program, but this should not gain the status of the reusables approach of the air force – stuff like that easily happens.
## Master Design Fallacy
They also discard evolution and competition – instead just requiring a single masterfully designed prototype before something operational. Sure, this is much better than starting a multi-billion dollar program without a first lower cost prototype, but nevertheless, it sucks. Somebody brief them on newspace! Rand Simberg, Monte Davis, Jonathan Goff, Clark Lindsey, or one of the numerous people who get it. Or one of the prominent company leaders: John Carmack, Jeff Greason, David Masten.
## An Ideal Program
Just specify some boost delta vee points and let companies demonstrate progress towards that. A popup tailflame lander would perhaps give more vertical velocity, while some good glider or even a booster that has engines for cruising back could boost far down range to give lots of horizontal velocity. There ain't a clear winner – there might not even be one, and multiple approaches would have their uses.
## Lunar Lander Challenge 2009
Masten and Unreasonable are still flying for second place I think (I’m not 100% clear on the rules) today!
Spacetransportnews is the place to watch all this. (Or it has the links collected.)
It’s historical in a sense. These rockets will serve as the basis for reusable sounding rockets, possibly high altitude tourist vehicles and later orbital system lower or upper stages. When the operations are routine and landings safe, the cost per flight goes down orders of magnitude, compared to ordinary rockets.
A new era for rocketry is dawning.
## Optimum Rocket Cruise
With some caveats. 🙂 Let’s assume a rocket is launched, and accelerates to constant speed v_c. Then it stays cruising at this speed and at a constant altitude. Landing is disregarded.
### The cruise
We must modify the rocket equation slightly for the cruise: $\frac{-dm}{dt} v_{ex} = F = \frac{gm(t)}{L/D}$ dm/dt is mass flow, v_ex is effective exhaust velocity, F is the thrust, g is the gravitational acceleration 9.81 m/s^2, m(t) is the mass as function of time, L/D is the lift to drag ratio. If we use the $\Delta t = x/v_c$ for time, (x is the cruise distance) we can integrate it from start to final mass just like the rocket equation and get the cruise mass ratio: $R_{mc} = e^{\frac{xg}{v_c v_{ex}L/D}}$ Notice how with increasing cruise speed, the required mass ratio for cruise is lessened. This is because less time is spent in the air and thus the gravity losses are lessened.
### The acceleration
But we have to take into account the acceleration to cruise speed as well, which requires some mass ratio as well. $R_{ma} = e^{\frac{v_c}{v_{ex}}}$ We don’t take into account the distance traveled during acceleration, or lift, as the acceleration is a relatively short time and distance phenomenon with rockets that easily optimize to have high T/W.
### Total effect
Now, for the total required mass ratio, we multiply the two mass ratios. Then we search for the minimum total mass ratio by differentiating it and finding the zero point. We get the optimum cruise speed (smallest mass ratio): $v_c = \sqrt{\frac{xg}{L/D}}$. Notice how the exhaust velocity cancels out: the optimal speed doesn't depend on it.
### More considerations
If I calculated right, for a 6000 km transatlantic rocket powered flight with a lift to drag of 7, the best cruise speed for minimum mass ratio is 3 km/s. If you go slower, you waste fuel by hanging in the air, if you go faster, you waste fuel by accelerating too much. I think that’s about Mach 9 at some altitude. This didn’t take into account the deceleration: faster cruise speed takes some advantage there! Even if you shut the engine, it glides further. In real life there are multiple issues:
• acceleration takes time and distance too
• engine T/W size has an effect as well
• there is varying mass during flight
  • which reduces lift needs with time
  • which in turn affects L/D as you go higher or reduce AoA
  • which also requires throttling
• And a million other things.
Also L/D 7 is probably much too good. Oh, and in the transatlantic case, mass ratio required with exhaust velocity 3 km/s would be 7. | 2019-07-23 16:34:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5460532903671265, "perplexity": 2543.423915484585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529480.89/warc/CC-MAIN-20190723151547-20190723173547-00278.warc.gz"} |
https://math.stackexchange.com/questions/2286116/what-is-sin-pi-2-iln2/2286126 | # What is $sin(\pi/2 +iln2)$
I have tried this is a couple of different ways and have different answers.
1. $\sin(\pi/2+i\ln2)=\cos(i\ln2)=\cosh(\ln2)=\frac{e^{\ln2}+e^{-\ln2}}{2}=5/4$
2. $\sin(\pi/2+i\ln2)={\rm Im}(e^{i(\pi/2+i\ln2)})={\rm Im}(ie^{-\ln2})={\rm Im}(1/2i)=1/2$
I have a feeling the first is correct, but I am not sure where the other could have gone wrong.
• Something's fishy about all of these up-votes in just 20 minutes. – user384138 May 18 '17 at 8:09
• @OpenBall It's not my doing if there is something going on! I just posted the question, went to a lecture, and am checking again now! I presume it is because the distinction of when the expansions/methods are valid is not taught well in schools(certainly I have never come across this) so it seems like an strange conundrum until you know one method is simply not valid. – Meep May 18 '17 at 9:28
My guess is that $\sin(x)=\mathrm{Im}(e^{ix})$ only works for $x\in \mathbb{R}$. Take for example the series representation of $$\sin(x)=\sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{(2n+1)!}$$ Now if we insert $\sin(ix)$ with real $x\neq 0$, then the series gives $\sin(ix)=i\sinh(x)$, which is purely imaginary and non-zero. On the other hand we see that $\sin(ix)=\mathrm{Im}(e^{i(ix)}) = \mathrm{Im}(e^{-x})=0$, which clearly contradicts our earlier finding. The only valid formula for complex $x$ is $$\sin(x)=\frac{1}{2i}(e^{ix}-e^{-ix})$$
The first is correct. In the second, you are using that $\sin(x)=\text{im}(e^{ix}),$ but that only applies if $x$ is real, which is not the case here. We know that $e^{ix}=\cos(x)+i\sin(x)$, which makes it look like $\sin(x)$ is the imaginary part of $e^{ix}$, but what if $\cos(x)$ and $\sin(x)$ themselves are not real? | 2019-09-20 22:40:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8058142066001892, "perplexity": 176.68138878189873}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574084.88/warc/CC-MAIN-20190920221241-20190921003241-00159.warc.gz"} |
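A quick numerical check (an editorial addition, not part of the original thread) with Python's cmath confirms the first method and the exponential definition that holds for all complex arguments:

```python
import cmath
import math

x = math.pi / 2 + 1j * math.log(2)

z = cmath.sin(x)                                   # built-in complex sine

# Definition valid for ALL complex x: sin(x) = (e^{ix} - e^{-ix}) / (2i)
w = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j

print(z.real)          # ~1.25, i.e. cosh(ln 2) = 5/4 -- method 1 was right
print(abs(z - w))      # ~0: the exponential definition agrees
```

The imaginary part of the result is zero up to floating-point noise, since $\cos(\pi/2)\sinh(\ln 2)=0$.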
http://mathhelpforum.com/differential-equations/272702-fourier-serier-engiineering-pulse.html | # Thread: Fourier serier on a engiineering pulse.
1. ## Fourier serier on a engiineering pulse.
Morning All
I have a question I am completely stuck on.
f(t) =3t+3, -1<t<0
=t+3, 0<t<1
=6t-2t, 1<t<3
Make a sketch of this pulse.
Time is measured in milliseconds and amplitude of vibration is in microns.
The diagnostic machinery repeats this pulse every 4 milliseconds. Carry out a Fourier decomposition of the resulting waveform and find the amplitude at the fundamental frequency and at the next six higher harmonics.
Make a plot of this waveform in the frequency domain, showing amplitude against frequency in Hertz.
The engineering company want to ensure that there are no vibrations with an amplitude of more than 0.1 millimetres in the frequency band between 600 Hz and 800Hz.
This is the question. Any help on where I should start? I understand this is a sawtooth wave, and I need to apply the equations to get A0, An and Bn, but I don't get how to apply them to a wave defined by more than one function.
Any Help
Regards
2. ## Re: Fourier series on an engineering pulse.
do you mean 6t-2 for 1<t<3 ?
otherwise it's just 4t
3. ## Re: Fourier series on an engineering pulse.
Sorry, I did mean 6t-2.
regards
4. ## Re: Fourier series on an engineering pulse.
Can anyone point out where to start?
Regards
James
5. ## Re: Fourier series on an engineering pulse.
The problem says "Carry out a Fourier decomposition". Do you know what that is?
6. ## Re: Fourier series on an engineering pulse.
It's a Fourier series. I know you use the formulas for a sawtooth wave. What I can't figure out is how the three functions at different time intervals fit into the formula.
7. ## Re: Fourier series on an engineering pulse.
You know that $\displaystyle \int_a^b g(x)\,\mathrm dx = \int_a^c g(x)\,\mathrm dx + \int_c^b g(x)\,\mathrm dx$?
This means that you can break the interval of integration into subintervals. Does that help?
8. ## Re: Fourier series on an engineering pulse.
OK, so I've attempted it with help from YouTube. Can someone have a look and see if I'm anywhere close?
I'm assuming it's an odd function, even though when I plot it, it doesn't look like it is. Hope that bit is correct. If so, A0 and An = 0.
f(t) = (A/T)t = (16/4)t
Bn = (4/L) ∫ (between -1 & 3) f(t)sin(Nπt) dt
= (4/2) ∫ (16/4)t sin(Nπt) dt
= 8 ∫ t sin(Nπt) dt
using ∫ t sin(at) dt = (1/a^2)(sin(at) - at cos(at))
therefore I get
(8/(N^2 π^2))(sin(Nπt) - Nπt cos(Nπt))
Can somebody have a look and see if I'm way off here, and please point out where I'm going wrong?
regards
James
9. ## Re: Fourier series on an engineering pulse.
Originally Posted by jblakes
I'm assuming it's an odd function, even though when I plot it, it doesn't look like it is. Hope that bit is correct.
It's not. The fact that $\displaystyle \lim_{t \to 0} f(t)$ exists and is non-zero tells you that.
$\displaystyle \int_{-1}^3 f(t) \cos{\left((t-1)\tfrac{\pi}{2}\right)} \, \mathrm dt = \int_{-1}^0 f(t) \cos{\left((t-1)\tfrac{\pi}{2}\right)} \, \mathrm dt + \int_0^1 f(t) \cos{\left((t-1)\tfrac{\pi}{2}\right)} \, \mathrm dt + \int_{1}^3 f(t) \cos{\left((t-1)\tfrac{\pi}{2}\right)} \, \mathrm dt$
If you prefer, you could work with the interval $\displaystyle (0,4)$ by using $\displaystyle f(t)=f(t-4)$ to find the values in the interval $\displaystyle (3,4)$. It may be slightly easier.
10. ## Re: Fourier series on an engineering pulse.
So I'm guessing it's neither even nor odd. I did think that. I assume I need to work out A0, An and Bn? I'm struggling to find an example of one that has both. Or, looking at your response, is it just an even function?
regards
11. ## Re: Fourier series on an engineering pulse.
Originally Posted by jblakes
So I'm guessing it's neither even nor odd. I did think that. I assume I need to work out A0, An and Bn? I'm struggling to find an example of one that has both. Or, looking at your response, is it just an even function?
regards
$c_n = \dfrac 1 T \displaystyle{\int_T}~f(t)e^{j \frac{n 2 \pi t}{T}}~dt$
where any interval of length $T$ is satisfactory for integration.
Here
$c_n = \dfrac 1 4 \displaystyle{\int_T}~f(t)e^{j \frac{n \pi t}{2}}~dt=$
$\dfrac 1 4 \displaystyle{\int_{-1}^0}~(3t+3)e^{j \frac{n \pi t}{2}}~dt + \dfrac 1 4 \displaystyle{\int_0^1}~(t+3)e^{j \frac{n \pi t}{2}}~dt + \dfrac 1 4 \displaystyle{\int_1^3}~(6t-2)e^{j \frac{n \pi t}{2}}~dt$
none of these are particularly hard to integrate.
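The integrals above are easy to sanity-check numerically (this sketch is an editorial addition, not part of the thread). It computes the one-sided harmonic amplitudes |c_0| and 2|c_n| for the pulse with period T = 4 ms; the amplitudes are the same under either sign convention for the exponent because f is real:

```python
import numpy as np

def f(t):
    """Periodic pulse (period T = 4) defined piecewise on [-1, 3)."""
    t = ((t + 1) % 4) - 1                      # fold onto the base interval [-1, 3)
    return np.where(t < 0, 3 * t + 3,
           np.where(t < 1, t + 3, 6 * t - 2))

def trapezoid(y, t):
    """Plain trapezoidal rule (avoids NumPy/SciPy version differences)."""
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2

T = 4.0
t = np.linspace(-1.0, 3.0, 200001)
y = f(t)

# c_n = (1/T) * integral over one period of f(t) exp(-2j*pi*n*t/T) dt
for n in range(8):
    c_n = trapezoid(y * np.exp(-2j * np.pi * n * t / T), t) / T
    amplitude = abs(c_n) if n == 0 else 2 * abs(c_n)
    print(n, round(float(amplitude), 4))
```

These numerical amplitudes at the fundamental and the next six harmonics can be compared directly against whatever closed-form coefficients come out of the integrals.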
12. ## Re: Fourier series on an engineering pulse.
No, it's not even either. You will also need
$\displaystyle \int_{-1}^3 f(t) \sin{\left((t-1)\tfrac{\pi}{2}\right)} \, \mathrm dt = \int_{-1}^0 f(t) \sin{\left((t-1)\tfrac{\pi}{2}\right)} \, \mathrm dt + \int_0^1 f(t) \sin{\left((t-1)\tfrac{\pi}{2}\right)} \, \mathrm dt + \int_{1}^3 f(t) \sin{\left((t-1)\tfrac{\pi}{2}\right)} \, \mathrm dt$
I rather thought you might be able to work that out for yourself.
Edit: Romsek is correct, you can replace $\displaystyle (t-1)$ with $\displaystyle t$ in those integrals.
13. ## Re: Fourier series on an engineering pulse.
I did think that, Archie. Thanks for your reply, and you too, Romsek.
((3t^2/2) - (6cos(πt/2))/π) evaluated from -1 to 0, + ((t^2/2) - (6cos(πt/2))/π) evaluated from 0 to 1, + (2cos(t) + 3t^2) evaluated from 1 to 3
does that look right for the An values? I just need to stick the values in.
14. ## Re: Fourier series on an engineering pulse.
Is that correct for the Bn?
((3t^2) - 3sin(t)) evaluated from -1 to 0, + (t^2 - 3sin(t)) evaluated from 0 to 1, + (2sin(t) + 3t^2) evaluated from 1 to 3
regards
15. ## Re: Fourier series on an engineering pulse.
I don't think I have integrated these correctly.
Integrating (3t+3)sin((t-1)(π/2)) between -1 and 0
comes to
3/(2π) (sin(1) - sin(2) + cos(1)), which is what I get on Symbolab. Can I ask if that is correct?
regards
https://www.nature.com/articles/s41467-020-19252-4?error=cookies_not_supported&code=2f1e93eb-cd52-427c-8d67-54dbccb2ba2f | # General destabilizing effects of eutrophication on grassland productivity at multiple spatial scales
## Abstract
Eutrophication is a widespread environmental change that usually reduces the stabilizing effect of plant diversity on productivity in local communities. Whether this effect is scale dependent remains to be elucidated. Here, we determine the relationship between plant diversity and temporal stability of productivity for 243 plant communities from 42 grasslands across the globe and quantify the effect of chronic fertilization on these relationships. Unfertilized local communities with more plant species exhibit greater asynchronous dynamics among species in response to natural environmental fluctuations, resulting in greater local stability (alpha stability). Moreover, neighborhood communities that have greater spatial variation in plant species composition within sites (higher beta diversity) have greater spatial asynchrony of productivity among communities, resulting in greater stability at the larger scale (gamma stability). Importantly, fertilization consistently weakens the contribution of plant diversity to both of these stabilizing mechanisms, thus diminishing the positive effect of biodiversity on stability at differing spatial scales. Our findings suggest that preserving grassland functional stability requires conservation of plant diversity within and among ecological communities.
## Introduction
Humans are altering global nutrient cycles via combustion of fossil fuels and fertilizer application1. We have more than doubled preindustrial rates of nitrogen (N) and phosphorus (P) supply to terrestrial ecosystems2. Terrestrial N and P inputs are predicted to reach levels that are three to four times preindustrial rates by 2050 (ref. 3). This pervasive global eutrophication will have dramatic consequences on the structure and functioning of terrestrial and aquatic ecosystems3. In grasslands, nutrient enrichment usually increases primary productivity, but reduces plant diversity, and alters the ability of ecosystems to reliably provide functions and services for humanity4,5,6,7.
Concerns that eutrophication compromises both the diversity and stability of ecosystems have led to a growing number of theoretical and empirical studies investigating how these ecosystem responses may be mechanistically linked4,6,8,9,10,11. These studies have repeatedly shown that the positive effect of plant species richness on the temporal stability of community productivity in ambient (unfertilized) conditions is usually reduced with fertilization4,5,6. However, these studies have primarily focused on plant responses at relatively small scales (i.e., within single local communities). Whether fertilization reduces the positive effect of diversity on temporal stability at larger scales (i.e., among neighboring local communities) remains unclear. Filling this knowledge gap is important because the stable provision of ecosystem services is critical for society12. This is especially true, given an increasing concern for large variability of environmental conditions due to multiple anthropogenic influences, including eutrophication and climate change13.
A recent theoretical framework allows the quantification of the processes that determine the stability of ecosystem functioning at scales beyond the single local community (Fig. 1)14,15,16. Stability at any given scale is defined as the temporal mean of primary productivity divided by its standard deviation17. Higher local scale community stability (alpha stability) can result from two main processes. First, a higher average temporal stability of all species in the community (species stability) can stabilize community productivity due to lower variation in individual species abundances from year to year (Fig. 1b). Second, more asynchronous temporal dynamics among species in response to environmental fluctuations (species asynchrony) can stabilize community productivity because declines in the abundance of some species through time are compensated for by increases in other species (Fig. 1c). Higher stability at the larger scale (gamma stability) can result from higher alpha stability and more asynchronous dynamics across local communities (spatial asynchrony; Fig. 1d). Thus, the stabilizing effect of spatial asynchrony on productivity at the larger scale (spatial insurance hypothesis)14,18 mirrors the stabilizing effect of species asynchrony on productivity at the local scale (species or local insurance hypothesis)8,16,19,20. Higher species asynchrony and species stability can result from higher local species diversity through higher species richness9,21,22, higher species evenness8, or both (e.g., higher values of diversity indices—such as the Shannon index—that combines the two23; Fig. 1e). Higher spatial asynchrony can result from greater local species diversity or higher variation in species composition among communities (beta diversity)16.
According to this framework, fertilization can affect the links between diversity, asynchrony, and stability across spatial scales (Fig. 1e and Table 1). At the local scale, fertilization can decrease niche dimensionality, and favor a few dominant plant species by affecting the competitive balance among species, potentially reducing the insurance effects of local diversity7,22. At the larger scale, fertilization can reduce spatial heterogeneity in community composition, and decrease variations among local plant community structure, potentially reducing the spatial insurance effect of beta diversity16. Moreover, fertilization often reduces plant diversity, which could in turn reduce asynchrony and stability at multiple scales4,9,17,24. However, the role of fertilization in mediating the functional consequences of biodiversity changes (variations in the number, abundance, and identities of species) and compensatory mechanisms (variation and compensation in species responses) that can affect the stable provisioning of ecosystem functions at larger spatial scales remains to be elucidated25.
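The scale decomposition described above can be made concrete with a short sketch. The snippet below uses one common operationalisation (stability as the inverse coefficient of variation, with spatial asynchrony defined as the sum of local temporal SDs over the SD of the summed series); the paper's exact estimators may differ, and the data here are simulated, not from NutNet:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 local communities (rows) x 8 years (columns) of productivity
productivity = rng.uniform(50.0, 150.0, size=(3, 8))

def stability(series):
    """Temporal stability = temporal mean / temporal standard deviation."""
    return series.mean() / series.std(ddof=1)

total = productivity.sum(axis=0)                 # site-level productivity per year
sd_local = productivity.std(axis=1, ddof=1)      # temporal SD of each community
mu_total = total.mean()

alpha_stability = mu_total / sd_local.sum()              # local-scale stability
gamma_stability = stability(total)                       # larger-scale stability
spatial_asynchrony = sd_local.sum() / total.std(ddof=1)  # >= 1; 1 = perfect synchrony

# By construction, gamma stability = alpha stability x spatial asynchrony
print(round(gamma_stability, 3), round(alpha_stability * spatial_asynchrony, 3))
```

Under this formulation the multiplicative identity holds exactly, which is what allows gamma stability to be partitioned into a local-stability component and a spatial-asynchrony component.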
To our knowledge, only one recent study has assessed the effect of nutrient enrichment on stability within and among interconnected communities in a temperate grassland26. By adding different nitrogen treatments to communities in ten blocks spread out within a single site, that study found that 5 years of chronic nitrogen addition reduced alpha stability through a decline in species asynchrony, but had no effect on spatial asynchrony. However, these conclusions were based on a single grassland site manipulating a single nutrient, with the implicit assumption that the relationship between diversity and stability was unaffected by eutrophication. This argues for multisite comparative studies assessing the generality of the mechanistic links between these ecosystem responses to eutrophication.
Here, we use a coordinated, multisite and multiyear nutrient enrichment experiment (±chronic nitrogen, phosphorus, and potassium addition, Nutrient Network (NutNet)27) to assess the scale dependence of fertilization impacts on plant diversity and stability. Treatments were randomly assigned to 25 m2 plots and were replicated in three blocks at most sites (Supplementary Data 1). Samples were collected in 1 m2 subplots across 243 communities from 42 grassland sites on six continents and followed a standardized protocol at all sites27. We selected these sites as they contained between 4 and 9 years of experimental duration (hereafter “period of experimental duration”), and three blocks per site, excluding additional blocks from sites that had more than three (Supplementary Data 1). Sites spanned a broad range of seasonal variation in precipitation and temperature (Supplementary Fig. 1), and a wide range of grassland types (Supplementary Data 1). In our analysis, we treated each 1 m2 subplot as a “community” and the replicated subplots within a site as the “larger scale” sensu Whittaker28. We computed diversity, asynchrony, and stability within a community (local “alpha” scale) and across the three replicated communities within a site (larger “gamma” scale) (see “Methods”). We then used bivariate analysis and structural equation modeling (SEM)29 to assess fertilizer impacts, and disentangle the relative contributions of diversity and asynchrony to stability (Fig. 1e).
## Results and discussion
### Fertilization effects on diversity, asynchrony, and stability
Analyses of variance revealed the negative effects of nutrient inputs on biodiversity and stability at the two scales investigated, consistent with recent findings from a single site26. Fertilization consistently reduced species richness, alpha, and gamma stability, but had no effect on beta diversity (Supplementary Fig. 2). Bivariate analyses further revealed the negative effects of nutrient inputs on biodiversity–stability relationships at the two scales investigated (Fig. 2). Relationships were generally consistent across the different periods of experimental duration considered (Supplementary Table 1). Under ambient (unfertilized) conditions, species richness was positively associated with alpha and gamma stability (Fig. 2a, b), but fertilization weakened the positive effect of species richness on stability at the two scales (Fig. 2c, d). Fertilization reduced local stability of grassland functioning by increasing temporal variability in species-rich communities (Supplementary Fig. 3). Similarly, high beta diversity (variation in species composition among communities) was positively associated with spatial asynchrony and gamma stability under ambient conditions (Fig. 2e, f), but again fertilization weakened the positive effect of beta diversity on spatial asynchrony and gamma stability (Fig. 2g, h). These results remained when accounting for variation in climate using residual regression (Supplementary Fig. 4), when using local diversity indices accounting for species abundance (Supplementary Fig. 5), and when data were divided into overlapping intervals of 4 years (Supplementary Fig. 6). Our results extend previous evidence of the negative impact of fertilization on the diversity–stability relationship obtained within local plots and over shorter experimental periods4,6,26. Importantly, they show that these negative effects propagate from within to among communities. 
To our knowledge, our study is the first to report the negative impacts of fertilization on the relationships of beta diversity, with spatial asynchrony and gamma stability.
### Mechanisms linking diversity and stability
To understand the relative role of local vs. larger scale community properties in determining asynchrony and stability at different spatial scales, we conducted SEM analyses, including all measures in a single causal model (Fig. 3, Supplementary Fig. 7 and Supplementary Table 2). Under ambient conditions, SEM revealed that higher plant species richness contributed to greater alpha and gamma stability largely through higher asynchronous dynamics among species (species asynchrony, standardized path coefficient = 0.39), and not necessarily through greater species stability (standardized path coefficient = 0.01; Fig. 3a and Supplementary Fig. 8a, b). The positive association between species richness and alpha stability is consistent with existing experimental17,24 and shorter-term observational evidence4,30,31. Our results confirm that the stabilizing effects of species richness in naturally assembled grassland communities are largely driven by species asynchrony, but not species stability4,6,22,26. In addition, they show that the positive impact of species richness on the stability of community productivity via species asynchrony in turn leads to greater stability of productivity at the larger spatial scale.
While correlated with species richness, higher beta diversity also contributed to greater gamma stability through an independent pathway, namely via higher asynchronous dynamics among local communities (spatial asynchrony, standardized path coefficient = 0.20, Fig. 3a). While theoretical studies have suggested a role for beta diversity in driving spatial asynchrony15,16, previous empirical studies conducted along a nitrogen gradient at a single site26 or across 62 sites with non-standardized protocols21 did not find an association between these two variables. Here, we show that the presence of different species among local communities is linked to higher variation in dynamics among them, demonstrating the stabilizing role of beta diversity at larger spatial scales through spatial asynchrony. This also indicates the need for multisite replication with standardized treatments and protocols to detect such effects.
Importantly, fertilization acted to destabilize productivity at the local and larger spatial scale through several mechanisms (Fig. 3 and Table 2). At the local scale, fertilization weakened the positive effects of plant species richness on alpha and gamma stability (Fig. 2a–d) via a combination of two processes (Fig. 3b and Supplementary Fig. 8c, d). First, the positive relationship between species richness and species asynchrony in the control communities (standardized path coefficient = 0.39, Fig. 3a) was weaker in the fertilized communities (standardized path coefficient = 0.20, Fig. 3b). Moreover, this general positive effect of richness on asynchrony was counteracted by a second stronger negative relationship of richness with species stability (standardized path coefficient = −0.37). Such a negative effect of fertilization on species stability was not observed under ambient conditions, and could be due to shifts in functional composition in species-rich communities from more stable conservative species to less stable exploitative species in a temporally variable environment32,33. Together, these two effects explain the overall weaker alpha stability at higher richness with fertilization. We did not find evidence that the loss of diversity caused by fertilization (an average of −1.8 ± 0.5 species m−2, Supplementary Fig. 2a and Supplementary Fig. 9a) was related to the decline of alpha stability, confirming results from other studies5,6 and earlier NutNet results4 obtained over shorter time periods. This could be because the negative feedback of the loss of richness caused by fertilization on stability requires a longer experimental duration, or greater loss of plant diversity, to manifest9,34. Another possible explanation is that fertilization may have a direct positive effect on stability, by increasing community biomass (t = 2.41, d.f. = 326, P = 0.016) and enhancing stability via overyielding effects35, although a formal test would require monocultures.
At the larger scale, fertilization reduced the strength of the relationship between beta diversity and gamma stability by reducing the strength of the relationship between beta diversity and spatial asynchrony (standardized path coefficient = 0.20 in Fig. 3a vs. standardized path coefficient = 0.03 in Fig. 3b). This result provides evidence that fertilization can reduce the stabilizing role of spatial asynchrony among initially dissimilar communities. We did not find evidence that this was due to a negative feedback of changes in beta diversity caused by fertilization on gamma stability (Supplementary Fig. 2b and Supplementary Fig. 9b). The positive relationship between beta diversity and spatial asynchrony, and the negative impact of fertilization on that relationship, suggests that the spatial insurance effect caused by variation in species composition among local communities may be disrupted in a eutrophic world.
### Implications
Our results support the idea that asynchronous dynamics among species in species-rich communities play a stabilizing role, and show that this effect propagates to larger spatial scales21,26. Furthermore, to our knowledge, our study is the first to report the positive association between beta diversity and gamma stability through spatial asynchrony in real-world grasslands. Importantly, fertilization reduced the contribution of biodiversity to these stabilizing mechanisms at both scales, diminishing the local and spatial insurance of biodiversity on stability. Such diminished insurance effects lead to reduced ecosystem stability at larger scales. Future climate will be characterized by greater variability, including more frequent extreme events13. Our results indicate that preserving ecosystem stability across spatial scales in a changing world requires conserving biodiversity within and among local communities. Moreover, policies and management procedures that prevent and mitigate eutrophication are needed to safeguard the positive effects of biodiversity on stability at multiple scales.
## Methods
### Study sites and experimental design
The study sites are part of the NutNet experiment (Supplementary Data 1; http://nutnet.org/)27. Plots at each site are 5 × 5 m separated by at least 1 m. All sites included in the analyses presented here contained unmanipulated plots and fertilized plots with nitrogen (N), phosphorus (P), and potassium and micronutrients (K) added in combination (NPK+). N, P, and K were applied annually before the beginning of the growing season at rates of 10 g m−2 y−1. N was supplied as time-release urea ((NH2)2CO) or ammonium nitrate (NH4NO3). P was supplied as triple super phosphate (Ca(H2PO4)2), and K as potassium sulfate (K2SO4). In addition, a micronutrient mix (Fe, S, Mg, Mn, Cu, Zn, B, and Mo) was applied at 100 g m−2 to the K-addition plots once, at the start of the experiment, but not in subsequent years to avoid toxicity. Treatments were randomly assigned to the 25 m2 plots and were replicated in three blocks at most sites (some sites had fewer/more blocks or were fully randomized). Sampling was done in 1 m2 subplots and followed a standardized protocol at all sites27.
### Site selection
Data were retrieved on 1 May 2020. To keep a constant number of communities per site and treatment, we used three blocks per site, excluding additional blocks from sites that had more than three (Supplementary Data 1). Sites spanned a broad envelope of seasonal variation in precipitation and temperature (Supplementary Fig. 1), and represent a wide range of grassland types, including alpine, desert and semiarid grasslands, prairies, old fields, pastures, savanna, tundra, and shrub-steppe (Supplementary Data 1).
Stability and asynchrony measurements are sensitive to taxonomic inconsistencies. We adjusted the taxonomy to ensure consistent naming over time within sites. This was usually done by aggregating taxa at the genus level when individuals were not identified to species in all years. Taxa are however referred to as “species”.
We selected sites that had a minimum of 4 years, and up to 9 years of posttreatment data. Treatment application started at most sites in 2008, but some sites started later resulting in a lower number of sites with increasing duration of the study, from 42 sites with 4 years of posttreatment duration to 15 sites with 9 years of duration (Supplementary Data 1). Longer time series currently exist, but for a limited number of sites within our selection criteria.
### Primary productivity and cover
We used aboveground live biomass as a measure of primary productivity, which is an effective estimator of aboveground net primary production in herbaceous vegetation36. Primary productivity was estimated annually by clipping at ground level all aboveground live biomass from two 0.1 m2 (10 × 100 cm) quadrats per subplot. For shrubs and subshrubs, leaves and current year’s woody growth were collected. Biomass was dried to constant mass at 60 °C and weighed to the nearest 0.01 g. Areal percent cover of each species was measured concurrently with primary productivity in one 1 × 1 m subplot, in which no destructive sampling occurred. Cover was visually estimated annually to the nearest percent independently for each species, so that total summed cover can exceed 100% for multilayer canopies. Cover and primary productivity were estimated twice during the year at some sites with strongly seasonal communities. This allowed us to assemble a complete list of species and to follow management procedures typical of those sites. For those sites, the maximum cover of each species and total biomass were used in the analyses.
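For sites sampled twice in a year, collapsing records to the per-species annual maximum can be sketched as follows (a toy illustration with invented species, values, and column names, not the actual NutNet data schema):

```python
import pandas as pd

# Toy cover records for one subplot sampled in two seasons of the same year;
# species names and values are invented for illustration.
cover = pd.DataFrame({
    "year":    [2012, 2012, 2012, 2012],
    "species": ["Poa", "Poa", "Bromus", "Bromus"],
    "season":  ["spring", "fall", "spring", "fall"],
    "cover":   [35.0, 20.0, 5.0, 12.0],
})

# One value per species and year: the maximum cover over the two samplings.
annual = cover.groupby(["year", "species"], as_index=False)["cover"].max()
```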
### Diversity, asynchrony, and stability across spatial scales
We quantified local-scale and larger-scale diversity indices across the three replicated 1-m2 subplots for each site, treatment and duration period using cover data37,38. In our analysis, we treated each subplot as a “community” and the collective subplots as the “larger scale” sensu Whittaker28. Local-scale diversity indices (species richness, species evenness, Shannon, and Simpson) were measured for each community, and averaged across the three communities for each treatment at each site, resulting in a single value per treatment and site. Species richness is the average number of plant species. Shannon is the average of Shannon–Weaver indices39. Species evenness is the average of the ratio of the Shannon–Weaver index and the natural logarithm of average species richness (i.e., Pielou’s evenness40). Simpson is the average of inverse Simpson indices41. Due to strong correlation between species richness and other common local diversity indices (Shannon: r = 0.90 (95% confidence intervals (CIs) = 0.87–0.92), Simpson: r = 0.88 (0.86–0.91), Pielou’s evenness: r = 0.62 (0.55–0.68), with d.f. = 324 for each), we used species richness as a single, general proxy for those variables in our models. Results using these diversity indices did not differ qualitatively from those presented in the main text using species richness (Supplementary Fig. 5), suggesting that fertilization modulates diversity effects largely through species richness. Following theoretical models15,16, we quantified abundance-based gamma diversity as the inverse Simpson index over the three subplots for each treatment at each site and abundance-based beta diversity as the multiplicative partitioning of abundance-based gamma diversity: abundance-based beta diversity equals abundance-based gamma diversity divided by Simpson (the average local inverse Simpson index)28,42, resulting in a single beta diversity value per treatment and site.
We used the abundance-based beta diversity index because it is directly linked to ecosystem stability in theoretical models15,16, and is thus directly comparable to theory. We used the R functions “diversity”, “specnumber”, and “vegdist” from the vegan package43 to calculate the Shannon–Weaver, Simpson, and species richness indices within and across replicated plots.
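The index arithmetic described above can be sketched in a few lines of Python (invented cover values for three subplots; the actual analysis used the R functions from vegan):

```python
import numpy as np

def inverse_simpson(cover):
    """Inverse Simpson diversity from a vector of species cover."""
    p = np.asarray(cover, dtype=float)
    p = p[p > 0] / p[p > 0].sum()          # relative abundances
    return 1.0 / np.sum(p ** 2)

# Toy data: rows = 3 subplots (communities), columns = species cover.
site = np.array([
    [50.0, 30.0, 20.0,  0.0],
    [ 0.0, 40.0, 40.0, 20.0],
    [60.0,  0.0, 20.0, 20.0],
])

richness      = np.mean([(row > 0).sum() for row in site])   # mean local richness
alpha_simpson = np.mean([inverse_simpson(r) for r in site])  # "Simpson" above
gamma_simpson = inverse_simpson(site.sum(axis=0))            # pooled subplots
beta          = gamma_simpson / alpha_simpson                # multiplicative beta
```

Beta diversity exceeds 1 exactly when the subplots differ in composition; identical subplots give beta = 1.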
Stability at multiple scales was determined both without detrending and after detrending the data. For each species within communities, we detrended by fitting species-level linear models of percent cover over years and used the residuals from each regression as detrended standard deviations to calculate detrended stability17. Results using detrended stability did not differ qualitatively from those presented in the main text without detrending. Stability was defined as the temporal invariability of biomass (for alpha and gamma stability) or cover (for species stability and species asynchrony), calculated as the ratio of the temporal mean to its standard deviation14,17. Gamma stability represents the temporal invariability of the total biomass of the three plots with the same treatment, alpha stability represents the temporal invariability of community biomass averaged across the three plots per treatment and site, and species stability represents the temporal invariability of species cover averaged across all species and the three plots per treatment14. The mathematical formulas are:
$${\mathrm{Species}}\,{\mathrm{stability}} = \frac{{\sum _{i,k}m_{i,k}}}{{\sum _{i,k}\sqrt {w_{ii,kk}} }},$$
(1)
$${\mathrm{Alpha}}\,{\mathrm{stability}} = \frac{{\sum _k\mu _k}}{{\sum _k\sqrt {v_{kk}} }},$$
(2)
$${\mathrm{Gamma}}\,{\mathrm{stability}} = \frac{{\sum _k\mu _k}}{{\sqrt {\sum _{k,l}v_{kl}} }},$$
(3)
where $m_{i,k}$ and $w_{ii,kk}$ denote the temporal mean and variance of the cover of species $i$ in subplot $k$; $\mu_k$ and $v_{kk}$ denote the temporal mean and variance of community biomass in subplot $k$, and $v_{kl}$ denotes the covariance in community biomass between subplots $k$ and $l$. We then define species asynchrony as the variance-weighted correlation across species, and spatial asynchrony as the variance-weighted correlation across plots:
$${\mathrm{Species}}\,{\mathrm{asynchrony}} = \frac{{\sum _{i,k}\sqrt {w_{ii,kk}} }}{{\sum _k\sqrt {\sum _{ij,kl}w_{ij,kl}} }},$$
(4)
$${\mathrm{Spatial}}\,{\mathrm{asynchrony}} = \frac{{\sum _k\sqrt {v_{kk}} }}{{\sqrt {\sum _{k,l}v_{kl}} }},$$
(5)
where $w_{ij,kl}$ denotes the covariance in cover between species $i$ in subplot $k$ and species $j$ in subplot $l$.
These two asynchrony indices quantify the incoherence in the temporal dynamics of species cover and community biomass, respectively, and serve as scaling factors linking stability metrics across scales14 (Fig. 1). To improve normality, stability and asynchrony measures were logarithm-transformed before analyses. We used the R function “var.partition” to calculate asynchrony and stability across spatial scales14.
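As a rough numerical illustration of Eqs. (2), (3), and (5), the sketch below computes alpha stability, gamma stability, and spatial asynchrony from a toy years-by-subplots biomass matrix (invented data; the published analysis used the R function "var.partition"). The asynchrony term is exactly the scaling factor linking the two stability metrics:

```python
import numpy as np

def stability_partition(biomass):
    """Alpha stability, gamma stability, and spatial asynchrony
    (Eqs. 2, 3, and 5) from a (years x subplots) biomass matrix."""
    mu = biomass.mean(axis=0)              # temporal mean per subplot
    C = np.cov(biomass, rowvar=False)      # subplot covariance matrix over years
    sd = np.sqrt(np.diag(C))               # temporal SD per subplot
    alpha = mu.sum() / sd.sum()                   # Eq. (2)
    gamma = mu.sum() / np.sqrt(C.sum())           # Eq. (3)
    spatial_asyn = sd.sum() / np.sqrt(C.sum())    # Eq. (5)
    return alpha, gamma, spatial_asyn

rng = np.random.default_rng(0)
B = rng.uniform(50.0, 150.0, size=(6, 3))  # 6 years x 3 subplots, toy values
alpha, gamma, phi = stability_partition(B)
# gamma == alpha * phi: asynchrony scales local stability up to the larger scale
```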
### Climate data
Precipitation and temperature seasonality were estimated for each site, using the long-term coefficient of variation of precipitation (MAP_VAR) and temperature (MAT_VAR), respectively, derived from the WorldClim Global Climate database (version 1.4; http://www.worldclim.org/)44.
### Analyses
All analyses were conducted in R 4.0.2 (ref. 45) with N = 42 for each analysis unless specified. First, we used analysis of variance to determine the effect of fertilization, and period of experimental duration on biodiversity and stability at the two scales investigated. Models including an autocorrelation structure with a first-order autoregressive model (AR(1)), where observations are expected to be correlated from 1 year to the next, gave substantial improvement in model fit when compared with models lacking autocorrelation structure. Second, we used bivariate analyses and linear models to test the effect of fertilization and period of experimental duration on biodiversity–stability relationships at the two scales investigated. Again, models including an autocorrelation structure gave substantial improvement in model fit (Supplementary Table 1)46,47,48. We ran similar models based on nutrient-induced changes in diversity, stability, and asynchrony. For each site, relative changes in biodiversity, stability, and asynchrony at the two scales considered were calculated, as the natural logarithm of the ratio between the variable in the fertilized and unmanipulated plots (Supplementary Fig. 9). Because plant diversity, asynchronous dynamics, and temporal stability may be jointly controlled by interannual climate variability22, we ran similar analyses on the residuals of models that included the coefficient of variation among years for each of temperature and precipitation. Results of our analyses controlling for interannual climate variability did not differ qualitatively from the results presented in the text (Supplementary Fig. 4). In addition, to test for temporal trends in stability and diversity responses to fertilization, we used data on overlapping intervals of four consecutive years. Results of our analyses using temporal trends did not differ qualitatively from the results presented in the text (Supplementary Fig. 6). Inference was based on 95% CIs.
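The relative-change metric used here can be written as a one-liner; a minimal sketch with illustrative numbers (not values from the study):

```python
import numpy as np

def log_ratio(fertilized, control):
    """Relative change under NPK+ vs. control as ln(fertilized / control);
    values below zero indicate a decline under fertilization."""
    return np.log(np.asarray(fertilized, dtype=float) /
                  np.asarray(control, dtype=float))

# e.g. a hypothetical alpha stability of 4.2 under NPK+ vs. 6.0 in control:
change = log_ratio(4.2, 6.0)   # negative: stability declined under fertilization
```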
Third, we used SEM29 with linear models to evaluate multiple hypotheses related to key predictions from theories (Table 1). The path model shown in Fig. 1e was evaluated for each treatment (control and fertilized), and we ran separate SEMs for each period of experimental duration (from 4 to 9 years of duration). We generated a summary SEM by performing a meta-analysis of the standardized coefficients across all durations for each treatment. We then tested whether the path coefficients for each model differed by treatment by testing for a model-wide interaction with the “treatment” factor. A significant interaction for a given path implied that effects of one variable on the other differed between fertilized and unfertilized treatments. We used the R function “psem” to fit separate piecewise SEMs49 for each duration and combined the path coefficients from those models using the “metagen” function50.
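The per-duration path coefficients are combined by inverse-variance weighting; below is a hedged sketch of that pooling step in Python, with hypothetical coefficients and standard errors (the actual values come from the per-duration SEMs, pooled in R with "metagen"):

```python
import numpy as np

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooling of one SEM path coefficient
    estimated separately for each experimental duration."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(std_errors, dtype=float) ** 2   # inverse-variance weights
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci95 = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci95

# Hypothetical richness -> species asynchrony coefficients, durations 4..9 y:
coefs = [0.42, 0.38, 0.40, 0.35, 0.41, 0.37]
ses   = [0.08, 0.08, 0.09, 0.10, 0.11, 0.12]
pooled, se, ci = fixed_effect_meta(coefs, ses)
```

The pooled standard error is always smaller than any single-duration standard error, which is why the summary SEM gives tighter coefficients than any one duration alone.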
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
## Data availability
The data that support the findings of this study are available via GitHub (https://github.com/YannHautier/NutNetStabilityScaleUp). Data sources are provided with this paper. WorldClim global climate database is freely available through the World Data Center for Climate (WDCC; cera-www.dkrz.de), as well as through the CCAFS-Climate data portal (http://ccafs-climate.org).
## Code availability
R code of all analyses are available via https://github.com/YannHautier/NutNetStabilityScaleUp.
## References
1. Erisman, J. W. et al. Consequences of human modification of the global nitrogen cycle. Philos. Trans. R. Soc. B Biol. Sci. 368, 20130116 (2013).
2. Galloway, J. N. The global nitrogen cycle: past, present and future. Sci. China Ser. C. Life Sci. 48, 669–677 (2005).
3. Tilman, D. et al. Forecasting agriculturally driven global environmental change. Science 292, 281–284 (2001).
4. Hautier, Y. et al. Eutrophication weakens stabilizing effects of diversity in natural grasslands. Nature 508, 521–525 (2014).
5. Xu, Z. W. et al. Environmental changes drive the temporal stability of semi-arid natural grasslands through altering species asynchrony. J. Ecol. 103, 1308–1316 (2015).
6. Zhang, Y. H. et al. Nitrogen enrichment weakens ecosystem stability through decreased species asynchrony and population stability in a temperate grassland. Glob. Change Biol. 22, 1445–1455 (2016).
7. Harpole, W. S. et al. Addition of multiple limiting resources reduces grassland diversity. Nature 537, 93–96 (2016).
8. Thibaut, L. M. & Connolly, S. R. Understanding diversity-stability relationships: towards a unified model of portfolio effects. Ecol. Lett. 16, 140–150 (2013).
9. Hautier, Y. et al. Anthropogenic environmental changes affect ecosystem stability via biodiversity. Science 348, 336–340 (2015).
10. Koerner, S. E. et al. Nutrient additions cause divergence of tallgrass prairie plant communities resulting in loss of ecosystem stability. J. Ecol. 104, 1478–1487 (2016).
11. Yang, H. J. et al. Diversity-dependent stability under mowing and nutrient addition: evidence from a 7-year grassland experiment. Ecol. Lett. 15, 619–626 (2012).
12. Millennium Ecosystem Assessment. Ecosystems and Human Well-being: Synthesis (Island Press, 2005).
13. Shukla, P. R. et al. in Climate Change and Land: an IPCC Special Report on Climate Change, Desertification, Land Degradation, Sustainable Land Management, Food Security, and Greenhouse Gas Fluxes in Terrestrial Ecosystems (IPCC, 2019).
14. Wang, S. P., Lamy, T., Hallett, L. M. & Loreau, M. Stability and synchrony across ecological hierarchies in heterogeneous metacommunities: linking theory to data. Ecography 42, 1200–1211 (2019).
15. Wang, S. P. & Loreau, M. Ecosystem stability in space: alpha, beta and gamma variability. Ecol. Lett. 17, 891–901 (2014).
16. Wang, S. P. & Loreau, M. Biodiversity and ecosystem stability across scales in metacommunities. Ecol. Lett. 19, 510–518 (2016).
17. Tilman, D., Reich, P. B. & Knops, J. M. H. Biodiversity and ecosystem stability in a decade-long grassland experiment. Nature 441, 629–632 (2006).
18. Loreau, M., Mouquet, N. & Gonzalez, A. Biodiversity as spatial insurance in heterogeneous landscapes. Proc. Natl Acad. Sci. USA 100, 12765–12770 (2003).
19. Lamy, T. et al. Species insurance trumps spatial insurance in stabilizing biomass of a marine macroalgal metacommunity. Ecology 100, e02719 (2019).
20. Loreau, M. & de Mazancourt, C. Biodiversity and ecosystem stability: a synthesis of underlying mechanisms. Ecol. Lett. 16, 106–115 (2013).
21. Wilcox, K. R. et al. Asynchrony among local communities stabilises ecosystem function of metacommunities. Ecol. Lett. 20, 1534–1545 (2017).
22. Gilbert, B. et al. Climate and local environment structure asynchrony and the stability of primary production in grasslands. Glob. Ecol. Biogeogr. 29, 1177–1188 (2020).
23. Zhang, Y., Loreau, M., He, N., Zhang, G. & Han, X. Mowing exacerbates the loss of ecosystem stability under nitrogen enrichment in a temperate grassland. Funct. Ecol. 31, 1637–1646 (2017).
24. Hector, A. et al. General stabilizing effects of plant diversity on grassland productivity through population asynchrony and overyielding. Ecology 91, 2213–2220 (2010).
25. Mori, A. S., Isbell, F. & Seidl, R. β-diversity, community assembly, and ecosystem functioning. Trends Ecol. Evol. 33, 549–564 (2018).
26. Zhang, Y. H. et al. Nitrogen addition does not reduce the role of spatial asynchrony in stabilising grassland communities. Ecol. Lett. 22, 563–571 (2019).
27. Borer, E. T. et al. Finding generality in ecology: a model for globally distributed experiments. Methods Ecol. Evol. 5, 63–73 (2013).
28. Whittaker, R. H. Evolution and measurement of species diversity. Taxon 21, 213–225 (1972).
29. Grace, J. B. et al. Guidelines for a graph-theoretic implementation of structural equation modeling. Ecosphere 3, art73 (2012).
30. Bai, Y., Han, X., Wu, J., Chen, Z. & Li, L. Ecosystem stability and compensatory effects in the Inner Mongolia grassland. Nature 431, 181–184 (2004).
31. Tilman, D. Biodiversity: population versus ecosystem stability. Ecology 77, 350–363 (1996).
32. Polley, H. W., Isbell, F. I. & Wilsey, B. J. Plant functional traits improve diversity-based predictions of temporal stability of grassland productivity. Oikos 122, 1275–1282 (2013).
33. Majekova, M., de Bello, F., Dolezal, J. & Leps, J. Plant functional traits as determinants of population stability. Ecology 95, 2369–2374 (2014).
34. Isbell, F. et al. Nutrient enrichment, biodiversity loss, and consequent declines in ecosystem productivity. Proc. Natl Acad. Sci. USA 110, 11911–11916 (2013).
35. de Mazancourt, C. et al. Predicting ecosystem stability from community composition and biodiversity. Ecol. Lett. 16, 617–625 (2013).
36. Oesterheld, M. & McNaughton, S. J. Herbivory in terrestrial ecosystems. in Methods in Ecosystem Science (eds Sala, O. E., Jackson, R. B., Mooney, H. A. & Howarth, R. W.) 151–157 (Springer, New York, 2000).
37. Tuomisto, H. An updated consumer’s guide to evenness and related indices. Oikos 121, 1203–1218 (2012).
38. Jost, L. et al. Partitioning diversity for conservation analyses. Diversity Distrib. 16, 65–76 (2010).
39. Shannon, C. E. A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423 (1948).
40. Pielou, E. C. The measurement of diversity in different types of biological collections. J. Theor. Biol. 13, 131–144 (1966).
41. Simpson, E. H. Measurement of diversity. Nature 163, 688 (1949).
42. Olszewski, T. D. A unified mathematical framework for the measurement of richness and evenness within and among multiple communities. Oikos 104, 377–387 (2004).
43. Dixon, P. VEGAN, a package of R functions for community ecology. J. Veg. Sci. 14, 927–930 (2003).
44. Hijmans, R. J., Cameron, S. E., Parra, J. L., Jones, P. G. & Jarvis, A. Very high resolution interpolated climate surfaces for global land areas. Int. J. Climatol. 25, 1965–1978 (2005).
45. R Core Team. R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, Vienna, Austria) http://www.R-project.org/ (2013).
46. Pinheiro, J. C. & Bates, D. M. Mixed-Effects Models in S and S-Plus (Springer, New York, 2000).
47. Trikalinos, T. A. & Olkin, I. Meta-analysis of effect sizes reported at multiple time points: a multivariate approach. Clin. Trials 9, 610–620 (2012).
48. Hedges, L. V. & Olkin, I. Statistical Methods for Meta-analysis (Academic, 1985).
49. Lefcheck, J. S. piecewiseSEM: piecewise structural equation modelling in R for ecology, evolution, and systematics. Methods Ecol. Evol. 7, 573–579 (2016).
50. Viechtbauer, W. Conducting meta-analyses in R with the metafor package. J. Stat. Softw. 36, 1–48 (2010).
51. Loreau, M. & de Mazancourt, C. Species synchrony and its drivers: neutral and nonneutral community dynamics in fluctuating environments. Am. Nat. 172, E48–E66 (2008).
## Acknowledgements
The research leading to these results has received funding from the European Union’s Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 298935 to Y.H. (with A.H. and E.W.S.). This work was generated using data from the Nutrient Network collaborative experiment, funded at the site scale by individual researchers and coordinated through Research Coordination Network funding from NSF to E.B. and E.W.S. (grant #DEB-0741952). Nitrogen fertilizer was donated to the Nutrient Network by Crop Production Services, Loveland, CO. We acknowledge support from the LTER Network Communications Office and DEB-1545288. M.L. was supported by the TULIP Laboratory of Excellence (ANR-10-LABX-41), and by the BIOSTASES Advanced Grant funded by the European Research Council under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 666971). S.W. was supported by the National Natural Science Foundation of China (31988102). We thank Rita S. L. Veiga and George A. Kowalchuk for suggestions that improved the manuscript.
## Author information
Authors
### Contributions
Y.H., P.Z., K.R.W., M.L., and S.W. developed and framed research questions. Y.H. and S.W. analyzed the data with help from P.Z., K.R.W., E.W.S., J.E.K.B., S.E.K., K.J.K., and J.S.L. Y.H. wrote the paper with contributions and input from all authors. E.W.S. and E.T.B. are Nutrient Network coordinators. The author contribution matrix is provided as Supplementary Data 2.
### Corresponding authors
Correspondence to Yann Hautier or Shaopeng Wang.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Hautier, Y., Zhang, P., Loreau, M. et al. General destabilizing effects of eutrophication on grassland productivity at multiple spatial scales. Nat Commun 11, 5375 (2020). https://doi.org/10.1038/s41467-020-19252-4
• Accepted:
• Published:
http://www.physicsforums.com/showthread.php?p=4218193

# Atomic Excitation transition energy
by Sekonda
Tags: atomic, energy, excitation, transition
P: 209 Hey,

Here's the question. I wanted to check whether my thought process, and thus my formulation, was correct. Some sodium atom is initially excited, emits an EM wave with the stated λ, and then de-excites to the ground state. However, this is sodium, so I think we approximate the energy levels using the equation:

$$E=-Z^{2}\frac{13.61\,\mathrm{eV}}{(n-\delta_{i})^{2}}$$

Such that we have

$$E_{initial}-E_{ground}=E_{wave}$$

which is

$$E_{initial}=\frac{hc}{\lambda}-Z^{2}\frac{13.61}{(3-1.35)^{2}}$$

For Z = 11, and since the ground state of sodium is written $1s^{2}2s^{2}2p^{6}3s^{1}$, I used n = 3 and δi = δs. I got the initial state energy as −600 eV, though I have no idea if I've done this correctly.
Mentor P: 11,580 I think you should not mix so many different calculation parts here: - Which energy difference does the transition have (in eV)? - What do the energy levels for s, p, d with n=3 look like? Do you see the calculated difference between two levels here? With the quantum defects reducing the effective n, I think you should work with the shielded charge for each shell instead of the full charge of the nucleus. The photon has an energy of a few eV, I would not expect any energy level of -600 eV involved.
P: 209 The difference in energy between the excited (initial) and ground (final) state I believe is just the energy associated with the emission which is about 4.8eV using hc/λ Though I'm unsure how to determine the ground state of Sodium, I have the formula: $$E=-\frac{13.61}{(n-\delta_{i})^{2}}$$ Which is true for Hydrogen, for an atom of proton number Z I think we just multiply this value by Z^2 (provided we neglect electron-electron interactions). Though I'm unsure how to use the above equation to attain the ground state of Sodium with Z=11.
Mentor
P: 11,580
Atomic Excitation transition energy
(provided we neglect electron-electron interactions)
Well, you cannot neglect them for sodium.
As first approximation, you can consider all electrons in "lower" shells and neglect electrons in the same shell. For sodium, this gives 11 protons and 10 electrons to consider, so the remaining charge is 1.
The inner electrons do not provide a perfect shielding, and this leads to the quantum defects numbers, which are just a numerical way to take this into account.
P: 209 Oh yeah, I was thinking that was a stupid/self-contradictory thing to say... Still confused on how to compute the ground state energy of Sodium - I wasn't sure if I have to consider each orbital of s and p in the electronic configuration.
P: 209 Right, So I think I'm right in saying that the excited sodium atom state is equal to the sum of the photon and the ground state of sodium: $$E_{excited}=E_{initial}+E_{photon}$$ The photon energy is given by $$E_{photon}=\frac{hc}{\lambda}$$ and is about 4.8eV The photon energy is equal to the difference in the ground state and excited state energies of sodium. The groundstate configuration of sodium is $$1s^{2}2s^{2}2p^{6}3s$$ and so the energy of the electron in the 3s orbital is given by $$E_{3s}=\frac{-13.6}{(3-1.35)^{2}}$$ So all we need to take is the energy difference between the electron orbitals that are changing? Though I'm not sure if I'm correct in saying only one electron changes state during this transition?
Mentor
P: 11,580
So all we need to take is the energy difference between the electron orbitals that are changing?
Right.
You have the ground-state energy there, calculate the energy for p and d and find a difference of 4.8 eV.
Though I'm not sure if I'm correct in saying only one electron changes state during this transition?
That is correct.
P: 209 Marvellous, thanks for confirming this mfb - despite probably stating it before. I'm a bit "slow". Cheers, SK
P: 209 I think I am doing something wrong as I use this conservation of energy equation: $$-\frac{13.6eV}{(n-\delta_{i})^{2}}+\frac{13.6eV}{(3-1.35)^{2}}=4.8eV$$ The first term being the excited orbital energy, second term the ground state orbital energy and final term the energy of the photon. Rearranging I find that $$n-\delta_{i}=8.47$$ And neither the quantum defects for p or d give an integer 'n' or even close. Do you know what I am doing wrong? Thanks, SK
Mentor P: 11,580 Hmm, interesting. Looks like you need a different n, with an unknown quantum defect. I don't know.
P: 209 Maybe we just approximate due to using approximations in h-bar and 'c', as well as the rydberg constant. Perhaps it is just a 9s orbital -considering this : http://hyperphysics.phy-astr.gsu.edu...um/sodium.html The only transition to the ground state capable of making an emission of 4.8eV is n>7, so perhaps what I have done is correct. But I prefer to doubt myself! Thanks, SK
Mentor P: 11,580 A cross-check with this database: It does not have any spectral line with 258.67nm with the ground state as lower state. The closest one is 259.383 (or 259.393) which corresponds to 7p -> 3s. It has a line with 258.631nm, but I am not sure how to interpret the notation of the excited state, and the lower state is not the ground state.
P: 209 Maybe I'm missing out something, or maybe the question hasn't been thought out - though I find this unlikely and much more likely I've made an error - though I'm struggling to see where. Oh well, can only hope a question like this doesn't pop up again. Let me know if you find the problem and thanks for walking me through this mfb. Thanks, SK
P: 209 Woops..... I got the signs the wrong way round, I was using the ground state as the initial state. Now I'm getting $$n-\delta_{i}=1.18$$ Which gives me, almost, n=2 for the 'p' quantum defect - though I'm pretty sure this doesn't make any sense.
Mentor P: 11,580 No, this time your sign is wrong, it was right before.
P: 209 Yeah I was correct the first time, just going mad. Upon using more accurate values I attain $$n-\delta_{i}=8.33$$ There still is no delta for this, well at least given - I'm guessing I've made a mistake elsewhere, or it's a bad question or we just round it, but I reckon the former.
Quote by Sekonda Upon using more accurate values I attain $$n-\delta_{i}=8.33$$
Then you are still not using values which are accurate enough. Using the values given in the exercise I get $$n-\delta_{i}=8.1286$$ which is a rather good match.
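The arithmetic in the thread can be checked directly. Below is a sketch (the wavelength 258.67 nm is taken from the cross-check post above; hc = 1239.84 eV·nm, a Rydberg energy of 13.6 eV, and the quantum defect 1.35 are assumed textbook values, and, as the thread notes, the second decimal of the result shifts with the precision of these constants):

```python
# Effective quantum number (n - delta_i) of the excited sodium level
# discussed above. Assumed inputs, not all stated explicitly in the thread:
HC = 1239.84        # eV * nm
RY = 13.6           # eV, Rydberg energy
LAM = 258.67        # nm, photon wavelength from the exercise
DELTA_S = 1.35      # quantum defect of the 3s ground state

E_photon = HC / LAM                      # photon energy, roughly 4.79 eV
E_ground = -RY / (3 - DELTA_S) ** 2      # energy of the 3s level
E_excited = E_ground + E_photon          # conservation of energy

n_eff = (RY / -E_excited) ** 0.5         # n - delta_i of the excited level, near 8.2
print(round(E_photon, 3), round(n_eff, 3))
```

With these inputs the effective quantum number lands near 8.2; the thread's values 8.1286, 8.33 and 8.47 all come from this same computation with slightly different constants and sign bookkeeping.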
https://brilliant.org/discussions/thread/4th-power-is-1-mod-p/
# 4th power is -1 mod p
Let $$p$$ be an odd prime such that there exists an integer $$x$$ satisfying $x^4\equiv -1\pmod{p}$
Prove that $$p\equiv 1\pmod{8}$$
Source: Classic
Note by Daniel Liu
1 year, 1 month ago
If $$x$$ satisfies $$x^4\equiv -1 \pmod p$$, then we claim that $$8$$ is the order of $$x$$; since the order of $$x$$ divides $$p-1$$ by Fermat's little theorem, this will imply $$8\mid p-1$$.
Suppose the order is not $$8$$. Then by properties of order it must be a proper divisor of $$8$$, i.e. $$1$$, $$2$$, or $$4$$, and in every case $$x^4\equiv 1 \pmod p$$. This is absurd, since together with $$x^4\equiv -1 \pmod p$$ it would give $$1\equiv -1\pmod p$$, impossible for odd $$p$$.
This can clearly be generalized for any powers of 2. · 1 year, 1 month ago
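A brute-force check of the claim (a sketch; it scans every residue for each odd prime below 500, and also confirms the converse, which holds because $$(\mathbb{Z}/p)^{\times}$$ is cyclic):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def has_fourth_root_of_minus_one(p):
    # Does x^4 = -1 (mod p) have a solution? (p - 1 represents -1 for odd p)
    return any(pow(x, 4, p) == p - 1 for x in range(1, p))

# Every odd prime p < 500 admitting such an x satisfies p = 1 (mod 8), and
# conversely every p = 1 (mod 8) admits one: the multiplicative group mod p
# is cyclic of order p - 1, so it contains an element of order 8, whose
# fourth power has order 2 and hence equals -1.
for p in filter(is_prime, range(3, 500)):
    assert has_fourth_root_of_minus_one(p) == (p % 8 == 1)
```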
https://forum.math.toronto.edu/index.php?PHPSESSID=t69ac6l186o33gmkdulkku6fa1&topic=182.msg1040 | ### Author Topic: Comparison of 9th and 10th textbook editions (Read 29123 times)
#### Brian Bi
• Moderator
• Full Member
• Posts: 31
• Karma: 13
##### Comparison of 9th and 10th textbook editions
« on: January 06, 2013, 05:43:03 PM »
Hi all,
I'm looking for a volunteer---someone who has already bought the 10th edition of the textbook, or will do so shortly---who will meet me weekly, or bi-weekly, or however often problem sets are posted, so we can compare problem numbers between the 10th edition and the 9th edition (which I already have).
If I am able to get a volunteer, I will post corresponding problem numbers in this thread after each time we meet. This way, other students who already have the 9th edition, or can obtain it much more cheaply than the 10th, won't have to buy another edition of the textbook just so they can do the problems.
I'm sure people would give you karma for it (which means bonus marks apparently)
Thanks!
« Last Edit: January 06, 2013, 05:52:56 PM by Brian Bi »
#### Victor Ivrii
• Elder Member
• Posts: 2563
• Karma: 0
##### Re: Comparison of 9th and 10th textbook editions
« Reply #1 on: January 06, 2013, 06:56:43 PM »
I'm sure people would give you karma for it (which means bonus marks apparently)
Thanks!
Well, commoners (people) cannot give karma in this forum as it translates to the bonus mark. I can--and will.
#### Yanyuan Jing
• Newbie
• Posts: 4
• Karma: 3
##### Re: Comparison of 9th and 10th textbook editions
« Reply #2 on: January 06, 2013, 09:13:30 PM »
I wouldn't mind helping, though I haven't bought the book yet. Also, I don't think it'll need to be a weekly thing. I think all the assigned questions have already been posted, and there aren't any evaluated problem sets, just suggested practice problems.
I'll private message you to set up meeting times?
Yan
#### Victor Ivrii
• Elder Member
• Posts: 2563
• Karma: 0
##### Re: Comparison of 9th and 10th textbook editions
« Reply #3 on: January 07, 2013, 01:09:07 AM »
I wouldn't mind helping, though I haven't bought the book yet. Also, I don't think it'll need to be a weekly thing. I think all the assigned questions have already been posted, and there aren't any evaluated problem sets, just suggested practice problems.
I'll private message you to set up meeting times?
Yan
Some problems could be added (I will mark them by a special colour) so it is not one time thing but not every week either
Nice avatar!
#### Victor Ivrii
• Elder Member
• Posts: 2563
• Karma: 0
##### Re: Comparison of 9th and 10th textbook editions
« Reply #4 on: January 07, 2013, 04:30:14 AM »
Let me clarify this issue once and for all. Officially you use 10th edition. Neither 9th, nor 8th, etc.
Content however is only marginally different. Main issue here is home assignments. Usually publishers when preparing new edition shuffle problems like card sharpers to discourage usage of the old edition because they want bigger sales. In this case there are some discrepancies, not very significant. Therefore using an older edition you can end up solving the wrong problem.
However you do not submit home assignments, they are not graded at all, but you are given quizzes drawn from the problems in the home assignments. So if you solved the right problem, you would solve during the quiz exactly same problem. If you solved the wrong problem, you would solve during the quiz similar problem, which makes a difference.
Yes, we posted online required problems for sections 1.1-2.2, but it was done only because textbook has not arrived to bookstore yet, so we were dealing with a problem of not our making. However it is time consuming. Creation of comparison table is also time consuming and error prone. So, you should not expect instructors to be involved in this. if there are volunteers--go ahead! (IMHO, if there are two independent pairs it would be more reliable).
PS. I have no idea how much the textbooks authors profit. AFAIK none of them is on the Forbes list.
#### Brian Bi
• Moderator
• Full Member
• Posts: 31
• Karma: 13
##### Re: Comparison of 9th and 10th textbook editions
« Reply #5 on: January 07, 2013, 10:16:05 AM »
I wouldn't mind helping, though I haven't bought the book yet. Also, I don't think it'll need to be a weekly thing. I think all the assigned questions have already been posted, and there aren't any evaluated problem sets, just suggested practice problems.
I'll private message you to set up meeting times?
Yan
Yes, do that.
#### Jason Hamilton
• Jr. Member
• Posts: 14
• Karma: 8
##### Re: Comparison of 9th and 10th textbook editions
« Reply #6 on: January 11, 2013, 12:00:56 AM »
I've noticed for almost all the questions so far, the 10th edition and the 9th edition are identical in the questions, no "shuffling" has taken place. However I only checked the assigned questions up to chapter 2.2
#### Jihyun Bang
• Newbie
• Posts: 2
• Karma: 0
##### Re: Comparison of 9th and 10th textbook editions
« Reply #7 on: January 18, 2013, 11:27:09 AM »
Can somebody check if the rest of chapter 2
(from 2.2 till the end) has the same questions between 9th and 10th edition ?
thanks for ur help
#### Victor Ivrii
• Elder Member
• Posts: 2563
• Karma: 0
##### Re: Comparison of 9th and 10th textbook editions
« Reply #8 on: January 18, 2013, 11:44:15 AM »
Can somebody check if the rest of chapter 2
(from 2.2 till the end) has the same questions between 9th and 10th edition ?
thanks for ur help
Why anyone needs Chapter 2? You can download Ch 10th edition legally. And please, change Name to a proper one
#### Brian Bi
• Moderator
• Full Member
• Posts: 31
• Karma: 13
##### Re: Comparison of 9th and 10th textbook editions
« Reply #9 on: January 22, 2013, 04:18:33 PM »
Yanyuan and I compared problem numbers in chapters 1 through 4 today. We found that:
• Most questions and question numbers are identical.
• A few question numbers are different. Yanyuan will post these shortly.
• There are some minor stylistic differences, such as writing equations in the form $$M(x,y) + N(x,y)y' = 0$$ instead of $$M(x,y)\, dx + N(x,y)\, dy = 0$$
There is one problem whose text was actually changed between the two editions. In section 2.3, problem 12 in the 10th edition is similar to problem 11 in the 9th edition. The 9th edition reads:
Quote from: 9th edition
11. A recent college graduate borrows $100,000 at an interest rate of 9% to purchase a condominium. Anticipating steady salary increases, the buyer expects to make payments at a monthly rate of 800(1 + t/120), where t is the number of months since the loan was made. (a) Assuming that this payment schedule can be maintained, when will the loan be fully paid? (b) Assuming the same payment schedule, how large a loan could be paid off in exactly 20 years?
The 10th reads:
Quote from: 10th edition
12. A recent college graduate borrows $150,000 at an interest rate of 6% to purchase a condominium.
Anticipating steady salary increases, the buyer expects to make payments at a
monthly rate of 800 + 10t, where t is the number of months since the loan was made.
(a) Assuming that this payment schedule can be maintained, when will the loan be fully
paid?
(b) Assuming the same payment schedule, how large a loan could be paid off in exactly
20 years?
A comparison of chapters 7, 9, 5, and 6 will be posted next week.
« Last Edit: January 22, 2013, 04:38:21 PM by Victor Ivrii »
#### Jason Hamilton
• Jr. Member
• Posts: 14
• Karma: 8
##### Re: Comparison of 9th and 10th textbook editions
« Reply #10 on: January 22, 2013, 11:06:46 PM »
Thanks a lot Brian and yanyuan!! I'm surprised how little effort the publisher put into "updating" the new edition haha
#### Yanyuan Jing
• Newbie
• Posts: 4
• Karma: 3
##### Re: Comparison of 9th and 10th textbook editions
« Reply #11 on: January 23, 2013, 12:11:33 AM »
Hey guys,
*EDIT*
Here are the differences in the homework questions for those using the 9th Edition. Since there aren't that many changes, I won't bother retyping all the individual question numbers, so I guess you can all refer to this page for specifics: http://www.math.toronto.edu/courses/mat244h1/20131/homeassignments.html
PLEASE NOTE: We didn't check ALL the questions in the textbook, only the suggested ones listed in the above link.
Sections 1.1-2.2: same
Section 2.3 had one difference, which Brian has already posted (see 2 posts above)
Sections 2.4-3.3: same.
Section 3.4: 6, 14, 22, 27, 43
Section 3.5: 2, 12, 13, 22, 25
Section 3.6-4.2: same
Section 4.3: #19 does not have any parts (it doesn't have parts in the 10th edition either...)
Section 4.4: #1, the restriction is $$-\pi/2 < t < \pi/2$$
Brian and I are meeting up next week to do the rest of the suggested problems. I'll update this post when we do!
« Last Edit: January 23, 2013, 02:14:23 PM by Yanyuan Jing »
#### Victor Ivrii
• Elder Member
• Posts: 2563
• Karma: 0
##### Re: Comparison of 9th and 10th textbook editions
« Reply #12 on: January 23, 2013, 07:51:02 AM »
Good job, Brian and Yanyuan. I think you need to state explicitly that you are comparing only problems given as a home work, not all problems in general (may be I am mistaken).
#### Jingwei Chen
• Newbie
• Posts: 1
• Karma: 0
##### Re: Comparison of 9th and 10th textbook editions
« Reply #13 on: January 23, 2013, 09:46:52 AM »
Great job! Thank you so much guys!
#### Yanyuan Jing
• Newbie
• Posts: 4
• Karma: 3
##### Re: Comparison of 9th and 10th textbook editions
« Reply #14 on: January 23, 2013, 02:16:06 PM »
Good job, Brian and Yanyuan. I think you need to state explicitly that you are comparing only problems given as a home work, not all problems in general (may be I am mistaken).
Thanks for the suggestion, Dr. Ivrii. I added a clarification in my previous post.
https://planetmath.org/compassandstraightedgeconstructionofparallelline | # compass and straightedge construction of parallel line
Task. Construct the line parallel to a given line $\ell$ and passing through a given point $P$ which is not on $\ell$.
Solution.
1. Draw a circle $c_{1}$ with center $P$ and intersecting $\ell$ at two points, one of which is $A$.
2. Draw a second circle $c_{2}$ with center $A$ and the same radius $r$ as $c_{1}$. This circle also intersects $\ell$ at two points, one of which is $B$.
3. Draw a third circle $c_{3}$ with center $B$ and radius $r$. Let $C$ be the intersection point of $c_{3}$ (drawn below in red) with $c_{1}$ (drawn below in green) which lies on the same side of $\ell$ as $P$ does. The line $PC$ (drawn below in blue) is the required parallel to $\ell$.
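The three steps can also be verified numerically. Below is a sketch in which all concrete choices are illustrative assumptions: $\ell$ is taken as the $x$-axis, $P=(0,1)$, radius $r=2$, and the helper implements the standard equal-radius circle-circle intersection.

```python
import math

def circle_intersections(c1, c2, r):
    # Intersection points of two circles of equal radius r, centers c1 and c2.
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    a = d / 2                                # equal radii: chord midpoint is halfway
    h = math.sqrt(r * r - a * a)             # half-length of the common chord
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    ux, uy = (x2 - x1) / d, (y2 - y1) / d    # unit vector from c1 toward c2
    return [(mx - h * uy, my + h * ux), (mx + h * uy, my - h * ux)]

r = 2.0
P = (0.0, 1.0)
A = (math.sqrt(r * r - P[1] ** 2), 0.0)      # step 1: c1 meets the x-axis
B = (A[0] + r, 0.0)                          # step 2: c2 meets the x-axis
# Step 3: intersection of c3 (center B) with c1 (center P), same side as P.
C = max(circle_intersections(P, B, r), key=lambda q: q[1])

assert abs(C[1] - P[1]) < 1e-9               # PC is parallel to the x-axis, i.e. to l
assert abs(math.hypot(C[0] - B[0], C[1] - B[1]) - r) < 1e-9   # C lies on c3
```

With these choices the construction gives $C=(2,1)$ exactly, and the assertions confirm both that $C$ lies on $c_{3}$ and that $PC$ is parallel to $\ell$.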
Note 1. The construction is based on the fact that the quadrilateral $PABC$ is a parallelogram. In fact, $PABC$ is a rhombus. The reasoning is as follows:
• The green circle shows that $\overline{PC}$ and $\overline{PA}$ are congruent.
• The black circle shows that $\overline{PA}$ and $\overline{AB}$ are congruent.
• The red circle shows that $\overline{AB}$ and $\overline{BC}$ are congruent.
• Since $PABC$ is a quadrilateral with all sides congruent, it is a rhombus (and therefore a parallelogram).
Note 2. It is clear that the construction only needs the compass, not a straightedge: In determining the point $C$, the straightedge is totally superfluous, and the points $P$ and $C$ determine the desired line (which thus is not necessary to actually draw!). It may be proved that all constructions with compass and straightedge are possible using only the compass.
Note 3. Another construction of the parallel uses the fact that the endpoints of two congruent chords in a circle determine two parallel chords.
If you are interested in seeing the rules for compass and straightedge constructions, click on the link provided.
Title: compass and straightedge construction of parallel line
Canonical name: CompassAndStraightedgeConstructionOfParallelLine
Date of creation: 2013-03-22 17:11:18
Owner: pahio (2872)
Type: Algorithm
Classification: msc 51-00, msc 51M15
Synonyms: construction of parallel, construction of parallel line
Related topics: ParallelPostulate, NSectionOfLineSegmentWithCompassAndStraightedge
https://stats.stackexchange.com/questions/202770/binary-logistic-regression-with-multiple-independent-variables | # Binary logistic regression with multiple independent variables
I have a group of 196 patients. I want to know if infection (the outcome, or dependent variable) depends on other variables. I am running a binary logistic regression with 8 independent variables (age, gender, type of surgery—6 different types, type of fixation, type of antibiotics). The categorical variables are automatically put into dummies by SPSS.
Some of my categorical variables have low frequencies (<5).
Can I run a binary logistic regression? Are the results reliable?
Update:
I have no categories with 0 patients, only some with only 1 or 2 patients. So I ran the regression and SPSS gives me the output above. Can I say that TRTCD2 and QSORRES are statistically significant? And that the p value or 1 or almost 1 are due to the small frequencies in this group?
• I edited your question. I assume that you switched dependent (the variable you want to explain) and independent variables (the variables that do the explaining). Correct me if I am wrong. – Maarten Buis Mar 21 '16 at 10:33
• You can say it is significant based on the P values...but we usually like to check for multicollinearty and reduce the number of predictors before assessing significance. I would suggest providing more information about your hypotheses and predictors. You should also note that some people do not consider Wald tests to be reliable and if you have a particular hypothesis in mind, you might be better off comparing nested models using a likelihood ratio test. – coreydevinanderson Dec 16 '17 at 3:11
At the heart of binary logistic regression is the estimation of the probability of an event. As detailed in RMS Notes 10.2.3, the minimum sample size needed just to estimate the intercept in a logistic model is 96, and even that results in a not-so-great margin of error of +/- 0.1 in the estimated (constant) probability of the event. If you had a single binary predictor the minimum sample size is 96 per each of the levels. So your sample size is insufficient for the task at hand. Note that p-values do not help this situation in any way.
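The minimum sample size of 96 quoted above can be reproduced from the normal-approximation margin of error for an estimated proportion; the worst case $p=0.5$ maximizes $p(1-p)$. A quick sketch of the arithmetic (of the number itself, not of the RMS derivation):

```python
# Minimum n so that a proportion estimated at the worst case p = 0.5 has a
# 95% confidence margin of error no larger than m (normal approximation).
z = 1.96          # 0.975 normal quantile
m = 0.10          # desired margin of error
p = 0.5           # worst case: p * (1 - p) is maximized
n = (z / m) ** 2 * p * (1 - p)
print(round(n))   # ~96, matching the minimum sample size quoted above
```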
Let's start with the easy case: If an independent variable has 0 people in one category, that category can't add anything to the model as ... well, there is nothing to model.
When categories have small numbers (but not 0), the standard errors tend to be large. E.g.,
set.seed(123)
age <- rnorm(100, 25, 10)
catvar <- c("A", rep("B", 99))
depvar <- c(rep(0, 50), rep(1, 50))
mod1 <- glm(depvar~age + catvar, family = "binomial")
summary(mod1)  # note the huge standard error on the dummy for the single-observation category
• That seems logic. I have no categories with 0 patients, only some with only 1 or 2 patients. So I ran the regression and SPSS gives me the output above. Can I say that TRTCD2 and QSORRES are statistically significant. And that the p value or 1 or almost 1 are due to the small frequencies in this group? – erica Mar 21 '16 at 12:04
https://pos.sissa.it/336/077/ | Volume 336 - XIII Quark Confinement and the Hadron Spectrum (Confinement2018) - B: Light quarks
Chiral symmetry breaking corrections to the pseudoscalar pole contribution of the Hadronic Light-by-Light piece of $a_\mu$
A.E. Guevara Escalante*, P. Roig Garcés and J.J. Sanz-Cillero
Full text: pdf
Pre-published on: September 12, 2019
Published on: September 26, 2019
Abstract
We have studied the $P\to\gamma^\star\gamma^\star$ form factor in Resonance Chiral Theory, with $P = \pi^0, \eta, \eta'$, to compute the contribution of the pseudoscalar pole to the hadronic light-by-light piece of the anomalous magnetic moment of the muon. In this work we allow the leading $U(3)$ chiral symmetry breaking terms, obtaining the most general expression for the form factor up to $\mathcal{O}(m_P^2)$. The parameters of the Effective Field Theory are obtained by means of short distance constraints on the form factor and matching with the expected behavior from QCD. Those parameters that cannot be fixed in this way are fitted to experimental determinations of the form factor within the spacelike region. Chiral symmetry relations among the transition form factors for $\pi^0,\eta$ and $\eta'$ allow for a simultaneous fit to experimental data for the three mesons. This shows an inconsistency between the BaBar $\pi^0$ data and the rest of the experimental inputs. Thus, we find a total pseudoscalar pole contribution of $a_\mu^{P,HLbL}=(8.47\pm 0.16)\cdot 10^{-10}$ for our best fit (that neglecting the BaBar $\pi^0$ data). Also, a preliminary rough estimate of the impact of NLO in $1/N_C$ corrections and higher vector multiplets (asym) enlarges the uncertainty up to $a_\mu^{P,HLbL}=(8.47\pm 0.16_{\rm stat}\pm 0.09_{N_C}{}^{+0.5}_{-0.0\,\rm asym})\cdot 10^{-10}$.
DOI: https://doi.org/10.22323/1.336.0077
Open Access
https://space.stackexchange.com/questions/15518/how-to-construct-b-drag-term-in-tle | # How to construct $B^*$ drag term in TLE?
Is it possible to calculate $B^*$ term in TLE data using only a few or more velocity and position vectors for a satellite?
Dwight E. Andersen claims that it is an impossible task in his thesis "Computing Norad Mean Elements From a State Vector" (1994):
An extension of this [calculating other TLE elements than drag and mean motion] would be to compute, from just the state, the mean elements and the drag terms and B* in SGP4, SDP4, SGP8, and SDP8. Future research on satellite drag and methods of estimating the drag would be valuable. Since the drag term is a function of the physical geometry as well as atmospheric conditions, some aspects of the satellites' physical characteristics would be needed.
https://math.stackexchange.com/questions/1683951/the-three-coin-flip-riddle | # The three-coin-flip riddle
Is the following true? (It seems obvious to me that it's not... but a PhD in physics, Derek Abbott, seems to think otherwise; explanation at end of post):
Someone flips 3 coins on the table, which are then covered with paper so I can't see them. Two of the coins are on the left of the table, one coin is on the right. I go to the left of the table and slowly slide the paper back, eventually revealing the first of the two coins. If it's tails I just have the person re-flip, but if it's heads then I predict the other coin is tails, continue pulling the paper back, and supposedly reveal tails 66% of the time.
TED-Ed (a TED Talks division) recently published a video by physics PhD Derek Abbott called "The Frog Riddle" - you can watch it here: https://www.youtube.com/watch?v=cpwSGsb-rTs - and now I'm very confused; please tell me their video is just wrong.
I don't understand mathematically how they are arriving at their conclusion. If this were any other youtube video in my feed I would just wave it off as erroneous, but TED is a very large and very famous organization with lots of editors and is also notoriously intellectual. Also I was having a hard time finding many who disagreed in the youtube comments.
• You should stop drinking TED-flavoured Kool-aid and look up a thread on reddit (or here) that explains why the video's reasoning is wrong. In a nutshell, $MM$ is more likely (x2) to produce a croak as opposed to $MF$, and that makes the chances the same as 1 frog and zero croaks. – A.S. Mar 5 '16 at 9:10
• @A.S. +1, Sorry it's just that he is a PhD in physics so I assume I am wrong (as I did not even go to college (pursued business) haha). Just wanted to get multiple opinions with the scenario I posed. – Albert Renshaw Mar 5 '16 at 9:12
• As with the Monty-Hall paradox, there is a simple way to cast out any doubt: build the tree of possible outcomes, and count the number of branches. – Graffitics Mar 5 '16 at 9:12
• See reddit.com/r/askscience/comments/48br02/… and my answer: math.stackexchange.com/questions/1683658/the-frog-puzzle/… (see other comments as well). If you assume $\lambda t$ small, you'll get 1:1 odds of a female. – A.S. Mar 5 '16 at 9:12
• @Graffitics but the Monty Hall problem relies on a guaranteed pre-set of 2 goats and 1 car, and also the pre-knowledge (by the host) of which doors contain which objects. – Albert Renshaw Mar 5 '16 at 9:13
The coin scenario you describe differs from the frog scenario in the video in that you peek at a particular one of the coins and try to predict the other coin on that basis. You're right that this doesn't work and the probability for tails is still $\frac12$.
By contrast, in the video you hear a croak from the general direction of the two frogs, so you only know that one of them is male, but not which one. The video does make a bit of a mistake in assuming that one male and two males would have been equally likely to give a single croak while you were listening. However, if you did somehow get the information that at least one of the frogs is male in a manner that doesn't distinguish between the possibilities of one male or two males, then indeed the probability of there being one male would be $\frac23$.
• So if I was in the forest and I heard but did not see the croak it's $2/3$, but had I turned my head a few seconds earlier and also seen which croaked all of a sudden it's $1/2$? This doesn't make much sense to me, it seems as if real world objects can behave like quantum mechanics, just by observing you change another's odds? – Albert Renshaw Mar 5 '16 at 13:03
• @AlbertRenshaw, turning your head earlier doesn't change the probability. Also, surviving probability in the first case is $\frac{2}{3}$ only if male frog's croak probability when he sees you is $\frac{1}{2}$. Look at grand_chat's answer. – Alistair Mar 5 '16 at 14:48
• @AlbertRenshaw: No, you're talking about a single identifiable croak, and as I wrote that's not the assumption that leads to $\frac23$. It's $\frac23$ if you get precisely the information that at least one of the frogs is male. It's conceivable that you might get that information from croaking, e.g. if you listen long enough that you'd be sure to hear croaks if there's a male, but you don't know enough about the frequency of croaking to tell whether it's one or two males croaking. In that case turning your head and looking would allow you to identify both frogs correctly. – joriki Mar 5 '16 at 17:34
• I can't think of any reasonable non-quantum process (with assumption of independence btw frogs) generating croaks that won't make MM more likely to produce a croak than MF: math.stackexchange.com/questions/1683658/the-frog-puzzle/… for example. Hearing $1$ croak from $2$ frogs is like hearing $0$ croaks from $1$ frog. – A.S. Mar 5 '16 at 18:55
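The disagreement in this thread comes down to which event you condition on. A quick Monte-Carlo sketch (my own code, not from the thread; all names are mine) shows both numbers at once: conditioning on a *specific* peeked coin being heads gives about 1/2 for the other coin being tails, while conditioning only on "at least one of the two is heads" gives about 2/3.

```python
import random

def simulate(trials=200_000, seed=1):
    rng = random.Random(seed)
    # Scenario A: peek at the first coin; if it's heads, predict the second.
    a_hits = a_total = 0
    # Scenario B: only told "at least one of the two coins is heads".
    b_hits = b_total = 0
    for _ in range(trials):
        c1, c2 = rng.random() < 0.5, rng.random() < 0.5  # True = heads
        if c1:                      # the peeked coin is heads
            a_total += 1
            a_hits += (not c2)      # predict the other coin is tails
        if c1 or c2:                # at least one head, but we don't know which
            b_total += 1
            b_hits += (c1 != c2)    # exactly one head, i.e. the other is tails
    return a_hits / a_total, b_hits / b_total

p_specific, p_at_least_one = simulate()
print(p_specific, p_at_least_one)   # ≈ 0.5 and ≈ 0.667
```

So both answers in the thread are right about their own scenario; they just describe different conditioning events.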
https://mailman.ntg.nl/pipermail/ntg-context/2014/078469.html | # [NTG-context] hyperlinks within a PDF
Robert Zydenbos context at zydenbos.net
Mon Jun 16 03:11:09 CEST 2014
Forgive me for what must seem a beginner's question, but I really could not find the solution in the documentation or the Wiki:
How do I create hyperlinks within a PDF to another spot in the text of that same PDF? I had expected I could do something like:
https://math.stackexchange.com/questions/3684353/lipschitz-constant-of-continuous-and-piecewise-linear-functions | # Lipschitz constant of continuous and piecewise linear functions
I want to calculate the Lipschitz constant of a continuous and piecewise linear function $$f:[0,1]^2\rightarrow R$$, like this \begin{equation*} f(x_1,x_2)=\left\{ \begin{aligned} 2x_1+x_2, &\quad\text{if} \quad x_1+x_2\leq 1\\ x_1+1, &\quad\text{if} \quad x_1+x_2>1 \end{aligned} \right. \end{equation*} I guess it is equal to the greatest Lipschitz constant among all pieces. Is there any textbook that contain related theorem?
• Yes, but also check the case in which one point is in each piece. May 20, 2020 at 22:29
• @Ramita I don't know how to prove it. I'm looking for a textbook on this issue. May 20, 2020 at 22:33
• There is no well known theorem but it is not difficult to prove either. For the above it is $\sqrt{5}$ with the Euclidean norm. May 20, 2020 at 22:36
• @copper.hat I find a theorem of the vector-valued form for this issue, threesquirrelsdotblog.com/2018/03/16/…, and I feel the proof not easy. I want to cite such results, but I can not find any textbook that contain this issue. And it is not proper for me to cite a website. May 21, 2020 at 10:37
Note that $$f(x_1,x_2) = \min (2 x_1+x_2,x_1+1)$$.
To see that the $$\min$$ of Lipschitz functions is Lipschitz:
Suppose $$f_1,...,f_m$$ are Lipschitz with rank $$L$$, then $$f_k(x)-f_k(y) \le L \|x-y\|$$ for all $$k,x,y$$. Then $$\min_i f_i(x)-f_k(y) \le L \|x-y\|$$ and choosing $$k$$ such that $$\min_j f_j(y) = f_k(y)$$ we see that $$\min_i f_i(x)-\min_j f_j(y) \le L \|x-y\|$$. Swapping $$x,y$$ shows that $$\min_k f_k$$ is Lipschitz with rank $$L$$. (This result is true more generally, but the finite case contains the basic idea.)
Note that $$x \mapsto 2x_1+x_2$$ has Lipschitz rank $$\sqrt{5}$$ and $$x \mapsto x_1+1$$ has Lipschitz rank $$1$$, so the smallest $$L$$ that will work is $$L= \max(1,\sqrt{5})$$.
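A quick numeric sanity check (my own sketch, not part of the original answer) supports the $\sqrt{5}$ bound: the difference quotient $|f(x)-f(y)|/\|x-y\|$ over random pairs in $[0,1]^2$ never exceeds $\sqrt{5} \approx 2.236$, and gets close to it when $x-y$ aligns with the gradient $(2,1)$.

```python
import math, random

def f(x1, x2):
    # the piecewise-linear function from the question, written as a min
    return min(2 * x1 + x2, x1 + 1)

rng = random.Random(0)
worst = 0.0
for _ in range(100_000):
    x = (rng.random(), rng.random())
    y = (rng.random(), rng.random())
    d = math.dist(x, y)
    if d > 1e-9:
        worst = max(worst, abs(f(*x) - f(*y)) / d)

print(worst, math.sqrt(5))  # the worst observed ratio stays below sqrt(5)
```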
http://nlab-pages.s3.us-east-2.amazonaws.com/nlab/show/concordance | cohomology
# Contents
## Idea
A concordance between cocycles in cohomology is a relation similar to but different from a plain coboundary, it is a “coboundary after geometric realization”.
A concordance is a left homotopy in an (∞,1)-topos with respect to a topological interval object, not with respect to the categorical interval.
For instance for $S = Diff$ the site of smooth manifolds, there is
• the “topological interval” $I \in \mathbf{H}_{diff}$ which is the smooth ∞-stack on $Diff$ represented by the manifold $I = [0,1]$;
• the “categorical interval” $Ex^\infty \Delta^1 \in \mathbf{H}_{Diff}$ is the smooth ∞-stack that is constant on the free groupoid on a single morphism.
## Definition
For $\mathbf{H}$ and (∞,1)-topos with a fixed notion of topological interval object $I$, for $A \in \mathbf{A}$ any coefficient object and $X \in \mathbf{H}$ any other object, a concordance between two objects
$c,d \in \mathbf{H}(X,A)$
(two cocycles in $A$-cohomology on $X$)
is an object $\eta \in A(X \times I)$ such that
$\begin{matrix} X&&\\ \downarrow&\searrow^{c}&\\ X \times I&\stackrel{\eta}{\to}& A\\ \uparrow& \nearrow_{d}&\\ X&& \end{matrix} \,.$
## Examples
### For topological principal bundles
###### Proposition
(concordant topological principal bundles are isomorphic)
With $kTopSp$ denoting the category of compactly generated weakly Hausdorff spaces, for $X \,\in\, kTopSp$ a k-topological space and $G \,\in\, Grp(kTopSp)$ a $k$-topological group, consider a concordance between a pair of $G$-principal bundles over $X$,
If
or
(e.g. if $X$ admits the structure of a smooth manifold)
then there exists already an isomorphism of principal bundles
(e.g. Roberts & Stevenson 2016, Cor. 15)
###### Proof
Observe that isomorphisms $f \,\colon\, P \xrightarrow{\;} P'$ between principal bundles over $X$ are equivalently global sections of the fiber bundle $(P \times_X P')/G$:
Here, from left to right, the dashed section follows by the universal property of the quotient space $X = P/G$. From right to left, the top morphism follows by pullback along the dashed section, using that
1. the bundle projections are effective epimorphisms by local triviality,
2. their kernel pairs are as shown, by principality,
3. in a regular category pullback preserves effective epimorphisms (this Prop.) and, of course, their kernel pairs.
In particular, for every $P$ the identity morphism on it corresponds to the canonical section of $(P \times_X P)/G$.
In the given situation, this means that we have a canonical local section $\sigma_0$ making the following solid diagram commute, exhibiting that the restriction of the bundle $P_0 \times [0,1]$ to $\{0\} \subset [0,1]$ is isomorphic to $P_0$, by construction:
Now
or
In either case, this implies that a lift exists, as shown by the dashed arrow above.
The resulting commutativity of the bottom right triangle says that this lift is a global section which hence exhibits an isomorphism of principal bundles (over $X \times [0,1]$) of this form:
$P \;\; \simeq_X \;\; P_0 \times [0,1] \,.$
The restriction of this isomorphism to $\{1\} \subset [0,1]$ is hence an isomorphism of the form $P_1 \,\simeq_X\, P_0$, as required.
### For topological vector bundles
For topological vector bundles over paracompact Hausdorff spaces, concordance classes coincide with plain isomorphism classes:
###### Proposition
(concordance of topological vector bundles)
Let $X$ be a paracompact Hausdorff space. If $E \to X \times [0,1]$ is a topological vector bundle over the product space of $X$ with the closed interval (hence a concordance of topological vector bundles on $X$), then the two endpoint-restrictions
$E|_{X \times \{0\}} \phantom{AA} \text{and} \phantom{AA} E|_{X \times \{1\}}$
are isomorphic topological vector bundles over $X$.
For proof see at topological vector bundle this Prop..
### More examples
• For $A = VectrBund(-)$ the difference between concordance of vectorial bundles and isomorphism of vectorial bundles plays a crucial role in the construction of K-theory from this model.
• The notions of coboundary and concordance exist in every cohesive (∞,1)-topos.
## References
Discussion of concordance of topological principal bundles (in fact for simplicial principal bundles parameterized over some base space):
Discussion of concordance in terms of the shape modality in the cohesive (∞,1)-topos of smooth ∞-groupoids (see at shape via cohesive path ∞-groupoid for more):
http://www.sciforums.com/threads/try-out-the-oddball-logic-test.102938/page-6 | # Try out the ODDBALL logic test?
Discussion in 'Intelligence & Machines' started by Alan McDougall, Jul 15, 2010.
Not open for further replies.
1. ### Alan McDougall (Registered Senior Member)
You can do what you like with the balls except weigh them on a bathroom type scale to get to the solution
3. ### Maika (Registered Member)
that sounds naughty.
why not a bathroom scale? i guess they'd roll off.
5. ### Maika (Registered Member)
what does that mean?
8. ### Captain Kremmen (Valued Senior Member)
What if you made them orbit a planet?
One of them would have a larger or smaller orbit, or go quicker or slower.
Or.
You could put them in a salt solution, and slowly add salt until one or all except one of them floated.
No.
The only thing that you can do is weigh them three times using the pan scales.
A very light marking in felt tip is permissible to mark them if you wish.
Last edited: Aug 10, 2010
9. ### nirakar (Registered Senior Member)
It took me less than two hours but probably more than one hour to solve the puzzle. I needed about six tries to find a method that worked.
That was a good puzzle.
After solving the puzzle I started reading posts 2 onward.
Trying to read and understand what people were trying to say was more difficult than the puzzle.
I have looked at Stryder's hidden stuff. That's not the way I solved the puzzle. I don't understand what he is saying.
None of the people's proposed solutions look like mine. I wonder if they are wrong. Mine seems to pass the tests I gave it.
Solution is below in the blank box: you'll have to mouse over to see it
Code:
My way weighs four balls against four balls in the first weighing, three balls against three balls in the second weighing, and one ball against one ball in the third weighing.
I am calling the balls 1 through 12. Following the image of a balance scale with two pans I refer to one pan as left and one as right just for visualization sake. It does not affect the puzzle.
You can test this. Decide which ball is heavier or lighter and then just follow the lines according to what you chose.
Line 1, First weighing: place balls {1,2,3,4} in left pan and balls {5,6,7,8} in the right pan and note which pan was heavier.
Line 2, If the balls {1,2,3,4} in the left pan were heavier than balls {5,6,7,8} then proceed to Line 5.
Line 3, If the balls {1,2,3,4} in the left pan were lighter than the balls {5,6,7,8} in the right pan then proceed to Line 15
Line 4, If the balls {1,2,3,4} in the left pan were equal to the balls {5,6,7,8} in the right pan then proceed to Line 25
Line 5, Second weighing: place balls {1,2,5} in the left pan and balls {3,6,9} in the right pan and note which pan was heavier.
Line 6, If the balls {1,2,5} in the left pan were heavier than balls {3,6,9} in the right pan then proceed to Line 9.
Line 7, If the balls {1,2,5} in the left pan were lighter than balls {3,6,9} in the right pan then proceed to Line 11.
Line 8, If the balls {1,2,5} in the left pan were equal to balls {3,6,9} in the right pan then proceed to Line 13
Line 9, Third weighing: place ball {1} in the left pan and ball {2} in the right pan and note which pan was heavier.
Line 10, Answer, the heavier of ball 1 and ball 2 is heavier than the other 11 balls but if they weigh the same then ball 6 is lighter than the other 11 balls.
Line 11, Third weighing: place ball {5} in the left pan and ball {9} in the right pan and note which pan was heavier.
Line 12, Answer, if ball 5 is lighter than ball 9 then ball 5 is lighter than the other 11 but if they weigh the same then ball 3 is heavier than the other 11 balls Ball 9 can not be lighter than ball 5.
Line 13 Third weighing: place ball {7} in the left pan and ball {8} in the right pan and note which pan was heavier.
Line 14, Answer, the lighter of ball 7 and ball 8 is lighter than the other 11 balls but if they weigh the same then ball 4 is heavier than the other 11 balls.
Line 15, Second weighing: place balls {1,2,5} in the left pan and balls {3,6,9} in the right pan and note which pan was heavier.
Line 16, If the balls {1,2,5} in the left pan were lighter than balls {3,6,9} in the right pan then proceed to Line 19.
Line 17, If the balls {1,2,5} in the left pan were heavier than balls {3,6,9} in the right pan then proceed to Line 21.
Line 18, If the balls {1,2,5} in the left pan were equal to balls {3,6,9} in the right pan then proceed to Line 23
Line 19, Third weighing: place ball {1} in the left pan and ball {2} in the right pan and note which pan was lighter.
Line 20, Answer, the lighter of ball 1 and ball 2 is lighter than the other 11 balls but if they weigh the same then ball 6 is heavier than the other 11 balls.
Line 21, Third weighing: place ball {5} in the left pan and ball {9} in the right pan and note which pan was heavier.
Line 22, Answer, if ball 5 is heavier than ball 9 then ball 5 is heavier than the other 11 but if they weigh the same then ball 3 is lighter than the other 11 balls Ball 9 can not be heavier than ball 5.
Line 23, Third weighing: place ball {7} in the left pan and ball {8} in the right pan and note which pan was heavier.
Line 24, Answer, the heavier of ball 7 and ball 8 is heavier than the other 11 balls but if they weigh the same then ball 4 is lighter than the other 11 balls.
Line 25, Second weighing: place balls {9,10} in the left pan and balls {11,2} in the right pan and note which pan was heavier.
Line 26, If the balls {9,10} in the left pan were heavier than balls {11,2} in the right pan then proceed to Line 29.
Line 27, If the balls {9,10} in the left pan were lighter than balls {11,2} in the right pan then proceed to Line 31 .
Line 28, If the balls {9,10} in the left pan were equal to balls {11,2} in the right pan then proceed to Line 33
Line 29, Third weighing: place ball {9} in the left pan and ball {10} in the right pan and note which pan was heavier.
Line 30, Answer, the heavier of ball 9 and ball 10 is heavier than the other 11 balls but if they weigh the same then ball 11 is lighter than the other 11 balls.
Line 31, Third weighing: place ball {9} in the left pan and ball {10} in the right pan and note which pan was lighter.
Line 32, Answer, the lighter of ball 9 and ball 10 is lighter than the other 11 balls but if they weigh the same then ball 11 is heavier than the other 11 balls.
Line 33, Third weighing: It is knowing that ball 12 is the odd ball prior to this weighing but compare ball 12 to any of the other balls to determine whether ball 12 is heavier or lighter than the other 11 balls.
My method of solving the puzzle was primarily about eliminating methods that could not work.
Last edited: Aug 24, 2010
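nirakar's claim that a 4-4 / 3-3 / 1-1 scheme handles all 24 scenarios can be checked by brute force. The sketch below is my own code, implementing a textbook variant of the same style of strategy (not nirakar's exact line-by-line procedure), and runs every one of the 24 cases (12 balls, each possibly heavy or light) through a three-weighing decision tree:

```python
def weigh(left, right, odd, heavy):
    """Compare pans: +1 if left heavier, -1 if right heavier, 0 if balanced."""
    d = 1 if heavy else -1
    l = sum(d for b in left if b == odd)
    r = sum(d for b in right if b == odd)
    return (l > r) - (l < r)

def solve(odd, heavy):
    """Identify the odd ball (1-12) and whether it is heavy, in 3 weighings."""
    w = lambda L, R: weigh(L, R, odd, heavy)
    first = w([1, 2, 3, 4], [5, 6, 7, 8])
    if first == 0:                            # odd ball is among 9-12
        second = w([9, 10, 11], [1, 2, 3])    # 1-3 are now known good
        if second == 0:
            return 12, w([12], [1]) > 0
        third = w([9], [10])
        if third == 0:
            return 11, second > 0
        return (9 if third == second else 10), second > 0
    second = w([1, 2, 5], [3, 4, 6])
    if second == first:        # suspects: 1 or 2 heavy-side, or 6 light-side
        third = w([1], [2])
        if third == 0:
            return 6, first < 0
        return (1 if third == first else 2), first > 0
    if second == -first:       # suspects: 3 or 4 heavy-side, or 5 light-side
        third = w([3], [4])
        if third == 0:
            return 5, first < 0
        return (3 if third == first else 4), first > 0
    third = w([7], [8])        # balanced second weighing: suspects 7 or 8
    return (7 if third == -first else 8), first < 0

ok = all(solve(b, h) == (b, h) for b in range(1, 13) for h in (False, True))
print("all 24 scenarios identified:", ok)
```

This confirms John99's 98% figure is wrong for this class of strategy: a correct 4-4 / 3-3 / 1-1 tree works in every case.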
10. ### John99 (Banned)
I just think it is two entirely different puzzles, if you can mark them.
11. ### dsdsds (Valued Senior Member)
I haven’t solved it yet but it would be cool to create some code to solve the puzzle – not only solve but to automatically create the method (groups, etc.. ) to solve it. Actually the problem would be restated to “find the minimum required weigh-ins to find the oddball”. – and if that is too easy, then make it so the user enters the number of balls AND number of oddballs.
12. ### nirakar (Registered Senior Member)
I sort of stylized my telling of my solution given three posts above as if it was programming code. I had one course in BASIC decades ago.
13. ### John99 (Banned)
Actually a few are very much like yours.
Not necessarily wrong but wishful thinking.
14. ### nirakar (Registered Senior Member)
That approach is incorrect because if the scales are unbalanced on your first and second tests you will not be able to construct a third test that can identify the oddball for all six remaining scenarios as to which ball is the oddball.
15. ### nirakar (Registered Senior Member)
Alan McDougall post 61 used the same technique I did. I did not read it before.
16. ### John99 (Banned)
Will it work ALL the time = NO
Will it work most of the time = YES
Will it work some of the time = YES
17. ### nirakar (Registered Senior Member)
Mine will work all the time. Since Alan McDougall was working the same way I assume his will work all the time but I did not check his third weighings or how he handled the easier balanced scenario because the second weighing was the tricky part of that method. Unless he made some silly mistake his will work because he got past the hard part.
Post 62 that Captain Kremmen got from the internet also works all the time. It is a completely different technique from what I and Alan Mcdougall came up with.
Now that I see Captain Kremmen post 62 the stuff Stryder was putting in white text is no longer so nonsensical to me.
I wonder if there is a radically different third approach that will work.
18. ### John99 (Banned)
No, it wont work all the time. Your method is similar to some of the others and none of them will work 100% of the time and max. 98% of the time is only possibility. 98% is according to my calculation and has margin of error of 1-2% in the negative.
19. ### nirakar (Registered Senior Member)
You can test mine with all 24 scenarios. It will work every time.
There is a chance that I could have gotten something wrong in the transferring from my flow chart to my lines of instructions but the approach works 100% not 98%. Abandoning symmetry by adding just one known ball on the second test was the key difference between what I and McDougall did and other 4-4 3-3 approaches. There was something addictive about symmetry.
Last edited: Aug 25, 2010
20. ### John99 (Banned)
One thing that surprises me is people's reasoning for using less than 6+6 for the first weighing if you can number the balls.
21. ### John99 (Banned)
The problem is lighter or heavier equals 2, you will be left with 2 balls and there are two places to put the last two balls.
22. ### nirakar (Registered Senior Member)
You learn less from 6+6 than you learn from 4+4. With 4 plus 4 you eliminate 4 or 8 balls with your first weighing.
Doing 6+6 all twelve balls could still be the odd ball after the first weighing.
Because you can only weigh the ball three times you need to rule out as many balls as you can with every weighing.
23. ### John99 (Banned)
is it lighter or heavier?
https://www.drmaciver.com/2007/12/type-classes-in-scala/ | # Type classes in Scala
Some backstory on this post: I got about halfway through writing it before realising that in fact it didn’t work because of a missing feature. I sent an email to the mailing list about this feature and after some discussion it was concluded that in fact this missing feature was a failure to meet the specification and that it had been fixed in 2.6.1. There’s still one thing lacking, but more on that later. The post now resumes.
I mentioned in a recent post that Scala could emulate Haskell type classes with its implicit defs and first class modules.
In actual fact, the situation is much happier than that. Implicit defs + first class modules give you significantly more than Haskell type classes (although with a tiny loss of type safety). At least, much more than Haskell 98. In particular, multiparameter type classes and associated types come for free. You also gain a number of other advantages, such as type class instantiation scopes lexically, so you can redefine type classes locally.
So, how does all this work? I’ll begin with a general introduction to this style of programming with no real reference to Haskell, and then I’ll tie this back in to encoding Haskell type classes at the end of it.
Here’s a class that’s familiar to anyone who has written non-trivial Java:
trait Comparator[T]{
def compare(x : T, y : T) : Int;
}
trait Comparable[T]{
def compareTo(t : T) : Int;
}
Now, let’s define the following method:
def sort[T](array : Array[T])(implicit cmp : Comparator[T]) = stuff
So our sort method can either have a comparator passed to it explicitly or it will pull one from the surrounding environment. For example we could do:
object SortArgs{
implicit val alphabetical = new Comparator[String]{
def compare(x : String, y : String) = x.compareTo(y);
}
def main(args : Array[String]){
println(sort(args));
}
}
Will pick up the alphabetical instance.
It would be nice if we could also have defined the more general version:
object SortArgs{
implicit def naturalOrder[T <: Comparable[T]] = new Comparator[T]{
def compare(x : T, y : T) = x.compareTo(y);
}
def main(args : Array[String]){
println(sort(args));
}
}
But this doesn't seem to work. :-/ Hopefully this will be fixed - it seems like wrong behaviour. Matt Hellige came up with the following workaround, but it's not very nice:
object Sorting{
trait Comparator[T]{
def compare(x : T, y : T) : Int;
}
def sort[T](arr : Array[T])()(implicit cmp : () => Comparator[T]) = null;
implicit def naturalOrder[T <: Comparable[T]]() : Comparator[T] = null;
implicit def lexicographical[T]() (implicit cmp : () => Comparator[T]) :
Comparator[List[T]] = null;
def main(args : Array[String]){
sort(args);
sort(args.map(List(_)))
sort(args.map(List(_)).map(List(_)))
}
}
Moreover I haven't the faintest notion of why it works. :-)
However we can also chain implicit defs:
implicit def lexicographicalOrder[T] (implicit cmp : Comparator[T]) : Comparator[List[T]] = stuff;
So now the following code *does* work:
def main(args : Array[String]){
sort(args);
sort(args.map(List(_)))
sort(args.map(List(_)).map(List(_)))
}
An amazingly nice feature of this which some encodings of type classes miss is that you don't need an instance of a type to select on that type. Take for example the following:
object BinaryDemo{
import java.io._;
trait Binary[T]{
def put(t : T, stream : OutputStream);
def get(stream : InputStream) : T;
}
implicit val utf8 : Binary[String] = null;
implicit def binaryOption[T] (implicit bin : Binary[T]) : Binary[Option[T]] = null;
val myStream : InputStream = null;
def readText(implicit bin : Binary[Option[String]]) : Option[String] = bin.get(myStream);
readText match{
case None => println("I found nothing. :(");
case Some(x) => println("I found " + x);
}
}
Unfortunately this example betrays a weakness in our encoding. I can't just randomly call "get" like in Haskell's Data.Binary - because I need to invoke it on an instance of Binary[T] I need to ensure at the method level that one is available. There doesn't appear to be a good way of getting access to implicits from the enclosing scope directly.
However, here's a silly hack:
def fromScope[T] (implicit t : T) = t;
And we get:
fromScope[Binary[Option[String]]].get(myStream) match{
case None => println("I found nothing. :(");
case Some(x) => println("I found " + x);
}
This works just as well with multiple type parameters. For example, if we wanted to port Java's AtomicArray classes without wrapping everything (although admittedly wrapping everything would be more idiomatic Scala) we could do the following:
object ArrayDemo{
import java.util.concurrent.atomic._;
trait AtomicArray[S, T]{
def get(s : S, i : Int) : T;
def set(s : S, i : Int, t : T);
def compareAndSet(s : S, i : Int, expected : T, update : T) : Boolean;
}
implicit val long = new AtomicArray[AtomicLongArray, Long]{
def get(s : AtomicLongArray, i : Int) = s.get(i);
def set(s : AtomicLongArray, i : Int, t : Long) = s.set(i, t);
def compareAndSet(s : AtomicLongArray, i : Int, expected : Long, update : Long) = s.compareAndSet(i, expected, update);
}
}
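As a usage sketch (my own code, not from the original post; the trait is trimmed to two methods so the example is self-contained), a generic helper over this two-parameter class has to carry both type parameters through its signature:

```scala
import java.util.concurrent.atomic._

object AtomicDemo {
  trait AtomicArray[S, T] {
    def get(s: S, i: Int): T
    def set(s: S, i: Int, t: T): Unit
  }

  implicit val longArray: AtomicArray[AtomicLongArray, Long] =
    new AtomicArray[AtomicLongArray, Long] {
      def get(s: AtomicLongArray, i: Int) = s.get(i)
      def set(s: AtomicLongArray, i: Int, t: Long) = s.set(i, t)
    }

  // Both type parameters must be threaded through every generic signature,
  // even though only T is interesting to the caller.
  def swap[S, T](s: S, i: Int, j: Int)(implicit aa: AtomicArray[S, T]): Unit = {
    val tmp = aa.get(s, i)
    aa.set(s, i, aa.get(s, j))
    aa.set(s, j, tmp)
  }

  def main(args: Array[String]): Unit = {
    val a = new AtomicLongArray(Array(1L, 2L))
    swap(a, 0, 1) // implicit search fixes S = AtomicLongArray, T = Long
    println(a.get(0) + " " + a.get(1))
  }
}
```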
So, that's multi-parameter type classes. But one thing you'll notice about the above encoding is that it's a bloody nuisance to invoke - you need to know the type of the array class you're using, which is annoying. Far better would be if the trait took care of that. In Haskell terms this would be an associated type.
No problem.
object ArrayDemo{
import java.util.concurrent.atomic._;
trait AtomicArray[T]{
type S;
def get(s : S, i : Int) : T;
def set(s : S, i : Int, t : T);
def compareAndSet(s : S, i : Int, expected : T, update : T) : Boolean;
}
implicit val long = new AtomicArray[Long]{
type S = AtomicLongArray;
def get(s : AtomicLongArray, i : Int) = s.get(i);
def set(s : AtomicLongArray, i : Int, t : Long) = s.set(i, t);
def compareAndSet(s : AtomicLongArray, i : Int, expected : Long, update : Long) = s.compareAndSet(i, expected, update);
}
}
Scala's classes can have abstract types. So we just encode associated types as those.
So, to recap on our encoding:
class Foo a where
bar :: a
baz :: a -> a
becomes
trait Foo[A]{
def bar : A;
def baz(a : A) : A;
}
instance Foo Bar where
stuff
becomes
implicit val bar : Foo[Bar] = new Foo[Bar]{
stuff
}
instance (Foo a) => Foo [a]
becomes
implicit def listFoo[A] (implicit foo : Foo[A]) : Foo[List[A]] = stuff;
And for invoking:
foo = bar
becomes
val yuck = fromScope[Foo[A]].bar
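Putting the recap together, here is a self-contained sketch that compiles; the Monoid example and all the names in it are mine, not from the original post:

```scala
object TypeClassDemo {
  // class Foo a  =>  trait Foo[A]
  trait Monoid[A] {
    def zero: A
    def append(x: A, y: A): A
  }

  // instance Monoid Int  =>  implicit val
  implicit val intMonoid: Monoid[Int] = new Monoid[Int] {
    def zero = 0
    def append(x: Int, y: Int) = x + y
  }

  // instance Monoid [a]  =>  implicit def
  implicit def listMonoid[A]: Monoid[List[A]] = new Monoid[List[A]] {
    def zero = Nil
    def append(x: List[A], y: List[A]) = x ++ y
  }

  // mconcat :: (Monoid a) => [a] -> a  =>  an implicit parameter list
  def mconcat[A](xs: List[A])(implicit m: Monoid[A]): A =
    xs.foldLeft(m.zero)(m.append)

  // the trick for selecting an instance without having a value of the type
  def fromScope[T](implicit t: T) = t

  def main(args: Array[String]): Unit = {
    println(mconcat(List(1, 2, 3)))      // instance found implicitly
    println(fromScope[Monoid[Int]].zero) // instance selected by type alone
  }
}
```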
So it's a more verbose encoding, but not a terrible one. And it has some abstraction advantages too. For example:
sortBy :: (a -> a -> Ordering) -> [a] -> [a]
sortBy = stuff;
sort :: (Ord a) => [a] -> [a]
becomes
def sort[A](xs : List[A])(implicit cmp : Comparator[A]);
Because our type classes are a form of implicit object passing, we can also use them with *explicit* object passing. Thus we can redefine behaviour much more nicely to behave equally well with an ordered type and explicitly provided comparison functions.
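For instance, a minimal sketch of both call styles (my own names, with a trivial sort standing in for the real thing):

```scala
object ExplicitDemo {
  trait Comparator[T] { def compare(x: T, y: T): Int }

  implicit val intOrder: Comparator[Int] = new Comparator[Int] {
    def compare(x: Int, y: Int) = x - y
  }

  def sort[T](xs: List[T])(implicit cmp: Comparator[T]): List[T] =
    xs.sortWith((a, b) => cmp.compare(a, b) < 0)

  def main(args: Array[String]): Unit = {
    // An ad-hoc instance passed explicitly, overriding the implicit one.
    val reverse = new Comparator[Int] { def compare(x: Int, y: Int) = y - x }
    println(sort(List(3, 1, 2)))          // implicit lookup
    println(sort(List(3, 1, 2))(reverse)) // explicit instance
  }
}
```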
This has disadvantages too - you need to be more careful to ensure that you can't accidentally use two instances of the type class. This isn't a major burden though. The general solution is that when you have something which needs to maintain type class consistency between invocations you pass it an instance at construction time. Take for example Haskell's Data.Set. In Haskell getting a new Set (Set.empty) works for any type but almost all the functions for building sets have a constraint that the type belongs to Ord. In Scala you would require an Ord instance to be passed for Set construction but after that would not need one (analogous to Java's TreeSet providing a constructor that takes a Comparator).
One thing I haven't covered is type classes which abstract over type constructors rather than types. The reason I haven't covered them is that I've yet to peek into that corner of Scala's type system. However, I assume they work, as Tony Morris has done some stuff with monads in Scala. Also, see this paper (which I've not read yet).
https://math.stackexchange.com/questions/2262786/how-im-conceptualizing-compactness-does-this-make-sense | # How I'm conceptualizing compactness. Does this make sense?
So from what I know about compactness
• a compact set, $S$, is defined as a set for which every open cover of $S$ has a finite subcover
• where an open cover, $O$, is a union of open intervals such that $S$ is a subset of $O$
• and a subcover, $C$, is a subset of the open cover $O$ for which $S$ is still a subset of $C$
but it wasn't really clicking for me why $[0, 1]$ was compact while $(0, 1)$ was not. After thinking for a while, here's how I ended up conceptualizing it using the above definitions:
• $(0, 1)$ has the open cover $(0, 1)$, which is equivalent to itself, and you cannot possibly find a subcover of $(0, 1)$ that still covers $(0, 1)$
• $[0, 1]$ being a closed interval isn't covered by $(0, 1)$, but is covered by infinitely many open intervals just slightly bigger than $(0, 1)$ e.g. $\{(-1,2),(-\frac12, \frac32), (-\frac14,\frac54),...(0 - \frac1n, 1 + \frac1n)\}$ and all those open intervals have subcovers that still cover $[0,1]$
Does that make sense? Also I assume a single interval rather than a union of intervals is an open cover; is that fine?
• A sub-cover can be the original cover itself, therefore the proof of non-compactness of $(0,1)$ is not correct. Also to prove compactness, you need to consider all possible open covers. In your proof of the compactness of $[0,1]$ you only dealt with open covers with a particular property, namely that all intervals in the open cover contain $[0,1]$, but there are more open covers. As long as the union of intervals contains $[0,1]$, then these intervals form an open cover of $[0,1]$. – Frank Lu May 2 '17 at 19:31
• In fact if you want to prove compactness by definition, in most cases it could be complicated. That's why we characterise compactness via the Heine-Borel theorem, which states that a subset of the Euclidean space is compact if and only if it is closed and bounded. – Frank Lu May 2 '17 at 19:33
• Probably of interest: Finding open covers that do not contain finite subcovers. – Andrew D. Hwang May 2 '17 at 19:35
• @FrankLu thanks a lot this was very helpful. – m0meni May 2 '17 at 19:41
• An open cover of $S$ is not a union of open sets. It is a collection of open sets whose union contains $S.$ – zhw. May 2 '17 at 21:22
You're right that $(0,1)$ is non-compact, but you can't use the open cover $\{ (0,1) \}$ to prove that $(0,1)$ is non-compact. Remember, we need to find an open cover of $(0,1)$ such that no FINITE subset of it is an open cover. But $\{ (0,1) \}$ is already finite! It contains only one open set! So there is a finite subset of $\{ (0,1) \}$ that covers $(0,1)$, namely, $\{ (0,1) \}$ itself!
To show that $(0,1)$ is non-compact, you may like to consider the open cover: $$\{ (\tfrac 1 2, 1 ), (\tfrac 1 3, 1), (\tfrac 1 4, 1), (\tfrac 1 5, 1) , \dots \}$$ I hope it's clear that this collection of open sets covers $(0,1)$, but no finite subcollection within this collection covers $(0,1)$.
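To spell this out: any finite subcollection of that cover, say $\{ (\tfrac{1}{n_1}, 1), \dots, (\tfrac{1}{n_k}, 1) \}$, has a union determined by its largest denominator:

```latex
\left(\tfrac{1}{n_1},1\right)\cup\cdots\cup\left(\tfrac{1}{n_k},1\right)
  = \left(\tfrac{1}{N},1\right),
  \qquad N=\max(n_1,\dots,n_k),
```

which misses every point of $(0, \tfrac{1}{N}]$ — for example $x = \tfrac{1}{2N}$ — so no finite subcollection covers $(0,1)$.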
For $[0,1]$, you gave an example of an open cover. Indeed, your open cover admits a finite subcover, but not for the reason you gave. Your open cover admits a finite subcover because $\{ (-1,2) \}$ is a finite subcollection within your open cover that covers $[0,1]$, and $\{ (-1,2) \}$ is finite, containing only one open set!
Anyway, you can't prove that $[0,1]$ is compact by exhibiting a single open cover that admits a finite subcover. You need to prove that ALL open covers of $[0,1]$ admit a finite subcover. This is quite tricky to prove, and is (a special case of) the Heine-Borel theorem.
• Thank you for your answer. I get it now. Regarding the proof for $[0, 1]$ being compact I found this really nice and simple one just now, which doesn't require Heine-Borel math.stackexchange.com/a/189053/173829. – m0meni May 2 '17 at 19:41
• Here is (what I believe to be) an even simpler argument: Suppose for contradiction that $\{ U_\alpha \}$ is an open cover of $[0,1]$ such that $[0,1]$ cannot be covered by finitely many $U_\alpha$'s. Let's bisect the interval $[0,1]$ into $[0, \tfrac 1 2]$ and $[\tfrac 1 2, 1 ]$. Then either $[0, \tfrac 1 2]$ cannot be covered by finitely many $U_\alpha$'s or $[\tfrac 1 2, 1]$ cannot be covered by finitely many $U_\alpha$'s. Suppose without loss of generality that $[0, \tfrac 1 2]$ cannot be covered by finitely many $U_\alpha$'s... – Kenny Wong May 2 '17 at 19:47
• Then we divide $[0, \tfrac 1 2]$ into $[0, \tfrac 1 4]$ and $[\tfrac 1 4, \tfrac 1 2]$ and do the same thing again. If we keep doing this, then eventually, we will get a sequence of closed intervals $I_n$ of length $1 / 2^n$, with $[0,1] = I_0 \supset I_1 \supset I_2 \supset \dots$ such that each $I_n$ cannot be covered by finitely many $U_\alpha$'s. – Kenny Wong May 2 '17 at 19:48
• Since the $I_n$'s are CLOSED intervals, the intersection of all $I_n$'s is a single point, which I'll call $x$. Clearly, $x$ is inside some $U_\alpha$ (since the $U_\alpha$'s form an open cover for $x$). But then, since $U_\alpha$ is OPEN, there is some $\delta > 0$ such that $(x - \delta, x + \delta ) \subset U_\alpha$... – Kenny Wong May 2 '17 at 19:50
• But then, if we pick an $n$ such that $1 / 2^n < \delta$, we see that $I_n \subset U_\alpha$. And this contradicts that statement that $I_n$ cannot be covered by finitely many $U_\alpha$'s. – Kenny Wong May 2 '17 at 19:50 | 2019-10-22 23:45:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8700951337814331, "perplexity": 107.41060543735948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987826436.88/warc/CC-MAIN-20191022232751-20191023020251-00286.warc.gz"} |
https://www.dcode.fr/logarithm | Search for a tool
Logarithm
Tool for calculating logarithms. The logarithm function is denoted log or ln and is defined by a base (the base e for the natural logarithm).
# Logarithm
## Logarithm Solver Log(?)=x
### What is the natural logarithm? (Definition)
The natural logarithm is the function defined for $x \in \mathbb{R}_+^*$ whose derivative is $x \mapsto \frac 1 x$ and which takes the value $0$ at $x = 1$.
The natural logarithm is noted log or ln and is based on the number $e \approx 2.71828\ldots$ (see decimals of number e).
Example: $\log(7) = \ln(7) \approx 1.94591$
Some people and bad calculators use $\log$ for $\log_{10}$, so make sure to know which notation is used.
### How to turn a base N logarithm into a natural logarithm?
Any base $N$ logarithm can be calculated from a natural logarithm with the formula: $$\log_{N}(x) = \frac {\ln(x)} {\ln(N)}$$
### What is the neperian logarithm?
The neperian logarithm is the other name of the natural logarithm (with base e).
### What is the decimal logarithm (log10)?
The decimal logarithm noted $\log_{10}$ or log10 is the base $10$ logarithm. This is one of the most used logarithms in calculations and logarithmic scales. $$\log_{10}(x) = \frac { \ln(x)} { \ln(10) }$$
Example: $\log_{10}(1000) = 3$
### What is the binary logarithm (log2)?
The binary logarithm noted $\log_{2}$ (or sometimes $lb$) is the base $2$ logarithm. This logarithm is used primarily for computer calculations. $$\log_2(x) = \frac {\ln(x)} {\ln(2)}$$
Use the formula above to calculate a log2 with a calculator with only the log key.
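For example, checking the change-of-base formula on a power of two:

```latex
\log_2(8) = \frac{\ln(8)}{\ln(2)} = \frac{\ln(2^3)}{\ln(2)} = \frac{3\ln(2)}{\ln(2)} = 3
```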
### Why can the logarithm transform a product into a sum?
Any logarithm has the following properties:
- $\log_b(x \cdot y) = \log_b(x) +\log_b(y)$ (transformation of a product into a sum)
- $\log_b \left( \frac{x}{y} \right) = \log_b(x) - \log_b(y)$ (transformation of a quotient into subtraction)
- $\log_b (x^a) = a \log_b(x)$ (transformation of a power into a multiplication)
### What are remarkable values of the logarithm function?
- $\log_b(b) = 1$
- $\log(e) = \ln(e) = 1$
- $\log_b(1) = \ln(1) = 0$
- $\log_b(b^n) = \ln(e^n) = n$ (inverse function of exponentiation)
http://msp.org/gt/2010/14-1/p07.xhtml | #### Volume 14, issue 1 (2010)
ISSN (electronic): 1364-0380 ISSN (print): 1465-3060
Prescribing the behaviour of geodesics in negative curvature
### Jouni Parkkonen and Frédéric Paulin
Geometry & Topology 14 (2010) 277–392
##### Abstract
Given a family of (almost) disjoint strictly convex subsets of a complete negatively curved Riemannian manifold $M$, such as balls, horoballs, tubular neighbourhoods of totally geodesic submanifolds, etc, the aim of this paper is to construct geodesic rays or lines in $M$ which have exactly once an exactly prescribed (big enough) penetration in one of them, and otherwise avoid (or do not enter too much into) them. Several applications are given, including a definite improvement of the unclouding problem of our paper [Geom. Func. Anal. 15 (2005) 491–533], the prescription of heights of geodesic lines in a finite volume such $M$, or of spiraling times around a closed geodesic in a closed such $M$. We also prove that the Hall ray phenomenon described by Hall in special arithmetic situations and by Schmidt–Sheingorn for hyperbolic surfaces is in fact only a negative curvature property.
##### Keywords
geodesics, negative curvature, horoballs, Lagrange spectrum, Hall ray
##### Mathematical Subject Classification 2000
Primary: 53C22, 11J06, 52A55
Secondary: 53D25
##### Publication
Received: 1 June 2007
Revised: 21 July 2009
Accepted: 15 April 2009
Preview posted: 27 October 2009
Published: 2 January 2010
Proposed: Martin Bridson
Seconded: Walter Neumann, Jean-Pierre Otal
##### Authors
Jouni Parkkonen Department of Mathematics and Statistics PO Box 35 40014 University of Jyväskylä Finland Frédéric Paulin Département de Mathématique et Applications UMR 8553 CNRS Ecole Normale Supérieure 45 rue d’Ulm 75230 PARIS Cedex 05 France | 2016-09-27 08:41:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4767971932888031, "perplexity": 2586.91875069266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660996.20/warc/CC-MAIN-20160924173740-00196-ip-10-143-35-109.ec2.internal.warc.gz"} |
http://rpg.stackexchange.com/questions/30843/how-do-psion-at-will-powers-work | # How do Psion At-Will Powers work?
Where can I find the list of Psion Powers? I know where the At-Wills, Encounters, etc. are, but where are the powers upon which I can spend power points?
I'm a fairly green DM, and this is my first tango with a Psion, I suppose an added question would be, do Psions have anything akin to a spellbook, and if so, how does it work?
http://tex.stackexchange.com/questions/15830/extendible-cong-congruence-sign | # Extendible \cong congruence sign
This question is very similar to Extendible equals sign but the solutions given there do not apply immediately.
At times I would like to clarify the nature of a mathematical congruence. So I have defined in my preamble
\newcommand*\morph[1]{\underset{\mbox{\tiny #1}}{\cong}}
So I can write things like \morph{diff} and \morph{hom} to differentiate between different congruences. But even with the \mbox content set to \tiny, the text is still wider than the congruence sign. Any ideas?
Reducing the font size, although possible, would make the label almost illegible; on the other side, making an extended version of \cong will produce (IMO) inconsistent results. In this particular case (again, in my opinion) I'd rather stick to the behaviour exhibited by your current definition. Another option would be to select different symbols. – Gonzalo Medina Apr 14 '11 at 15:25
Sometimes, when I have this sort of issue with features not in LaTeX that I think I want, I realize after trying to produce them that they are absent because they are a bad idea. It looks to me like you want to label "congruences" which are isomorphisms, in which case perhaps you could use \xrightarrow[under]{over} and put a \sim in one position? You could also use \widetilde to some (probably limited) extent to place a tilde over an extendible equals sign, though I haven't tried it so I'll leave the suggestion here as a comment. – Ryan Reich Apr 14 '11 at 15:54
One could always use \resizebox to stretch the symbol.
\documentclass{article}
\usepackage{amsmath,graphicx}
\makeatletter
\newcommand*\morph[1]{%
\setbox0=\hbox{\scriptsize#1}%
\setbox2=\hbox{$\m@th{\cong}$}%
\stackrel{\copy0}{%
\ifdim\wd2<\wd0
\resizebox{\wd0}{\ht2}{$\m@th{\cong}$}%
\else
\cong
\fi
}%
}
\begin{document}
$X\morph{long text}Y\morph{i}Z$
\end{document}
Hum, something's weird. It works if I invoke using latex test; dvipdf test; xpdf test.pdf. But it doesn't work if I invoke using pdflatex test; xpdf test.pdf. Any ideas? – Willie Wong Apr 15 '11 at 13:11
Let me clarify: doesn't work means "where the symbol should display, there is nothing there." \resizebox works by itself with pdflatex: if I have something inline like \resizebox{3cm}{2cm}{test} or the same with something in mathmode inside the third argument, it gets displayed and scaled. But somehow not the command you gave above. – Willie Wong Apr 15 '11 at 13:14
Hum, \scalebox however doesn't have the same problems. Looking at my use case, I am happy to now just scale the width of the symbol by 33%. – Willie Wong Apr 15 '11 at 13:47
@Willie: I can't imagine why the code doesn't work for you with pdflatex. It is what I used. – TH. Apr 15 '11 at 14:44
@Willie: If \scalebox works, then so should \resizebox. They both end up using the same macro \Gscale@box. – TH. Apr 15 '11 at 14:47
Of course, whilst agreeing that this Not Recommended, it's also fairly similar to something I do: I like to make arrows and so forth a little more conspicuous in presentation by making them a bit bigger. So my method for doing that adapts reasonably well to this situation.
That method is to use ... TikZ!
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{decorations.pathmorphing}
\newcommand{\xcong}[1]{%
\mathrel{\tikz[baseline=0pt] {
\node[above] at (0,1.2ex) (a) {$\scriptstyle #1$};
\draw[preaction={
transform canvas={yshift=-.5ex},
draw,
decorate,
decoration={lineto}},
preaction={
transform canvas={yshift=-1ex},
draw,
decorate,
decoration={lineto}}]
(a.south west) .. controls +(.25,.15) and +(-.25,-.15) .. (a.south east);
}}}
\begin{document}
$$A \cong B \quad A \xcong{a,b,c,d,e,f,g,h} B$$
\end{document}
Okay, it's not going to win any design awards ...
To save on preaction, you could use double, double equal sign distance (as I just found out for a similar problem). – Caramdir Apr 14 '11 at 21:25 | 2014-04-19 07:46:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8898343443870544, "perplexity": 1740.3826452672254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00575-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://brilliant.org/problems/plenty-of-integrate-by-parts/ | # Plenty of Integrate by Parts?
Calculus Level 4
Given that $$\large \displaystyle \int_0^\infty \dfrac { \sin x}{x} \, dx = \dfrac {\pi}{2}$$.
If the value of $$\large \displaystyle \int_0^\infty \dfrac { \sin^9 x}{x} \, dx = \dfrac {a\pi}{b}$$ for coprime positive integers $$a$$ and $$b$$, what is the value of $$a+b$$?
https://aviation.stackexchange.com/questions/11675/what-does-aerodynamic-noise-of-an-airliner-sound-like-in-landing-configuration | # What does aerodynamic noise of an airliner sound like in landing configuration?
I want to know what aerodynamic noise an airliner makes when flying with both engines shut down. Specifically, it is the sound made by air passing a plane which is in landing configuration, with gear and flaps extended, at a normal approach speed.
I know that it makes a loud noise especially during landing, but normally we cannot distinguish it from the engine sound.
• No, you really don't – rbp Jan 13 '15 at 16:20
• Follow by a very loud noise.... – vasin1987 Jan 13 '15 at 16:28
• What exactly are you looking for? A description? A video? A sound file? – fooot Jan 13 '15 at 17:41
• This question should be reopened. The sound you're looking for can be heard here: youtu.be/9ZBcapxGHjE?t=59s – Steve V. Jan 14 '15 at 2:06
• Although improved, I've voted to keep it closed, as it's still in a not-very-SE format. Where is the listener? How far away is the aircraft? What kind of aircraft? How fast is it travelling? Are we talking airspeed or ground speed? Is the RAT deployed? How about flaps? At what altitude? Is the landing gear up or down? Are the passengers inside screaming because the aircraft has no power and they're scared? This question does not have "an answer" it has "a range of conjecture and speculation which could give an idea of an answer with numerous caveats and ranges" – Jon Story Jan 14 '15 at 11:42
Aerodynamic noise has many sources. It is not white noise, because some frequencies are dominant. Generally it happens when flows of different speed collide, or when a standing wave develops in a cavity. Common noise sources are:
• Uncovered openings, like vent holes or control surface gaps. Like when you blow across the top of an open bottle, they produce a howling sound with a dominant frequency that depends on flow speed and opening size.
• Tollmien-Schlichting waves in the boundary layer. These frequencies change with speed and their location along the flow path, and generally are responsible for most of the hissing sound of gliders.
• Separated flow, which produces alternating separations behind blunt bodies. Here the main frequency is that of the Karman vortex street that forms behind them. It can be calculated if the Strouhal number Sr of the flow is known. This is the equation for the main frequency $f$ of a bracing wire with the diameter $d$: $$f = Sr \cdot \frac{v}{d}$$ Here $v$ is the airspeed, and for bracing wires Sr is normally 0.2. Bracing wires (or a blunt trailing edge, for that purpose) produce a characteristic whistling sound.
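As a worked example (the numbers here are my own illustration, not from the answer): a bracing wire of diameter $d = 3\ \mathrm{mm}$ in a flow of $v = 50\ \mathrm{m/s}$ sheds vortices at roughly

```latex
f = Sr \cdot \frac{v}{d} = 0.2 \cdot \frac{50\ \mathrm{m/s}}{0.003\ \mathrm{m}} \approx 3300\ \mathrm{Hz},
```

squarely in the audible range, which is why such wires whistle.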
As you said, the landing configuration makes most noise. In addition to the factors above, you now have
• Extended flaps, mostly with gaps between them which show high local flow speed. This local high-speed flow is very noisy.
• Many more blunt objects sticking out of the airframe: Landing gears, gear covers or landing lights. The particular noise of landing gears was once tested with a high-performance glider which had styrofoam gears fitted under the wings. They broke off when it had to land, but yielded valuable data when compared to the noise of the clean glider. Sorry, there is no photo of this experiment on the web!
• The gaping hole of the landing gear well. Especially while the gear is moved, this creates a lot of noise, but even after extension a part of the well is uncovered and adds its noise.
Here is a good overview of different noise sources. It is best read with a good working knowledge of German.
To answer your question: The sound is a mixture of hissing and whistling in different frequencies. If you stand close to the Autobahn (best is a section without speed restrictions), the noise of the passing cars is similar, but less intense. At 180 km/h, engine noise starts to vanish in all the aerodynamic and tire noises ...
• Thank you Peter for the details. I was looking for a video or a sound sample – wael rokbani Jan 14 '15 at 16:39
• @waelrokbani: I found the question interesting and worthy of an answer. However, I wouldn't expect that this video has yet been made. The best you can get is a video with engines idle, but still running - that reveals already much of the aerodynamic noise. – Peter Kämpf Jan 14 '15 at 17:07 | 2019-07-16 05:09:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31816813349723816, "perplexity": 1515.9800745995121}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524502.23/warc/CC-MAIN-20190716035206-20190716061206-00185.warc.gz"} |
https://www.albert.io/ie/sat-physics-subject-test/playground-ride-acceleration
Moderate
Playground Ride Acceleration
SATPHY-EMKXYJ
The picture below shows a child (small brown circle) from above on a playground ride (large blue circle) that spins horizontally. The red arrows show the direction of rotation of the ride.
The ride is NOT rotating at constant speed, but is slowing down.
Which of the following arrows best represents the direction of the acceleration of the child at the location in the picture?
A
$\rightarrow$
B
$\searrow$
C
$\uparrow$
D
$\nearrow$
E
$\downarrow$
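A worked hint (a general sketch added for illustration, not part of the original question; the specific arrow depends on where the child sits in the picture): for a rider on a ride that is both turning and slowing down, the acceleration has two components,

```latex
\vec{a} = \vec{a}_{c} + \vec{a}_{t}, \qquad
\lVert\vec{a}_{c}\rVert = \frac{v^{2}}{r} \quad \text{(centripetal, toward the center)}, \qquad
\vec{a}_{t} \parallel -\vec{v} \quad \text{(tangential, opposite the velocity while slowing)}
```

Since both components are nonzero here, the net acceleration points diagonally: partly toward the center of the ride and partly backward along the child's path.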
https://byjus.com/free-gre-prep/gre-general-test/ | # GRE General Test- An Introduction
The Graduate Record Examination (GRE) is the only test that any person needs to take for admission into graduate or business schools abroad. The GRE General test evaluates a candidate's skills in Verbal Reasoning, Quantitative Reasoning and Analytical Writing. Prospective students wanting to do a Masters, MBA or a Doctorate degree need to take this test, as it forms a major part of the application to universities. The GRE General test provides a common platform to assess all students' qualifications through a single measure.
## GRE Revised General Test
The GRE revised General test is a revamped version of the old GRE General test. The new GRE test moved to a new scoring system, effectively replacing the old 200-800 scale with a 130-170 scale scored in one-point increments. The Verbal and Quantitative Reasoning sections are now scored between 130 and 170, while Analytical Writing is scored between 0 and 6 in half-point increments. The test also allows candidates to go back to questions previously skipped or change the answers to previous questions. The revised GRE test also introduced the adaptive test format.
## The Official Guide to GRE Test
ETS provides candidates with a few free preparation guides. The POWERPREP Online is a tool which can be used to simulate the GRE computer-based test, and it is available for two attempts. The Practice Test for Paper-delivered GRE General Test can be used as preparation for the paper-based test. The Math Review is a free document that revises the math concepts which are important for the GRE test. Apart from the free materials, there are more paid preparation materials which can be bought. However, there are a number of free GRE apps easily available online, among which BYJU'S GRE Learning App is the leading provider of app-based study; the app can be downloaded from Google Play Store and App Store.
The GRE General test costs US$205 irrespective of the type of test the candidate chooses, viz. Computer-based or Paper-based. In terms of Indian Rupees, the GRE test costs approximately INR 13,330.

## GRE General Test Syllabus

| Section | Syllabus |
| --- | --- |
| Verbal Reasoning | Basic sentence structure, verb tense, idioms and idiomatic expressions, pronoun agreement, subject-verb agreement, modifiers, parallelism, great vocabulary |
| Quantitative Reasoning | Number, percentage, profit and loss, ratio and proportion, simple and compound interest, speed distance and time, permutation and combination, linear equations, quadratic equations, set theory, statistics, powers and roots, probability, work and time, geometry, coordinate geometry, mensuration |
| Analytical Writing | Basic sentence structure |

## GRE General Test vs GRE Subject Test

| Basis | GRE General Test | GRE Subject Test |
| --- | --- | --- |
| Purpose | Entry to Graduate, Business or Doctoral degree | Entry to technical graduate programs |
| Required by | Graduate or Business schools | Scientific and technical universities or individual departments |
| Adaptive | Computer adaptive test, paper-based | Paper-based test |
| Duration | 3 hours 45 minutes | 2 hours and 50 minutes |
| Fees | US$205 | US$150 |
| Test contents | Verbal, Quant and AWA | Specialization in one of 7 subjects: Biochemistry, Cell and Molecular Biology; Biology; Chemistry; Literature in English; Mathematics; Physics; Psychology |
BYJU’S will be glad to help you in your GRE preparation journey. You can ask for any assistance related to GRE from us by just giving a missed call at +918884544444, or you can drop an SMS. You can write to us at gre@byjus.com.
https://puzzling.stackexchange.com/questions/50144/my-short-riddle-1/50151 | # My short riddle 1 [closed]
I am something. Take away a letter and I'm still such a thing. Take away another and I'm still such a thing. Take away another and I cease being such a thing. What am I now?
Hint 1:
I wasn't a word initially.
• I've rolled back your last edit (rev 4) because a) people who saw the hint have an unfair advantage and b) 'Were you actually expecting to find a hint? Ha ha ha!' is not constructive in any way. Mar 20, 2017 at 7:45
• Now it's getting broad I guess. Mar 20, 2017 at 8:26
• This puzzle cannot be accurately solved without the hint. Even then, there is nothing that suggests one answer is more correct than another. Mar 20, 2017 at 17:34
This fits
I am something. Take away a letter and I'm still one.
Alone -> Lone
Take away another and I'm still one
Lone -> One
Take away another and I cease being one.
One -> on
So you're now the word on.
Explanation
The initial something is the word alone.
(The 'still' in the next phrase indicates you were one/singular earlier too.)
Taking away the letter a, you're now the word lone, which essentially signifies singularity.
Taking away the letter l, you're now the word one.
Taking away the letter e, you're now the word on, and no longer one.
• how does the hint fits here ? Mar 20, 2017 at 8:25
• Doesn't. But every question should be designed so it can be answered without a hint. And without the hint, this fits. Mar 20, 2017 at 8:31
• That's not what was intended. Good answer though. Mar 20, 2017 at 8:32
Is it:
Postman with 3 letters to deliver?
Because:
Take one letter, take 2 - he still has a letter to deliver. Take away third, and he isn't a postman with letters anymore.
• Wait, you said it's correct and then you said it's not correct? Which is it? Mar 21, 2017 at 8:37
A very long shot !
You are nothing
Initially you were a phrase "is a"
Take away a letter "a", "is" refers to singular
Take away another letter "s", "I" refers to singular
Take away another letter "i", you are now Nothing
I am not sure. But I think, you were (Updated after hint)
A Noun or Xoun (Taking any letter as the first which doesn't give a word $\rightarrow$ as per hint)
I am something.
NOUN is something. It's a word and is used to define class of things.
Also, XOUN is gibberish. And gibberish is something.
Take away a letter and I'm still one.
Noun $\rightarrow$ Uno $\rightarrow$ One in Italian
Xoun $\rightarrow$ Uno $\rightarrow$ One in Italian
Take away another and I'm still one
Uno $\rightarrow$ UN $\rightarrow$ One in French
Take away another and I cease being one
Un $\rightarrow$ U/N stops being one. And, now you are just a letter U or N. :-)
• Uno is also Spanish for 1 ;) Mar 20, 2017 at 12:00
You are a
bee
take a letter away
be
and another
b
and another
• This was my first thought, too... until the hint, of course. Mar 20, 2017 at 20:54
• @SenorAmor a bee isn't a word ... it is an insect. 'be' is just a word on the other hand ;) Mar 20, 2017 at 20:58
• To bee or not to bee? Mar 20, 2017 at 21:24
You are
IIII
Based on the hint, you aren't a word initially.
Take away any letter, you get III. Take away any other letter, you get II. Take away a third letter, and you get I. You have transitioned from nonsense to nonsense to nonsense to a word.
You are a
Sword (something that causes pain)
Take away one letter
Word (Can also cause pain)
Take away another letter
Ord (Old English for spear point) (Can also cause pain)
Take away another letter
Rd which means nothing
The hint:
The "such a thing" refers to something that inflicts pain. It initially wasn't a WORD but a SWORD (This may be a stretch)
Are you a
Postman
Explanation:
Based on answer by @Zizy, "Postman with letters" was almost correct. The question asks "what am I now" - now I am a "postman without letters" or just postman. But if this is correct, I think Zizy's answer should really get the points.
https://www.snapsolve.com/solutions/Ifx-and-y-are-directly-proportional-and-when-x-13-y-39-which-of-the-following-is-1672377491573761 | Home/Class 8/Maths/
If $$x$$ and $$y$$ are directly proportional and when $$x=13$$, $$y=39$$ , which of the following is not a possible pair of corresponding values of $$x$$ and $$y$$?( )
A. $$1$$ and $$3$$
B. $$17$$ and $$51$$
C. $$30$$ and $$10$$
D. $$6$$ and $$18$$
Answer: (C)
## Solution
Given, $$x$$ and $$y$$ are directly proportional
i.e. $$x\propto y$$
$$\Rightarrow$$ $$x=ky$$ where $$k$$ is constant.
With $$x=13$$ and $$y=39$$, we have
$$13=39k$$
$$\Rightarrow$$ $$k=\frac{13}{39}=\frac{1}{3}$$.
So, $$\frac{x}{y}=\frac{1}{3}$$.
Checking all the options as:
a.) $$x=1$$ and $$y=3$$, checking with $$k=\frac{1}{3}$$
i.e. $$x=\frac{1}{3}\times 3=1$$.
b.) $$x=17$$ and $$y=51$$, checking with $$k=\frac{1}{3}$$
i.e. $$x=\frac{1}{3}\times 51=17$$
c.) $$x=30$$ and $$y=10$$, checking with $$k=\frac{1}{3}$$
i.e. $$x=\frac{1}{3}\times 10\neq 30$$
d.) $$x=6$$ and $$y=18$$, checking with $$k=\frac{1}{3}$$
i.e. $$x=\frac{1}{3}\times 18=6$$.
Only option $$(C)$$ does not follow the proportionality.
Hence, $$(C)$$ is the correct option.
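The option check above can be replayed in a few lines of Python (a sketch added for illustration; exact fractions avoid floating-point surprises):

```python
from fractions import Fraction

# From x = 13, y = 39: k = x / y = 13/39 = 1/3
k = Fraction(13, 39)

options = {"A": (1, 3), "B": (17, 51), "C": (30, 10), "D": (6, 18)}

# A pair (x, y) is in direct proportion when x = k * y exactly.
odd_one_out = [name for name, (x, y) in options.items()
               if Fraction(x) != k * Fraction(y)]
print(odd_one_out)  # -> ['C']
```

As in the worked solution, only option C fails $$x=\frac{1}{3}y$$.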
https://stats.stackexchange.com/questions/162538/how-to-determine-the-strongest-player-in-a-team-game | # How to determine the strongest player in a team game
I have an idea for a "King of the Hill" contest in the Programming Puzzles and Code Golf Stack Exchange (https://codegolf.stackexchange.com/). The game I have in mind has four players divided into two teams of two players each with a specified play position (Team 1 White, Team 1 Black, Team 2 White, Team 2 Black) and three discrete outcomes (Team 1 Win, Team 2 Win, Draw). This would allow me to record each match in the tournament as Team 1 White Player Name, Team 1 Black Player Name, Team 2 White Player Name, Team 2 Black Player Name, Outcome. My idea was that I could run a tournament where every combination of four unique contestants would play several matches per permutation of players in that combination, record the results, and then determine the contest winner based upon the contestant that participated in the most games where that player won.
A user in that community is concerned that the only way to determine a contestant's strength is to effectively reduce the game to a two-player game by having each contestant play both colors on the team. If this is the case, then I think that may eliminate the fun of the competition.
Is there some statistical method that can be used to determine an individual player's strength in a team game? I think that there might be a way to determine a contribution (correlation, maybe?) between a player's outcomes and his performance as opposed to his teammate's performance. The limited statistics I learned in college does not seem to be helping me much here. I had thought that my best bet would be ANOVA, but the fact that each result between teams AB and CD will be accounted for in each team's observations seems like it might break that test.
If such a test exists, I would appreciate being told its name and being given an explanation for its process.
• You could try logistic regression with one feature for each player, where 1 means a player is on Team 1, 0 means they are not in the match, and -1 means they are on Team 2. A higher weight for a particular player would mean a higher contribution to winning. – Davis Yoshida Jul 24 '15 at 20:55
• If you're interested in use (more than in development), you should give a try to rankade, our ranking system. It's free to use, it can manage two faction with more than one players (2-vs-2, 3-on-3, and more, including asymmetrical factions), and it produces individual rankings. Here's a comparison between most known ranking systems, including Trueskill, another option for your task. – Tomaso Neri Dec 1 '15 at 7:56
For a nice introduction to the model type this is a nice reference using R. To fit this model I think you will have to write some code yourself, although it is not too hard and I am happy to help with that if you can provide some sample data.
For match $m$ where players $i_1$ and $j_1$ play in team 1 against players $i_2$ and $j_2$ in team 2, where $i$ denotes the white position and $j$ the black position, the model might look something like: $$P(Y_m \leq k) = \frac{e^{\theta_k + (w(i_1) + b(j_1)) - (w(i_2) + b(j_2))}}{1 + e^{\theta_k + (w(i_1) + b(j_1)) - (w(i_2) + b(j_2))}}$$ where $Y_m$ denotes the outcome of match $m$ coded as: $$Y_m = \left \{ \begin{array}{ll} 1 \quad \text{if team 1 wins} \\ 2 \quad \text{if draw} \\ 3 \quad \text{if team 2 wins} \end{array} \right.$$ where $-\infty < \theta_1 < \theta_2 < \theta_3 = \infty$ are the threshold/intercept parameters. Setting $\theta_1 = -\theta$ and $\theta_2 = \theta$ with $\theta \geq 0$ ensures that team 1 and team 2 have the same probability of winning if the overall ability of each team is equal (the ability of team 1, for example, is $w(i_1) + b(j_1)$). So you just need a single parameter here, $\theta$, which largely determines the probability of a draw outcome.
$w$ and $b$ are then model parameter vectors containing the ability of each player in a particular index, for the white and black positions respectively (these may be the same i.e. $w=b$. I have no idea if there is any difference between the positions). The players in a match, $i_1$, $j_1$, $i_2$ and $j_2$ are really just indexes to access the correct element of the vector.
If you are able to then infer the parameter vectors $w$ and $b$ from data then you would be able to rank the players' ability in each position. If $w=b$ is a reasonable assumption then it will be easier to rank the players based on overall ability. Also this method allows you to compare players who have not met, if there is some link between them via some commonly played players.
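The logistic-regression idea from the first comment can be sketched in plain Python (an illustrative gradient-ascent fit on made-up match data, not the answer's ordinal model; draws are ignored in this simplification):

```python
import math

def fit_player_strengths(matches, n_players, lr=0.1, epochs=500):
    """Fit per-player strengths by logistic regression (gradient ascent).

    Feature encoding: a player's feature is +1 when they are on team 1,
    -1 when on team 2, and 0 when not playing, so the linear score of a
    match is (sum of team-1 strengths) - (sum of team-2 strengths).
    """
    w = [0.0] * n_players
    for _ in range(epochs):
        for team1, team2, team1_won in matches:
            score = sum(w[p] for p in team1) - sum(w[p] for p in team2)
            p_win = 1.0 / (1.0 + math.exp(-score))   # P(team 1 wins)
            grad = (1.0 if team1_won else 0.0) - p_win
            for p in team1:
                w[p] += lr * grad
            for p in team2:
                w[p] -= lr * grad
    return w

# Hypothetical results for players 0..3: player 0's team always wins.
matches = [
    ((0, 1), (2, 3), True),
    ((0, 2), (1, 3), True),
    ((0, 3), (1, 2), True),
    ((1, 2), (0, 3), False),
    ((1, 3), (0, 2), False),
    ((2, 3), (0, 1), False),
]
strengths = fit_player_strengths(matches, 4)
```

On this toy data, player 0 ends up with the clearly largest weight, which is exactly the "higher weight means higher contribution to winning" reading from the comment.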
https://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-1-section-1-5-prime-factorization-exercise-page-25/55 | # Chapter 1 - Section 1.5 - Prime Factorization - Exercise: 55
$2\times3\times7$
#### Work Step by Step
$2\,|\,\underline{42}$ (last digit is even)
$3\,|\,\underline{21}$ (sum of digits is divisible by 3)
$\ \ \ \ \ 7$ (prime, so the ladder stops)

Hence $42 = 2\times3\times7$.
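The repeated-division ladder above generalizes to a short trial-division routine (a sketch in Python for illustration):

```python
def prime_factors(n):
    """Return the prime factorization of n as a sorted list (trial division)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # any leftover n is itself prime
        factors.append(n)
    return factors

print(prime_factors(42))  # -> [2, 3, 7]
```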
https://igraph.org/r/html/1.2.7/st_cuts.html | # R igraph manual pages
Use this if you are using igraph from R
## List all (s,t)-cuts of a graph
### Description
List all (s,t)-cuts in a directed graph.
### Usage
st_cuts(graph, source, target)
### Arguments
graph: The input graph. It must be directed.
source: The source vertex.
target: The target vertex.
### Details
Given a directed graph G and two different, non-adjacent vertices, s and t, an (s,t)-cut is a set of edges such that after removing these edges from G there is no directed path from s to t.
### Value
A list with entries:
cuts: A list of numeric vectors containing edge ids. Each vector is an (s,t)-cut.
partition1s: A list of numeric vectors containing vertex ids; they correspond to the edge cuts. Each vertex set is a generator of the corresponding cut, i.e. in the graph G=(V,E), the vertex set X and its complement V-X generate the cut that contains exactly the edges that go from X to V-X.
### Author(s)
Gabor Csardi csardi.gabor@gmail.com
### References
JS Provan and DR Shier: A Paradigm for listing (s,t)-cuts in graphs, Algorithmica 15, 351–372, 1996.
### See Also

st_min_cuts to list all minimum cuts.
### Examples
# A very simple graph
g <- graph_from_literal(a -+ b -+ c -+ d -+ e)
st_cuts(g, source="a", target="e")
# A somewhat more difficult graph
g2 <- graph_from_literal(s --+ a:b, a:b --+ t,
a --+ 1:2:3, 1:2:3 --+ b)
st_cuts(g2, source="s", target="t")
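For intuition (a pure-Python brute force added here, not part of igraph), the enumeration described under Details and Value can be mimicked directly: every "closed" vertex set X (s in X, t not in X, and no edge entering X from outside) generates one cut, namely the edges going from X to V-X.

```python
from itertools import combinations

def all_st_cuts_bruteforce(n, edges, s, t):
    """List (s,t)-cuts of a small directed graph on vertices 0..n-1.

    Mirrors the partition1s description: each "closed" vertex set X
    (s in X, t not in X, no edge from V-X into X) generates the cut
    consisting of the edges that go from X to V-X.
    """
    others = sorted(set(range(n)) - {s, t})
    cuts = []
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            X = {s, *extra}
            if any(u not in X and v in X for u, v in edges):
                continue  # not closed: an edge enters X from outside
            cuts.append((X, [(u, v) for u, v in edges if u in X and v not in X]))
    return cuts

# The path a -+ b -+ c -+ d -+ e from the first example, as vertices 0..4.
path_cuts = all_st_cuts_bruteforce(5, [(0, 1), (1, 2), (2, 3), (3, 4)], 0, 4)
```

On the simple path graph this yields four cuts, each a single edge, matching what st_cuts reports for the first example.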
[Package igraph version 1.2.7 Index]
https://tongfamily.com/2021/06/29/smart-blinds-cooling-and-dealing-extreme-heat/ | OK, well, Seattle just suffered a historic three days of high temperatures (in June?) of over 100F. Portland hit a record 117F. To give you a sense of how hot that is, the Las Vegas record is 117F/48C so that is awesomely high. So what can you do in this kind of temperature. In Seattle, the record was 108F which broke the 105 record in July 2009. So it is hard to know what the statistics hold, but 2020 and 2016 were the two hottest years on record globally.
So, what's a person to do given this is happening and you don't have air conditioning? Well, here is some advice (from folks like us who have lived without A/C) for those circumstances where there is low humidity and the temperature falls in the evening (think desert temperatures). For instance, here, the high was 108F but the low that night was 76F.
1. Get up early, when the temperatures fall (if you are lucky; in Asia, oftentimes the temperature just stays at 90F/90% humidity, where this won't work). In the Northwest, it was down to 72F at 4AM, which is the time to get up and open all the windows to get the house as cool as possible.
2. Then as the temperatures rise, you can begin to close up the East side where the heat is going to be. We like to close up the rooms on that side too.
3. For these incredibly hot days, going lower is good. As an example, the basement stayed a static 75F and our garage which is in the shade and has a strange double roof with 2 feet of air stayed 72F the whole time (although humidity there bumped to 72% and the PM2.5 really climbed as the cars just ooze chemical junk. Yuck!)
4. Basically, you want to seal the rooms and these act like insulation for the rooms you are living in.
5. We ended up in our only room at the Northwest corner of the house (and then the basement, because soil is a great insulator) that happens to be shaded, and with that trick the temperatures there never got above about 80F, which is pretty amazing given it was nearly 30 degrees hotter outside.
6. Finally, water is an incredibly effective coolant particularly when it is dry. So a little water spritzing and a jump in a pool work really well. I tried this experiment working outside and then periodically dunking.
## Longer term things to think about
Well, it does seem like things are getting warmer, so what are some things that you can do longer term:
1. Insulation and thermopane and sealed windows. It's counterintuitive, but insulation really helps in the summer as well as the winter. Get as much as you can.
2. Attic fan. We can't do this because of what our roof is, but having an attic fan really helps lower temperatures. Attics can be really, really hot and these can be automatically controlled. Some are really cool, like being solar-powered. Even if you have A/C, these are great units to cool the 150F air up there. And if you are DIY inclined, you can set it up with an ESP8266 driving a 3-channel relay that can control a 2-speed attic fan, but you probably really want an
3. Get heat-insulating honeycomb blinds. OK, this is another thing that you can do which are blinds that come down automatically and which are heat reflecting. The so-called honeycomb blinds look less bulky than classic metal or wood and are better insulating, but you can't turn them halfway to get some light, so they are great for insulation or as a secondary set behind regular curtains. And if you care about insulation, the more cells the better. Also, you can get them in lightening or blackout. Obviously the latter is better as a second set to sleep better and insulate more.
4. Make them smart. So if you get blinds, then you can think about making them smart blinds. That's a cool idea where they will go down automatically as things heat up. Ikea makes a vertical blind system in widths from 28 to 48" wide that have motors at the top. Pretty inexpensive at $148 or so. They have a controller, but with one of their Zigbee boxes, you can make it work with HomeKit. The big limitation is that they are at most 74" long, so if you have really big floor-to-ceiling windows they won't work. They are battery powered though, so you have to put new batteries in. If you already have blinds, then Soma makes a bunch of battery-powered actuators that can automatically operate them: the Soma Tilt 2 works on horizontals and the Soma Shades 2 works on anything with a chain on it. Finally, at the high end, you can get the Lutron Serenas, which have really long-lasting batteries, or you can get them with a 12V wall wart, or even a power distributor if you can get into the walls for, say, new construction. They are also custom made, so you get exactly the right length; they are about 10x more expensive than Ikea 🙂 but they will exactly fit and that does matter.
5. Temperature sensors. With the new generation of smart sensors (we have a Kaiterra and are getting the Eve Room), there are great solutions. They are pretty expensive, but since they tie into HomeKit, they are secure and private. They will give you a real-time feed of temperature, so if you have sensitive stuff like wine for instance, you can be safe knowing it is stored properly. Buying these is a little complicated, but the Kaiterra Laser Egg comes in three flavors. The Laser Egg is $149 and measures PM2.5 and temperature. The Kaiterra Egg+Chemical also measures TVOC (think alcohol) and junk like that for $199. Then for places where you have gas or other bad stuff, the Kaiterra Egg+CO2 is $199 and measures that dangerous gas.
One nice thing about these devices is that they have rechargeable batteries and connect to wall power, so you never need to change out batteries (which I always forget!). For most folks, if money is no object, I would say the Kaiterra Egg+CO2 makes the most sense if you don't have CO detectors now (everyone should :-). Finally, a note on the Eve Room: it is indoor and Bluetooth, but because it also uses Thread, if you have a HomePod mini around, you are more likely to connect to it (otherwise it falls back to Bluetooth). The thing doesn't have CO2, but does have TVOC, at $180 for two of them.
6. Scale up your A/C. If you have it, then you are going to need a lot more tonnage if these temperatures keep rising, so it might be a good time to think about that. In the example, we are marked as Zone 4, so low-temperature needs, but if you believe things are getting worse (one rule of thumb is to look down 300 miles towards the equator and that is what you have to plan for), you bump up a region. As an example, if you have say 1,500 square feet per floor, then in Zone 4 in typical temperatures you would need a 2.5-ton unit. But if you think it's getting hotter, then you will need more like 3 tons. An aside: 1 ton means your A/C can cool 12,000 BTUs per hour, which means you can cool 12,000 pounds of water by one degree per hour. Note that 5 tons is the residential limit, so if you need more, you install multiple units in tandem. Note that if you do buy a unit that is "too big" then it will cycle rapidly and you lose efficiency, but you are ready for the really bad days. The old-school way is that square footage * 30 is the rough BTUs you need, and then subtract a ton for most cooler regions. Or leave it as-is if you are in hot climates (like yesterday). So, 1,500 x 30 = 45,000, which is 3.75 tons: if you are worried about the heat you need a 3-4 ton unit, otherwise subtract one and you need a 2.5-3 ton unit in typical climates.
And if you are really doing this, have them do a Manual-J analysis. A more complex formula is ((House sq.ft. x 25) / 12,000) - 0.5, and you can see it is just different factors. One thing about the multiplier is the SEER rating, which is basically how efficient and well insulated your setup is, and then the subtraction is a rough way to also adjust it.
7. Get a mister for your air conditioner. This is not going to work everywhere. First of all, it helps if it is really dry heat and you have soft water (no minerals, and not too humid, so you don't get mold forming), but in temperate Seattle it can work well; it is supposed to improve efficiency by making the input air moist. A really clever design by cool'n save makes it possible with a flapper that mechanically opens the misters just by the air pressure. So no electronics at all. You do have to remove it in the winter, but a decent solution. It does require that you have water near your A/C, so you need to connect it to a garden hose (with a Y-connector, as it doesn't draw much water at all).
8. Get a smart sprinkler system. If it is hot, then you need this kind of thing to keep your plants alive. The new ones are HomeKit enabled as well, so you can see what is going on. It is apparently pretty easy to install: you just replace your current unit, and they have a set of wire pairs (8-zone or 16-zone) that you plug into this thing. If you have rain sensors, it does the same. The Rachio Sweet 16 for instance is $280 and the Sweet 8 is $180, and the Rainmachine Pro 16 is $220 (and also doesn't store anything in the cloud and is all local). There are lots of installation issues with these and most seem related to, of course, flaky WiFi. So the Rainmachine Pro 16 might be the best choice since it has a dedicated Ethernet port, and you can always hook up a real WiFi access point if you need it. Finally, right now it is just $169 at Rainmachine, so not a bad deal for 16 zones.
There is also a Rainmachine HD, but the reviews say that the connections are quite a bit smaller and harder to stick into the device. Just make sure that you note which zone is which as you move the wires, so you can port the controls from your old unit to your new one.

9. Get a mister for your patio area. This again works if it is a dry heat, but for $22 from Redtron you get a hose with misters every few feet; you basically mount it above you and it sprays, keeping you cool. Way easier than dunking all the time. You probably also want a Y-connector and something like the Eve Aqua water controller, so you can turn the misters on and off remotely. If you are doing this, you probably don't want a 2-way splitter since the Eve will sit crooked, so get a 4-way and it will be nice and straight. Finally, if you do have a sprinkler system, one trick is to take one of the zones and wire it to the misters above; then when it is hot, you can use that zone as a mister, so you don't need a dedicated hose system.
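The sizing rules of thumb from item 6 are easy to put into a few lines. This is a sketch of the old-school formula only (30 BTU per square foot, 12,000 BTU per ton, minus a ton in cooler regions) and is not a substitute for a proper Manual-J analysis:

```python
def tons_needed(square_feet, hot_climate=False):
    """Rough A/C sizing: 30 BTU per sq ft, 12,000 BTU per ton,
    minus one ton in cooler regions (the old-school rule of thumb)."""
    tons = square_feet * 30 / 12_000
    if not hot_climate:
        tons -= 1.0
    return tons

print(tons_needed(1500, hot_climate=True))  # 3.75 -> look at a 3-4 ton unit
print(tons_needed(1500))                    # 2.75 -> a 2.5-3 ton unit
```

This reproduces the 1,500 sq ft example above: 3.75 tons if you plan for the heat, 2.5-3 tons otherwise.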
http://mathhelpforum.com/geometry/88554-volume-pentagonal-prism.html
# Math Help - Volume of a pentagonal prism
1. ## Volume of a pentagonal prism
Find the volume of a regular pentagonal prism with a height of 5 feet and a perimeter of 20 feet
Is there an easier way to find the volume of the prism without finding the area of the base and then multiplying by the height?
To find the area of the base I did (5*4^2)/(4*tan(pi/5)); can this be done more easily?
Thank you
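For what it's worth, base area times height is already the shortest route; the arithmetic in the question can be checked with a short script (the helper function and its name are my own, not from the thread):

```python
import math

def regular_prism_volume(n_sides, perimeter, height):
    """Volume of a right prism over a regular n-gon, given the perimeter."""
    s = perimeter / n_sides                       # side length (here 20/5 = 4 ft)
    base_area = n_sides * s**2 / (4 * math.tan(math.pi / n_sides))
    return base_area * height

v = regular_prism_volume(5, 20, 5)
print(round(v, 2))  # ~137.64 cubic feet
```

The base-area term is exactly the formula in the post, with the side length 4 coming from perimeter/5.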
https://gamedev.stackexchange.com/questions/26627/why-would-anti-aliasing-work-for-the-debug-runtime-but-not-the-retail-runtime | # Why would anti-aliasing work for the debug runtime but not the retail runtime?
I'm experimenting with setting various graphical settings in my Direct3D9 application, and I'm currently facing a curious problem with anti-aliasing. When running under the debug runtime, AA works as expected, and I don't have any errors or warnings. But when running under the retail runtime, the image isn't anti-aliased at all. I don't get any errors, the device creates and executes just fine.
As I honestly have little idea where the problem is, I will simply give a relatively high-level overview of the architecture involved, rather than specific problematic code. Simply put, I render my 3D content to a texture, which I then render to the back buffer.
Why would this be?
• Is the D3D runtime the only thing that changes or do you have different targets for your application that use a different runtime (e.g. Debug/Release)? If that's the latter, you might try disabling your Debug features one by one to get closer to your Release target. Maybe you've got an #ifdef _DEBUG somewhere that's screwing things up. – Laurent Couvidou Apr 2 '12 at 11:34
• @lorancou: No, I'm running the Debug build in both cases and just changing the runtime in the D3D control panel. – DeadMG Apr 2 '12 at 11:51
• So you might have a non-initialized variable issue. Using the debug runtime, D3D initializes something to 0 and everything works fine. But using the retail runtime this something doesn't get initialized and holds some random data, so your anti-aliasing breaks. My advice is to try initializing everything that's not, and next time to develop directly with the retail runtime; I'd only use the debug runtime once in a while to look for issues. – Laurent Couvidou Apr 2 '12 at 13:17
• @lorancou: I use an always_initialized<T> helper class to guarantee initialization at all times in all modes. – DeadMG Apr 2 '12 at 20:40
• This doesn't guarantee you're initializing D3D completely. Maybe you didn't initialize your viewport with IDirect3DDevice9::SetViewport. Or one of the parameters you use here is not using your always_initialized helper. Or your helper doesn't do what you think it does. Maybe you called IDirect3DDevice9::BeginScene but not IDirect3DDevice9::EndScene. It's weird that you don't get any errors with the debug runtime though, we might be missing something more obvious. – Laurent Couvidou Apr 3 '12 at 10:17
https://www.electro-tech-online.com/threads/stuck-on-migrating-16f-to-12f.84900/
# Stuck on Migrating 16F to 12F
Status
Not open for further replies.
#### bigal_scorpio
##### Active Member
Hi to all,
Can anyone help me with a problem I have in changing the code below, which uses a PIC16F877A, to a 12F683?
I have looked at the datasheets and the 683 seems to have all it needs but I am absolutely stuck with the different terminology used for the 12Fs to all the other 16F, 18F PICs.
There must be something I'm missing here but why didn't they simply assign the 683 with PORTA instead of the GPIO terminology to keep all the syntax the same?
I just thought I was learning a little when I needed to use a 683 and it all falls to pieces. The only decent examples for programs in the MEBasic are all for the larger PICs, with very little about the 12Fs and even then only the most basic of programs with not much hope of me learning from them.
If anyone could show me how to migrate this program or indeed give me an example of another MEBasic program for both 12F and 16F that I could compare then I would be very grateful and it would help my learning along nicely.
Code:
' *
' * Project name
' PWM_Test_01 (PWM1 library Demonstration)
' (c) mikroElektronika, 2008
' * Revision History
' 20080225
' - initial release.
' * Description
' This is a simple demonstration of PWM1 library, which is being used for
' control of the PIC's CCP module. The module is initialized and started,
' after which the PWM1 Duty Ratio can be adjusted by means of two buttons
' connected to pins RA0 and RA1. The changes can be monitored on the CCP
' output pin (RC2).
' * Test configuration
' MCU: PIC16F877A
' Dev.Board: EasyPIC5
' Oscillator: HS, 08.0000 MHz
' Ext. Modules: -
' SW: mikroBasic v7.1
' * NOTES
' - Pull-down PORTA and connect button jumper (jumper17) to Vcc. (board specific)
'*
program PWM_Test_01
dim current_duty, old_duty as byte
sub procedure InitMain()
PORTA = 255
TRISA = 255 ' configure PORTA pins as input
PORTB = 0 ' set PORTB to 0
TRISB = 0 ' designate PORTB pins as output
PORTC = 0 ' set PORTC to 0
TRISC = 0 ' designate PORTC pins as output
PWM1_Init(5000) ' Initialize PWM1 module at 5KHz
end sub
main:
initMain()
current_duty = 16 ' initial value for current_duty
old_duty = 0 ' old_duty will keep the 'old current_duty' value
PWM1_Start() ' start PWM1
while TRUE
' endless loop
if (Button(PORTA, 0,1,1)) then ' button on RA0 pressed
Inc(current_duty) ' increment current_duty
end if
if (Button(PORTA, 1,1,1)) then ' button on RA1 pressed
Dec(current_duty) ' decrement current_duty
end if
if (old_duty <> current_duty) then ' if change in duty cycle requested
PWM1_Change_Duty(current_duty) ' set new duty ratio,
old_duty = current_duty ' memorize it
PORTB = old_duty ' and display on PORTB
end if
Delay_ms(20) ' slow down change pace a little
wend
end.
Thanks for looking.........Al
#### skyhawk
##### New Member
There must be something I'm missing here but why didn't they simply assign the 683 with PORTA instead of the GPIO terminology to keep all the syntax the same?
There is only one port on 8-pin PICs. It's called GPIO. It's also a 6-pin port. Your code references PORTA, PORTB, and PORTC.
#### SMUGangsta
##### New Member
Have the datasheets for both next to you, as mE has all the ports defined as in the Microchip datasheets, and convert from there.
For example it will be GPIO.4 rather than PORTA.4, and I think it's TRISIO instead of TRISA etc. This caught me out a few weeks back, but by reading the header files in the mE folder for my 12F chip, it showed all the designations, so I just worked through the list checking my 16F628 code and changing it where necessary.
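To make the conversion concrete, here is an untested sketch of how the port setup from the first post might look on a 12F683 (register names taken from the 12F683 datasheet; as noted later in the thread, the mikroBasic PWM library may not support chips without a PORTC, so the PWM1 calls need checking):

Code:
' 12F683 sketch: GPIO/TRISIO replace PORTA/TRISA (no PORTB or PORTC here)
sub procedure InitMain()
    CMCON0 = 7          ' comparators off so GP0..GP2 are digital I/O
    ANSEL  = 0          ' all pins digital, no analog inputs
    GPIO   = 0
    TRISIO = %00000011  ' GP0 and GP1 as inputs (buttons), rest outputs
    PWM1_Init(5000)     ' CCP1 (the PWM output) is on GP2 on this chip
end sub

The 8-bit duty-cycle display on PORTB has no equivalent on an 8-pin part, so that line would have to go.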
#### bigal_scorpio
##### Active Member
There is only one port on 8-pin PICs. It's called GPIO. It's also a 6-pin port. Your code references PORTA, PORTB, and PORTC.
Hi Skyhawk,
Yeah I know it's only got one port; my car has 4 wheels. A unicycle only has 1 but they still call it a wheel! So why not maintain some kind of pattern?
Also I realise the code uses 3 ports but if you read it then you would realise that there are only 4 actual pins being used, and one of them is just to enable measurement of the PWM, so the 683 should still have 2 spare!
My point is why confuse things with different aliases, if it walks like a duck.......
Al
#### bigal_scorpio
##### Active Member
Have the datasheets for both next to you, as mE has all the ports defined as in the Microchip datasheets, and convert from there.
For example it will be GPIO.4 rather than PORTA.4, and I think it's TRISIO instead of TRISA etc. This caught me out a few weeks back
Hi SMUGangsta,
Yes, thats my point exactly! Who designs these things? They must be very smart but then again they say the smartest people in the world have little common sense.
Thanks mate.......Al
#### blueroomelectronics
##### Well-Known Member
You could always modify the .inf files and add the tris & port defines.
I never understood why the naming convention was done either.
#### picasm
##### Member
I think the different port naming on the 12f chips probably dates back to when they first introduced the 12c series.
At that time, existing chips such as the 16c84 did not make very efficient use of all pins. eg. they usually had separate pins for ports, oscillator and mclr.
To make full use of the only 8 pins available on the 12c chips they had to make some pins multi-function - perhaps that is why they called them "General Purpose"
#### bigal_scorpio
##### Active Member
You could always modify the .inf files and add the tris & port defines.
I never understood why the naming convention was done either.
Hi Bill,
That does sound interesting, but I have never messed with the inf files, can you give me an example?
Thanks....Al
#### Nigel Goodwin
##### Super Moderator
Hi Bill,
That does sound interesting, but I have never messed with the inf files, can you give me an example?
Thanks....Al
It's the INC files, not the INF files.
It's just a simple text substitution, here's a section from the 16F84.inc file
Code:
PORTA EQU H'0005'
PORTB EQU H'0006'
It you wanted to use GPIO instead of PORTA (and as well as), just add an extra line:
PORTA EQU H'0005'
GPIO EQU H'0005'
PORTB EQU H'0006'
It's really as simple as that - any occurrence of PORTA or GPIO in the source code would be replaced by H'0005' during assembly.
#### skyhawk
##### New Member
Also I realise the code uses 3 ports but if you read it then you would realise that there are only 4 actual pins being used, and one of them is just to enable measurement of the PWM, so the 683 should still have 2 spare!
I don't do BASIC so I could be mistaken, but the way that I read it is:
PORTA - two input pins, increment and decrement buttons
PORTB - output for a number 0-16, requires at least 5 pins
PORTC - PWM output
At the very least your TRIS is going to be split between input and output, not like the example where a port is entirely input or output.
I looked at the documentation
http://www.electro-tech-online.com/custompdfs/2008/10/mikrobasic_manual.pdf
It seems to suggest that the PWM library only works for chips with a PWM module on PORTC. Maybe there is an exception for 8-pin PICs or maybe you are going to have to write your own routine loading data into SFRs. That shouldn't be hard, just read the datasheet for the 12F683.
edit: Looking more closely it looks as though all 8 bits of PORTB are used to display the duty cycle.
#### bigal_scorpio
##### Active Member
Hi Nigel,
Hows things going down your way mate?
Thanks for that info, I see how it works now, its a bit like declaring aliases.
BTW I used the ESR meter yesterday to find a fault in a power transformer that the DMM said was ok! More uses for it all the time.
Al
#### Nigel Goodwin
##### Super Moderator
Hi Nigel,
Hows things going down your way mate?
Thanks for that info, I see how it works now, its a bit like declaring aliases.
Yes, it looks a bit confusing, but once you realise it's just a simple text substitution it makes more sense.
BTW I used the ESR meter yesterday to find a fault in a power transformer that the DMM said was ok! More uses for it all the time.
Glad you found a use for it, I never got round to finishing it off
#### bigal_scorpio
##### Active Member
Problem solved!
Hi guys,
Thanks for all the info.
I have now decided on a PIC16F872, as it was pointed out to me that the PWM library in MeBasic only supports the C port and I am not confident enough to write my own PWM routine.
Circuit will be a bit larger but at least I can do it now!
Thanks again...........Al
https://trueq.quantumbenchmark.com/guides/quantum_capacity.html
# Quantum Capacity
The True-Qᵀᴹ Quantum Capacity (QCAP) tool provides a bound on the performance of a circuit performed under Randomized Compiling (RC). Performance evaluation is measured as the total variational distance (TVD) between the ideal bitstring distribution of the circuit and the empirical bitstring distribution measured by a quantum device.
For example, if the ideal distribution of measurement bitstrings of a 2-qubit circuit is {“00”: 0.5, “11”: 0.5} and in 1000 shots the results {“00”: 552, “01”: 21, “11”: 427} are measured, then the TVD between these two distributions is $$(|0.5-0.552|+|0-0.021|+|0.5-0.427|)/2=0.073$$, which represents an estimated 7.3% chance of getting a wrong bitstring on a given shot. Of course, computing the ideal bitstring distribution involves a full quantum simulation, which is not scalable. QCAP is able to estimate an upper bound on the TVD without such a simulation by characterizing the error rate of each cycle in the circuit and combining the results. This upper bound assumes that the circuit is being run under randomized compiling.
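As a sanity check, the TVD computation in the example can be reproduced in a few lines of plain Python (the helper function and its name are my own; True-Q itself is not needed for this):

```python
def tvd(ideal, counts):
    """Total variational distance between an ideal bitstring distribution
    and the empirical distribution built from measured counts."""
    shots = sum(counts.values())
    empirical = {k: v / shots for k, v in counts.items()}
    keys = set(ideal) | set(empirical)
    return 0.5 * sum(abs(ideal.get(k, 0.0) - empirical.get(k, 0.0)) for k in keys)

ideal = {"00": 0.5, "11": 0.5}
counts = {"00": 552, "01": 21, "11": 427}
print(tvd(ideal, counts))  # ~0.073, i.e. a 7.3% chance of a wrong bitstring
```

Note that bitstrings missing from either distribution (like “01” in the ideal one) contribute their full probability mass to the sum.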
Running make_qcap() on a circuit will return a collection of Cycle Benchmarking (CB) circuits for every cycle in the circuit. Running fit() on the circuit collection allows users to retrieve infidelities for each cycle in the circuit. To retrieve the bound on quantum capacity for the circuit, qcap_bound() should be used. This bound is calculated using the results of the generated Cycle Benchmarking (CB) experiments.
Note
The QCAP bound reported could more accurately be called an estimated upper bound, as it depends on experimental/simulator data, and will therefore return a slightly different value each time it is called for a given circuit. The qcap_bound() function returns an estimate of the TVD and the standard deviation corresponding to that estimate.
https://polymake.org/doku.php/news/release_2_12
# New Features in Release 2.12
Release date: March 19, 2012.
Release 2.12 is now available on the Downloads page. Below we list the most important new features. In addition to that we have a number of minor additions and bug fixes.
Two scripts simplify the computation of the convex hull of the lattice (integer) points in a polyhedron. These scripts will be described in more detail in the tutorial polymake and Optimization.
Polymake is now able to produce and display TikZ pictures via an interface to Sketch 3D. Here is a simple example:
sketch(cube(3)->VISUAL);
sketch(cube(3)->VISUAL, File=>"cube.sk");
With the second command one can export a polytope to a sketch input file. To get the TikZ code just call sketch.
The interface can handle most of the basic customization options like colors for facets and vertices as well as thickness, transparency, … (see also Tutorial for Visualization)
sketch(cube(3)->VISUAL(VertexThickness=>3, FacetColor=>"blue", FacetTransparency=>1));
The visual object now has three more options to customize the viewing parameters for the sketch output.
sketch(cube(3)->VISUAL(ViewPoint=>[1,2,3], ViewDirection=>[0,0,0], ViewUp=>[0,1,1]));
• ViewPoint is the point from where one is looking at the object (Default: [10,11,9]).
• ViewDirection is the point to where one is looking (Default: [0,0,0]).
• ViewUp defines which direction should be up (Default: [0,1,0]).
On the Perl side, all coordinate-based big object types (such as Polytope, Cone, Fan) as well as the related small object types (such as Vector, Matrix) now have Rational as the default template parameter. It may thus be omitted.
https://sites.math.rutgers.edu/~wz222/seminar/AACM.html
# Applied and Computational Math Seminar
## Scheduled Talks (Fall 2019): Room 425, 2:00 - 3:00pm
Date: Mar 13, 2020
Speaker: Shravan Veerapaneni, University of Michigan, Ann Arbor
Title: Fast solvers for simulating particulate flows in complex geometries
Abstract: From blood flow to subsurface flows, particulate flows are ubiquitous. Direct numerical simulations of dense, rigid or deformable particle suspensions in viscous fluids are extremely challenging yet critically important to bring insights into their macro-scale flow behavior. In this talk, I will present recent advances made by our group in overcoming several computational bottlenecks such as accurate evaluation of nearly-singular integrals, periodization schemes for complex geometries and robust collision resolution algorithms. Applications in the design of microfluidic chips, shape optimization and electrohydrodynamics of vesicles will be discussed.
## Past Talks:
Date: Feb 28, 2020
Speaker: Samuli Siltanen, University of Helsinki, Finland
Title: Classifying stroke from electric boundary data by nonlinear Fourier analysis
Abstract: Stroke is a leading cause of death all around the world. There are two main types of stroke: ischemic (a blood clot preventing blood flow to a part of the brain) and hemorrhagic (bleeding in the brain). The symptoms are the same, but the treatments are very different. A portable "stroke classifier" would be a life-saving piece of equipment to have in ambulances, but so far it does not exist. Electrical Impedance Tomography (EIT) is a promising and harmless imaging method for stroke classification. In EIT one attempts to recover the electric conductivity inside a domain from electric boundary measurements. This is a nonlinear and ill-posed inverse problem. The so-called Complex Geometric Optics (CGO) solutions have proven to be a useful computational tool for reconstruction tasks in EIT. A new property of CGO solutions is presented, showing that a one-dimensional Fourier transform in the spectral variable provides a connection to parallel-beam X-ray tomography of the conductivity. One of the consequences of this "nonlinear Fourier slice theorem" is a novel capability to recover inclusions within inclusions in EIT. In practical imaging, measurement noise causes strong blurring in the recovered profile functions. However, machine learning algorithms can be combined with the nonlinear PDE techniques in a fruitful way. As an example, simulated strokes are classified into hemorrhagic and ischemic using EIT measurements.
Date: Feb 21, 2020
Speaker: Eric Bonnetier, Institut Fourier Grenoble,
Title: Homogenization of the Poincare Neumann operator
Abstract: The Neumann-Poincaré operator is an integral operator that allows the representation of the solutions to elliptic PDEs with piecewise constant coefficients using layer potentials. Its spectral properties are of interest in the study of plasmonic resonances of metallic particles. We discuss the spectrum of that integral operator when one considers a periodic distribution of inclusions made of metamaterials in a dielectric background medium. We show that under the assumption that the inclusions are fully embedded in the periodicity cells, the limiting spectra of periodic NP operators is composed of a Bloch spectrum, and of a boundary spectrum associated with eigenfunctions which concentrate a part of their energy near the boundary. This is joint work with Charles Dapogny and Faouzi Triki.
Date: Feb 14, 2020
Speaker: Hiroshi Takeuchi, Chubu University, Japan
Title: Application of persistent homology to granular materials and sampled dynamical systems
Abstract: Persistent homology is a tool describing the shape of data. In this talk, we apply persistent homology to two topics; granular materials and sampled dynamical systems. A granular material is a set of macroscopic particles, such as sand, nuts, or coffee beans. The simplest model of granular materials is monodisperse sphere packings. We utilize persistent homology to capture the change of topological configurations in the crystallization of sphere packings. In the second application, we focus on a sampled map, which is a finite subset of a map. A notable example is that the map is a discrete dynamical system, then a sampled map is a sampling from the dynamical system. Our motivation is to retrieve topological information of the dynamical system only from the finite sampling data. Persistent homology can extract the robust topological changes in the underlying map, and the cycles of the homology generators provide the model of the underlying map.
Date: Feb 7, 2020
Speaker: Zin Arai, Chubu University
Title: Period doubling bifurcations from complex and algebraic point of view
Abstract: It is well known in one-dimensional dynamical systems that the period doubling bifurcations plays an essential role in the creation of chaos. In this talk, we will see that this is also the case for higher dimensional systems by studying the monodromy representation of the system and the dynamical zeta function. This is a first step to understand the mysterious topology of the higher dimensional analog of the Mandelbrot set.
Date: Dec 6, 2019
Speaker: Catalin Turc, NJIT
Title: Optimized Schwarz Methods for the iterative solution of quasiperiodic Helmholtz transmission problems in layered media
Abstract: We present an Optimized Schwarz Domain Decomposition Method applied to Helmholtz transmission problems in periodic layered media. Unlike the classical domain decomposition approach that relies on exchange of Robin data on the subdomain boundaries, we incorporate instead transmission operators that are approximations of Dirichlet-to-Neumann (DtN) operators. The latter approximations, in turn, are obtained via shape perturbation series. The Robin-to-Robin (RtR) operators that are the building blocks of Domain Decomposition Methods are expressed via boundary integral equation formulations that are shown to be robust for all frequencies, including the challenging Wood frequencies. We use Nyström discretizations of quasi-periodic boundary integral operators to construct high-order approximations of the RtR operators. Based on the premise that the quasi-optimal transmission operators should act like perfect transparent boundary conditions, we construct an approximate LU factorization of the tridiagonal QO DD matrix associated with periodic layered media, which is then used as a double sweep preconditioner. We present a variety of numerical results that showcase the effectiveness of the sweeping preconditioners for the iterative solution of Helmholtz transmission problems in periodic layered media. Joint work with David Nicholls (UIC) and Carlos Perez Arancibia (PUC Chile).
Date: Nov 22, 2019
Speaker: Isaac Harris, Purdue University
Title: Direct Sampling Algorithms in Inverse Scattering
Abstract: In this talk, we will discuss a recent qualitative imaging method referred to as the Direct Sampling Method for inverse scattering. This method allows one to recover a scattering object by evaluating an imaging functional that is the inner product of the far-field data and a known function. It can be shown that the imaging functional is strictly positive in the scatterer and decays as the sampling point moves away from the scatterer. The analysis uses the factorization of the far-field operator and the Funk-Hecke formula. This method can also be shown to be stable with respect to perturbations in the scattering data. We will discuss the inverse scattering problem for both acoustic and electromagnetic waves.
Date: Nov 15, 2019
Speaker: Shawn Walker, Louisiana State University
Title: The Uniaxially Constrained Q-tensor Model for Nematic Liquid Crystals
Abstract: We consider the one-constant Landau-de Gennes (LdG) model for nematic liquid crystals with traceless tensor field Q as the order parameter that seeks to minimize a Dirichlet energy plus a double well potential that confines the eigenvalues of Q (examples/applications will be described). Moreover, we constrain Q to be uniaxial, which involves a rank-1 constraint. Building on similarities with the one-constant Ericksen energy, we propose a structure-preserving finite element method for the computation of equilibrium configurations. We prove stability and consistency of the method without regularization, and $\Gamma$-convergence of the discrete energies towards the continuous one as the mesh size goes to zero. We also give a monotone gradient flow scheme to find minimizers. We illustrate the method's capabilities with several numerical simulations in two and three dimensions including non-orientable line fields. In addition, we do a direct comparison between the standard LdG model, and the uniaxially constrained model.
Date: Nov 8, 2019
Speaker: Jingni Xiao, Rutgers University
Title: Corner scattering and some applications
Abstract: We consider time-harmonic medium or source scattering. We examine the phenomenon that the scattering is never trivial when corners appear at the support of the medium inhomogeneity or the source. In particular, interior transmission eigenfunctions in a cornered domain cannot be extended into the neighborhood as an incident wave field. Some applications, like shape determination in inverse scattering, will also be discussed.
Date: Oct 25, 2019
Speaker: Alex Blumenthal, University of Maryland
Title: Lyapunov exponents for small random perturbations of predominantly hyperbolic volume-preserving diffeomorphisms, including the Standard Map
Abstract: An outstanding problem in smooth ergodic theory is the estimation from below of Lyapunov exponents for maps which exhibit hyperbolicity on a large but non-invariant subset of phase space, e.g. the Chirikov standard map or Hénon map families. It is notoriously difficult to show that Lyapunov exponents actually reflect the predominant hyperbolicity in the system, due to cancellations caused by the switching of stable and unstable directions in those parts of phase space where hyperbolicity is violated. In this talk I will discuss the inherent difficulties of the above problem, and will discuss recent results when small random perturbations are introduced at every time-step. In this case, we show that for a large class of predominantly hyperbolic systems in two dimensions, the top Lyapunov exponent is large in proportion to the strength of the predominant hyperbolicity in the system. Our results apply to the standard map with large coefficient. This work is joint with Lai-Sang Young and Jinxin Xue.
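The quantity at stake can be made concrete with a toy computation. The sketch below is not the authors' method; the noise model and all parameters are illustrative assumptions. It estimates the top Lyapunov exponent of the standard map with a small random kick added to the angle at each step, by tracking the log-growth of a tangent vector under products of Jacobians:

```python
import math
import random

def lyapunov_standard_map(K, steps=20000, eps=1e-3, seed=0):
    """Estimate the top Lyapunov exponent of the Chirikov standard map
        p' = p + K*sin(theta),  theta' = theta + p'   (mod 2*pi),
    with a small i.i.d. random kick of size eps added to the angle at
    every step, by averaging the log-growth of a tangent vector."""
    rng = random.Random(seed)
    theta, p = 0.5, 0.3
    v = (1.0, 0.0)  # tangent vector (d_theta, d_p)
    total = 0.0
    for _ in range(steps):
        theta = (theta + eps * rng.uniform(-1.0, 1.0)) % (2 * math.pi)
        c = K * math.cos(theta)
        # Jacobian [[1+c, 1], [c, 1]] of the map applied to the tangent vector
        v = ((1.0 + c) * v[0] + v[1], c * v[0] + v[1])
        p = (p + K * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
        norm = math.hypot(v[0], v[1])
        total += math.log(norm)
        v = (v[0] / norm, v[1] / norm)  # renormalize to avoid overflow
    return total / steps
```

For large `K` the estimate is expected to be close to the heuristic value log(K/2), reflecting the predominant hyperbolicity the abstract refers to.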
Date: Oct 18, 2019
Speaker: Tianhao Zhang, Rutgers University
Title: A constructive proof of the Cauchy-Kovalevskaya theorem with applications to validated numerics.
Abstract: In this talk, I will present a constructive proof of the Cauchy-Kovalevskaya theorem for ODEs. The proof is motivated by a validated numerics technique called the radii polynomial approach commonly used for polynomial ODEs. I will introduce the basic aspects of this approach and then show how we extend it to the analytic case and apply it to prove the classical Cauchy-Kovalevskaya theorem. This is joint work with Shane Kepley.
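In its simplest Banach-space form (notation generic; the talk's operator setting may differ), the radii polynomial approach verifies a contraction as follows: for a Newton-like fixed-point operator $T$ near a numerical approximation $\bar{x}$, one computes bounds $Y_0 \ge \lVert T(\bar{x}) - \bar{x}\rVert$ and $Z(r) \ge \sup_{\lVert b\rVert \le r} \lVert DT(\bar{x}+b)\rVert$, and seeks $r > 0$ with

```latex
p(r) \;:=\; Z(r)\,r \;+\; Y_{0} \;-\; r \;<\; 0 .
```

If such an $r$ exists, $T$ maps the closed ball $B_r(\bar{x})$ into itself and is a contraction there, so the contraction mapping theorem yields a unique true solution within distance $r$ of the numerical approximation.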
Date: Oct 11, 2019
Speaker: William Cuello, Rutgers University
Title: Single and Multispecies Persistence in the Face of Environmental Uncertainty
Abstract: I will be presenting my work on the analyses of single- and multispecies systems in fluctuating environments. For the first half of the presentation, I will present my work on predicting average germination rates of 10 bet-hedging Sonoran Desert annuals; here, annuals hedge their bets by keeping a fraction of their seeds dormant to buffer against uncertain amounts of annual rainfall. For the second half, I will present the mathematical framework I have developed for analyzing classes of multispecies stochastic models. This framework provides a step-by-step process by which one can determine whether a system of species will stochastically persist.
Date: Oct 4, 2019
Speaker: Elena Queirolo, Vrije University
Title: Hopf bifurcation in PDE
Abstract: In this talk we will use validated numerics to prove the existence of a Hopf bifurcation in the Kuramoto-Sivashinsky PDE.
Validated numerics, in particular the radii polynomial approach, allows one to prove the existence of a solution to a given problem in the neighborhood of a numerical approximation. A known use of this technique is branch following in parameter-dependent ODEs. In this talk, we will consider periodic solutions of polynomial ODEs.
With a blow up approach, we can rewrite the original ODE into a new system that undergoes a saddle node bifurcation instead of a Hopf bifurcation, thus avoiding the singularity of the periodic solution. We can then prove the existence of a Hopf bifurcation in the original system with a combination of analytical and validated numeric results.
To conclude, we will apply the same techniques in the Kuramoto-Sivashinsky case, thus demonstrating the flexibility of this approach.
Date: Sep 27, 2019
Speaker: Andreas Kirsch, KIT
Title: A Radiation Condition for the Scattering by Locally Perturbed Periodic Layers
Abstract: Scattering of time-harmonic waves from periodic structures at some fixed real-valued wave number becomes analytically difficult whenever there arise surface waves: These non-zero solutions to the homogeneous scattering problem physically correspond to modes propagating along the periodic structure and clearly imply non-uniqueness of any solution to the scattering problem. In this talk, I consider a medium that is defined in the upper two-dimensional half-space by a penetrable and periodic contrast. We formulate a proper radiation condition which is motivated by the limiting absorption principle; that is, the solution is the limit of a sequence of unique solutions for artificial complex-valued wave numbers tending to the above-mentioned real-valued wave number. By the Floquet-Bloch transform we first reduce the scattering problem to a finite-dimensional one that is set in the linear space spanned by all surface waves. In this space, we then compute explicitly which modes propagate along the periodic structure to the left or to the right. This finally yields a representation for our limiting absorption solution which leads to a proper extension of the well known upward propagating radiation condition. Finally, we briefly consider the case when the periodic refractive index is perturbed locally.
Date: Sep 20, 2019
Speaker: Brittany Hamfeldt, NJIT
Title: Generalised finite difference methods for fully nonlinear elliptic equations
Abstract: The introduction of viscosity solutions and the Barles-Souganidis convergence framework have allowed for considerable progress in the numerical solution of fully nonlinear elliptic equations. We describe a framework for constructing convergent generalised finite difference approximations for a large class of nonlinear elliptic operators. These approximations are defined on unstructured point clouds, which allows for computation on non-uniform meshes and complicated geometries. Because the schemes are monotone, they fit within the Barles-Souganidis convergence framework and can serve as a foundation for higher-order filtered methods. We present computational results for several examples including problems posed on random point clouds, examples incorporating automatic mesh adaptation, non-continuous surfaces of prescribed Gaussian curvature, Monge-Ampère equations arising in optimal transportation, and Monge-Ampère type equations on the sphere.
Date: Sep 13, 2019
Speaker: Shane Kepley, Rutgers University
Title: Computing linear extensions of partial orders subject to algebraic constraints
Abstract: Switching systems have been extensively used for modeling the dynamics of gene regulation. This is partially due to the natural decomposition of phase space into rectangles on which the dynamics can be completely understood. Recent work has shown that the parameter space can also be decomposed into "nice" subsets. However, computing these subsets, or even determining whether they are empty, turns out to be a difficult problem.
We will show that computing a parameter space decomposition is equivalent to computing all linear extensions of a certain poset. The elements of this poset are polynomials and this structure induces additional algebraic constraints on the allowable linear extensions. We will describe an algorithm for efficiently solving this problem when the polynomials are linear. A more general class of polynomials can also be handled efficiently through a transformation which reduces to the linear case. Finally, we present several open problems and conjectures which arise when one generalizes this problem to subsets of arbitrary polynomial rings.
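Without the algebraic constraints, enumerating linear extensions reduces to listing all topological orders of the poset. A minimal baseline sketch follows; the poset and element names are made up for illustration. It uses only cover relations, relying on the fact that repeatedly removing minimal elements keeps the remaining set an up-set, so checking covers suffices for minimality at every stage:

```python
def linear_extensions(elements, covers):
    """List all linear extensions of a finite poset given by its cover
    relations: (x, y) in covers means x < y."""
    result = []

    def extend(prefix, remaining):
        if not remaining:
            result.append(tuple(prefix))
            return
        for x in remaining:
            # x is minimal in the remaining up-set iff no remaining cover sits below it
            if not any((y, x) in covers for y in remaining):
                extend(prefix + [x], [y for y in remaining if y != x])

    extend([], list(elements))
    return result

# Illustrative poset: a < c, a < d, b < d.
covers = {("a", "c"), ("a", "d"), ("b", "d")}
exts = linear_extensions(["a", "b", "c", "d"], covers)
```

This toy poset has exactly 5 linear extensions; in the talk's setting, each candidate extension would additionally be screened against the algebraic (polynomial) constraints.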
Date: April 26, 2019
Speaker: Harbir Antil, George Mason University
Title: Fractional PDEs: Optimal Control and Applications
Abstract: Fractional calculus and its application to anomalous transport have recently received a tremendous amount of attention. In these studies, the anomalous transport (of charge, tracers, fluid, etc.) is presumed attributable to long-range correlations of material properties within an inherently complex, and in some cases self-similar, conducting medium. Rather than considering an exquisitely discretized (and computationally explosive) representation of the medium, the complex and spatially correlated heterogeneity is represented through reformulation of the PDE governing the relevant transport physics such that its coefficients are, instead, smooth but paired with fractional-order space derivatives. This talk will give an introduction to fractional diffusion. We will describe how to incorporate nonhomogeneous boundary conditions in fractional PDEs. We will cover fractional PDEs ranging from linear to quasilinear. New notions of exterior optimal control and optimization under uncertainty will be presented. We will conclude the talk with an approach that allows the fractional exponent to be spatially dependent. This has enabled us to define novel Sobolev spaces and their trace spaces. Several applications in imaging science, quantum random walks, geophysics, and manifold learning (data analysis) will be discussed.
Date: April 10, 2019
Speaker: Michael Levitin, University of Reading, UK
Title: Sharp eigenvalue asymptotics for Steklov problem on curvilinear polygons
Abstract: I will discuss a work in progress (joint with Leonid Parnovski, Iosif Polterovich and David Sher) on Steklov (or Dirichlet-to-Neumann map) eigenvalue asymptotics for curvilinear polygonal domains in R^2. The results are quite unexpected, and the asymptotics depends non-trivially on the arithmetic properties of the angles of the polygon. There are also connections to classical problems of hydrodynamics (the sloping beach problem and the sloshing problem) and to the Laplacian on quantum graphs.
Date: February 15, 2019
Speaker: Ricardo Nochetto, University of Maryland, College Park
Title: Thermally Actuated Bilayer Plates
Abstract: We present a simple mathematical model of polymer bilayers that undergo large bending deformations when actuated by non-mechanical stimuli such as thermal effects. The model consists of a nonlinear fourth order problem with a pointwise isometry constraint, which we discretize with either Kirchhoff quadrilaterals or discontinuous Galerkin methods. We prove $\Gamma$-convergence of the discrete model and propose an iterative method that decreases its energy and leads to stationary configurations. We investigate performance, as well as reduced model capabilities, via several insightful numerical experiments involving large (geometrically nonlinear) deformations. They include the folding of several practically useful compliant structures composed of thin elastic layers. This work is joint with S. Bartels, A. Bonito, and D. Ntogkas.
Date: February 1, 2019
Speaker: Heather Harrington, Oxford University
Title: Comparing models and data using computational algebraic geometry and topology.
Abstract: I will overview my research for a very general math audience. I will start with motivation of the biological problems we have explored, such as tumor-induced angiogenesis (the growth of blood vessels to nourish a tumor), as well as signaling pathways involved in the dysfunction of cancer (sets of molecules that interact that turn genes on/off and ultimately determine whether a cell lives or dies). Both of these biological problems can be modeled using differential equations. The challenge with analyzing these types of mathematical models is that the rate constants, often referred to as parameter values, are difficult to measure or estimate from available data.
I will present mathematical methods we have developed to enable us to compare mathematical models with experimental data. Depending on the type of data available, and the type of model constructed, we have combined techniques from computational algebraic geometry and topology, with statistics, networks and optimization to compare and classify models without necessarily estimating parameters. Specifically, I will introduce our methods that use computational algebraic geometry (e.g., Gröbner bases) and computational algebraic topology (e.g., persistent homology). I will present applications of our methodology on datasets involving cancer. Time permitting, I will conclude with our current work for analyzing spatio-temporal datasets with multiple parameters using computational algebraic topology. Mathematically, this is studying a module over a multivariate polynomial ring, and finding discriminating and computable invariants.
Date: December 11, 2018
Speaker: Yasumasa Nishiura, Advanced Institute for Materials Research, Tohoku University
Title: What is a good mathematical descriptor for the toughness of heterogeneous materials
Abstract: One of the dreams of materials scientists is to create novel materials through design at the atomistic scale; however, most modern composite materials are highly heterogeneous, made strong via network structures such as an epoxy resin matrix with carbon fibers, as used in aircraft. These materials are neither crystalline nor completely random, so it is not a priori clear what mathematical concepts are appropriate to describe them, especially their medium-range structure. As for the static profile, recent statistical methods as well as the topological approach TDA have clarified some aspects. On the other hand, the performance of the materials, especially their dynamic robustness against mechanical stress, remains an open question and still depends heavily on trial and error in the laboratory. The difficulty lies in, first, the lack of a good macroscopic mathematical model describing the dynamical processes; second, that it is not clear how to incorporate the microscopic heterogeneity into the macroscopic model; and third, how to extract an appropriate mathematical descriptor that can measure and predict the strength of the material and even allow us to design novel materials. I will present a case study in this direction in the context of cracking phenomena for brittle materials. The work is still at an early stage, but it suggests many interesting questions and challenges not only for materials scientists but also for mathematicians.
Date: November 16, 2018
Title: Surveillance-Evasion games under uncertainty
Abstract: Adversarial path planning problems are important in robotics applications and in modeling the behavior of humans in dangerous environments. Surveillance-Evasion (SE) games form an important subset of such problems and require a blend of numerical techniques from multiobjective dynamic programming, game theory, numerics for Hamilton-Jacobi PDEs, and convex optimization. We model the basic SE problem as a semi-infinite zero-sum game between two players: an Observer (O) and an Evader (E) traveling through a domain with occluding obstacles. O chooses a pdf over a finite set of predefined surveillance plans, while E chooses a pdf over an infinite set of trajectories that bring it to a target location. The focus of this game is on "E's expected cumulative exposure to O", and we have recently developed an algorithm for finding the Nash Equilibrium open-loop policies for both players. I will use numerical experiments to illustrate algorithmic extensions to handle multiple Evaders, moving Observers, and anisotropic observation sensors. Time permitting, I will also show preliminary results for a very large number of selfish/independent Evaders modeled via Mean Field Games. Joint work with M. Gilles, E. Cartee, and REU-2018 participants.
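The actual SE game is semi-infinite and is solved in the work above with convex optimization. As a hedged toy analogue (the payoff matrix and tolerances below are illustrative, not from the talk), classic fictitious play already approximates the value and Nash mixtures of a *finite* zero-sum game between an Observer and an Evader:

```python
def fictitious_play(A, iters=20000):
    """Approximate the value and Nash mixtures of a finite zero-sum game
    with payoff matrix A (row player maximizes, column player minimizes)
    by fictitious play: each player best-responds to the opponent's
    empirical mixture of past actions."""
    m, n = len(A), len(A[0])
    row_counts = [1] + [0] * (m - 1)   # arbitrary initial actions
    col_counts = [1] + [0] * (n - 1)
    for _ in range(iters):
        # best responses against the opponent's empirical play counts
        row_scores = [sum(A[i][j] * col_counts[j] for j in range(n)) for i in range(m)]
        col_scores = [sum(A[i][j] * row_counts[i] for i in range(m)) for j in range(n)]
        row_counts[row_scores.index(max(row_scores))] += 1
        col_counts[col_scores.index(min(col_scores))] += 1
    x = [c / sum(row_counts) for c in row_counts]
    y = [c / sum(col_counts) for c in col_counts]
    value = sum(A[i][j] * x[i] * y[j] for i in range(m) for j in range(n))
    return value, x, y
```

For the 2x2 "exposure" matrix [[1, 0], [0, 1]], the unique equilibrium mixes both actions uniformly and the game value is 1/2, which the iteration approaches (by Robinson's theorem, fictitious play converges for zero-sum games).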
Date: November 2, 2018
Speaker: Abner Salgado, University of Tennessee, Knoxville
Title: Regularity and rate of approximation for obstacle problems for a class of integro-differential operators
Abstract: We consider obstacle problems for three nonlocal operators:
A) The integral fractional Laplacian
B) The integral fractional Laplacian with drift
C) A second order elliptic operator plus the integral fractional Laplacian
For the solution of the problem in Case A, we derive regularity results in weighted Sobolev spaces, where the weight is a power of the distance to the boundary. For Cases B and C we derive, via a Lewy-Stampacchia type argument, regularity results in standard Sobolev spaces. We use these regularity results to derive error estimates for finite element schemes. The error estimates turn out to be optimal in Case A, whereas there is a loss of optimality in Cases B and C, depending on the order of the integral operator.
Date: October 26, 2018
Speaker: Bill Kalies, Florida Atlantic University
Title: Order Theory in Dynamics
Abstract: Recurrent versus gradient-like behavior in global dynamics can be characterized via a surjective lattice homomorphism between certain bounded, distributive lattices, that is, between attracting blocks (or neighborhoods) and attractors. In this lecture we explain the basic order and lattice theory for dynamical systems, which lays a foundation for a computational theory focused on Morse decompositions and index lattices. We build combinatorial order-theoretic models for global dynamics. We give computational examples that illustrate the theory for both maps and flows.
Date: October 24, 2018, 5:00-6:00pm
Room: Hill 005
Speaker: Prof. Wojciech Chacholski, Department of Mathematics, KTH
Title: What is persistence
Abstract: It is not surprising that different units and scales are used to measure different phenomena. So why are the Gromov-Hausdorff and bottleneck distances the only ones used to measure the inputs and outcomes of topological data analysis applied to a variety of different data sets? My aim is to explain and illustrate a new approach to persistence. I will present both mathematical and real-life data examples illustrating the effectiveness of our approach in improving various classification tasks.
Date: October 19, 2018
Speaker: Francisco Sayas, University of Delaware
Title: Waves in viscoelastic solids
Abstract: I will first explain a transfer function based framework collecting well-known models of wave propagation in viscoelastic solids, fractional derivative extensions, and their couplings. I will briefly explain the associated Laplace domain stability results and their time domain counterparts, as well as how they are affected by finite element discretization in space. Finally, I will discuss a semigroup approach to a non-strictly diffusive Zener model. This is joint work with Tom Brown, Shukai Du, and Hasan Eruslu.
Date: October 12, 2018
Speaker: Vladimir Itskov, The Pennsylvania State University
Title: Directed complexes, sequence dimension and inverting a neural network.
Abstract: What is the embedding dimension, and more generally, the geometry of a set of sequences? This problem arises in the context of neural coding and neural networks. Here one would like to infer the geometry of a space that is measured by unknown quasiconvex functions. A natural object that captures all the inferable geometric information is the directed complex (a.k.a. semi-simplicial set). It turns out that the embedding dimension as well as some other geometric properties of data can be estimated from the homology of an associated directed complex. Moreover, each such directed complex gives rise to a multi-parameter filtration that provides a dual topological description of the underlying space. I will also illustrate these methods in the neuroscience context of understanding the "olfactory space".
Date: September 28, 2018
Speaker: Carina Curto, The Pennsylvania State University
Title: Graph rules for inhibitory network dynamics
Abstract: Many networks in the nervous system possess an abundance of inhibition, which serves to shape and stabilize neural dynamics. The neurons in such networks exhibit intricate patterns of connectivity, whose structure controls the allowed patterns of neural activity. In this work, we examine inhibitory threshold-linear networks whose dynamics are dictated by an underlying directed graph. We develop a set of parameter-independent graph rules that enable us to predict features of the dynamics from properties of the graph. These rules provide a direct link between the structure and function of these networks, and offer new insights into how connectivity may shape dynamics in real neural circuits.
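The dynamics in question are the threshold-linear equations dx/dt = -x + [Wx + b]_+. A minimal forward-Euler sketch (the two-neuron network below, with mutual inhibition, is an illustrative assumption, not an example from the talk) shows the winner-take-all behavior such inhibition can produce:

```python
def relu(z):
    return z if z > 0.0 else 0.0

def simulate_tln(W, b, x0, dt=0.01, steps=3000):
    """Forward-Euler integration of the threshold-linear network
    dx_i/dt = -x_i + [sum_j W_ij x_j + b_i]_+ ."""
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        drive = [relu(sum(W[i][j] * x[j] for j in range(n)) + b[i]) for i in range(n)]
        x = [x[i] + dt * (drive[i] - x[i]) for i in range(n)]
    return x

# Two mutually inhibiting neurons with equal external drive (illustrative weights):
W = [[0.0, -2.0], [-2.0, 0.0]]
b = [1.0, 1.0]
x = simulate_tln(W, b, [0.6, 0.2])
```

Starting from a state that favors neuron 1, the trajectory converges to the winner-take-all fixed point (1, 0), with neuron 2 fully suppressed: a simple instance of the structure-to-dynamics link the graph rules formalize.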
Date: December 1, 2017
Speaker: Harbir Antil, George Mason University
Title: Fractional Operators with Inhomogeneous Boundary Conditions: Analysis, Control, and Discretization
Abstract: In this talk we introduce new characterizations of the spectral fractional Laplacian to incorporate nonhomogeneous Dirichlet and Neumann boundary conditions. The classical cases with homogeneous boundary conditions arise as special cases. We apply our definition to fractional elliptic equations of order $s \in (0,1)$ with nonzero Dirichlet and Neumann boundary conditions. Here the domain $\Omega$ is assumed to be bounded and quasi-convex.
To impose the nonzero boundary conditions, we construct fractional harmonic extensions of the boundary data. It is shown that solving for the fractional harmonic extension is equivalent to solving for the standard harmonic extension in the very-weak form. The latter result is of independent interest as well. The remaining fractional elliptic problem (with homogeneous boundary data) can be realized using the existing techniques. We introduce finite element discretizations and derive discretization error estimates in natural norms, which are confirmed by the numerical experiments. We also apply our characterizations to Dirichlet and Neumann boundary optimal control problems with fractional elliptic equation as constraints.
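For reference, the spectral fractional Laplacian in the homogeneous Dirichlet base case is built from the Dirichlet eigenpairs $(\lambda_k, \varphi_k)$ of $-\Delta$ on $\Omega$ (this is the standard definition; the talk's contribution is its extension to nonzero boundary data):

```latex
(-\Delta)^{s} u \;=\; \sum_{k=1}^{\infty} \lambda_{k}^{\,s}\, (u, \varphi_{k})_{L^{2}(\Omega)}\, \varphi_{k},
\qquad s \in (0,1),
```

so that $s = 1$ recovers $-\Delta$, while for $s < 1$ the operator is nonlocal and the meaning of boundary values must be supplied through the fractional harmonic extensions described above.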
Date: November 3, 2017
Speaker: Marcio Gameiro, University of Sao Paulo at Sao Carlos, Brazil
Title: Rigorous Multi-parameter Continuation of Solutions of Differential Equations
Abstract: We present a rigorous multi-parameter continuation method to compute solutions of differential equations depending on parameters. The method combines classical numerical methods, analytic estimates and the uniform contraction principle to prove the existence of solutions of nonlinear differential equations. The method is applied to the computation of equilibria for the Cahn-Hilliard equation and periodic solutions of the Kuramoto-Sivashinsky equation.
Date: September 22, 2017
Speaker: Qi Wang, University of South Carolina
Title: Energy quadratization strategy for numerical approximations of nonequilibrium models
Abstract: There are three fundamental laws in equilibrium thermodynamics. But what are the laws in nonequilibrium thermodynamics that guide the development of theories/models to describe nonequilibrium phenomena? Continued efforts have been invested in the past in developing a general framework for nonequilibrium thermodynamic models, which include Onsager's maximum entropy theory, Prigogine's minimum entropy production rate theory, the Poisson bracket formulation of Beris and Edwards, as well as the GENERIC formalism promoted by Ottinger and Grmela. To some extent, they are equivalent and all give practical means to develop nonequilibrium dynamic models. In this talk, I will focus on the Onsager approach, termed the Generalized Onsager Principle (GOP). I will review how one can derive thermodynamic and generalized hydrodynamic models using the generalized Onsager principle coupled with the variational principle. Then, I will discuss how we can exploit the mathematical structure of the models derived using GOP to design structure- and property-preserving numerical approximations to the governing system of partial differential equations. Since the approach is valid near equilibrium, as pointed out by Onsager, an energy quadratization strategy is proposed to arrive at linear numerical schemes. This approach is so general that in principle we can apply it to any nonequilibrium model so long as it has the desired variational and dissipative structure. Some numerical examples will be given to illustrate the usefulness of this approach.
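The energy quadratization idea can be summarized in its simplest (IEQ-style) form; the notation here is generic and the talk's formulation may differ. For a free energy $E[\varphi] = \int_{\Omega} \tfrac{1}{2}\lvert\nabla\varphi\rvert^{2} + F(\varphi)\,\mathrm{d}x$ with $F$ bounded below, introduce an auxiliary variable:

```latex
q \;=\; \sqrt{F(\varphi) + C_{0}},
\qquad
E[\varphi, q] \;=\; \int_{\Omega} \frac{1}{2}\lvert\nabla \varphi\rvert^{2} + q^{2}\,\mathrm{d}x \;-\; C_{0}\lvert\Omega\rvert,
\qquad
q_{t} \;=\; \frac{F'(\varphi)}{2\sqrt{F(\varphi)+C_{0}}}\,\varphi_{t},
```

so the energy becomes quadratic in $(\varphi, q)$, and a semi-implicit discretization of the extended system yields linear, unconditionally energy-stable schemes.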
Date: April 20, 2017
Speaker: Michael Neilan, University of Pittsburgh
Title: Discrete theories for elliptic problems in non--divergence form
Abstract: In this talk, two discrete theories for elliptic problems in non-divergence form are presented. The first, which is applicable to problems with continuous coefficients and is motivated by the strong solution concept, is based on discrete Calderon-Zygmund-type estimates. The second theory relies on discrete Miranda-Talenti estimates for elliptic problems with discontinuous coefficients satisfying the Cordes condition. Both theories lead to simple, efficient, and convergent finite element methods. We provide numerical experiments which confirm the theoretical results, and we discuss possible extensions to fully nonlinear second order PDEs.
Date: March 3, 2017
Speaker: Ridgway Scott, University of Chicago
Title: Electron correlation in van der Waals interactions
Abstract: We examine a technique of Slater and Kirkwood which provides an exact resolution of the asymptotic behavior of the van der Waals attraction between two hydrogen atoms. We modify their technique to make the problem more tractable analytically and more easily solvable by numerical methods. Moreover, we prove rigorously that this approach provides an exact solution for the asymptotic electron correlation. The proof makes use of recent results that utilize the Feshbach-Schur perturbation technique. We provide visual representations of the asymptotic electron correlation (entanglement) based on the use of Laguerre approximations. We also describe a computational approach using the Feshbach-Schur perturbation and tensor-contraction techniques that makes a standard finite difference approach tractable.
Date: April 22, 2016
Speaker: Guillaume Bal, Columbia University
Title: Boundary control in transport and diffusion equations
Abstract: Consider a prescribed solution to a diffusion equation in a small domain embedded in a larger one. Can one (approximately) control such a solution from the boundary of the larger domain? The answer is positive and this form of Runge approximation is a corollary of the unique continuation property (UCP) that holds for such equations. Now consider a (phase space, kinetic) transport equation, which models a large class of scattering phenomena, and whose vanishing mean free path limit is the above diffusion model. This talk will present positive as well as negative results on the control of transport solutions from the boundary. In particular, we will show that internal transport solutions can indeed be controlled from the boundary of a larger domain under sufficient convexity conditions. Such results are not based on a UCP. In fact, UCP does not hold for any positive mean free path even though it does apply in the (diffusion) limit of vanishing mean free path. These controls find applications in inverse problems that model a large class of coupled-physics medical imaging modalities. The stability of the reconstructions is enhanced when the answer to the control problem is positive.
Date: April 8, 2016
Speaker: John Sylvester, University of Washington
Title: Evanescence, Translation, and Uncertainty Principles in the Inverse Source Problem
Abstract: The inverse source problem for the Helmholtz equation (time harmonic wave equation) seeks to recover information about a radiating source from remote observations of a monochromatic (single frequency) radiated wave measured far from the source (the far field). The two properties of far fields that we use to deduce information about shape and location of sources depend on the physical phenomenon of evanescence, which limits imaging resolution to the size of a wavelength, and the formula for calculating how a far field changes when the source is translated. We will show how adaptations of "uncertainty principles", as described by Donoho and Stark [1] provide a very useful and simple tool for this kind of analysis.
Date: March 24, 2016
Speaker: Qi Wang, Interdisciplinary Mathematics Institute and NanoCenter at University of South Carolina
Title: Onsager principle, generalized hydrodynamic theories and energy stable numerical schemes
Abstract: In this talk, I will discuss the Onsager principle for nonequilibrium thermodynamics and present the generalized Onsager principle for deriving generalized hydrodynamic theories for complex fluids and active matter. For closed matter systems, the generalized Onsager principle combines the variational principle with the dissipative property of the system to give a hydrodynamic system that dissipates the total energy. I will illustrate the idea using a few examples in complex fluids. For the hydrodynamic systems of equations derived from the generalized Onsager principle, dissipation-preserving numerical schemes can be devised, known as energy-stable schemes. These schemes are unconditionally stable in time. Several applications of generalized hydrodynamic theories to active matter systems, like cell migration on solid substrates and cytokinesis of animal cells, will be presented.
Date: February 26, 2016
Speaker: Andrea Bonito, Texas A&M University
Title: Bilayer Plates: From Model Reduction to Gamma-Convergent Finite Element Approximation
Abstract: The bending of bilayer plates is a mechanism which allows for large deformations via small externally induced lattice mismatches of the underlying materials. Its mathematical modeling consists of a geometrically nonlinear fourth order problem with a nonlinear pointwise isometry constraint and where the lattice mismatches act as a spontaneous curvature. A gradient flow is proposed to decrease the system energy and is coupled with finite element approximations of the plate deformations based on Kirchhoff quadrilaterals. In this talk, we give a general overview of the model reduction procedure, discuss the convergence of the iterative algorithm towards stationary configurations and the Gamma-convergence of their finite element approximations. We also explore the performance of the numerical algorithm as well as the reduced model capabilities via several insightful numerical experiments involving large (geometrically nonlinear) deformations. Finally, we briefly discuss applications to drug delivery, which require replacing the gradient flow relaxation by a physical flow.
Date: February 26, 2016
Speaker: Lou Kondic, New Jersey Institute of Technology
Title: Force networks in particulate-based systems: persistence, percolation, and universality
Abstract: Force networks are mesoscale structures that form spontaneously as particulate-based systems (such as granulars, emulsions, colloids, foams) are exposed to shear, compression, or impact. The presentation will focus on a few different but closely related questions involving the properties of these networks:
(i) Are the networks universal, with their properties independent of those of the underlying particles?
(ii) What are percolation properties of these networks, and can we use the tools of percolation theory to explain their features?
(iii) How to use topological tools, and in particular persistence approach to quantify the properties of these networks?
The presentation will focus on results of molecular dynamics/discrete element simulations to discuss these questions and the (currently known) answers, but I will also discuss how to relate and apply these results to physical experiments.
## Continuous Uniform Distribution Calculator With Examples
The continuous uniform distribution is the simplest probability distribution: all the values belonging to its support have the same probability density.
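As an illustration of the standard closed-form results for the uniform distribution on [a, b] — constant density 1/(b − a), mean (a + b)/2, variance (b − a)²/12 — here is a small Java sketch. The class and method names are invented for this example and do not come from the original page:

```java
import java.util.Random;

// Continuous uniform distribution on [a, b]: flat density over the support.
public class UniformDistribution {
    static double pdf(double x, double a, double b) {
        return (x >= a && x <= b) ? 1.0 / (b - a) : 0.0;
    }

    static double cdf(double x, double a, double b) {
        if (x < a) return 0.0;
        if (x > b) return 1.0;
        return (x - a) / (b - a);
    }

    static double mean(double a, double b)     { return (a + b) / 2.0; }
    static double variance(double a, double b) { return (b - a) * (b - a) / 12.0; }

    // Inverse-transform sampling: a + (b - a) * U with U ~ Uniform(0, 1).
    static double sample(double a, double b, Random rng) {
        return a + (b - a) * rng.nextDouble();
    }

    public static void main(String[] args) {
        double a = 2.0, b = 6.0;
        System.out.println(pdf(4.0, a, b));    // 0.25
        System.out.println(cdf(5.0, a, b));    // 0.75
        System.out.println(mean(a, b));        // 4.0
        System.out.println(variance(a, b));    // ~1.3333
    }
}
```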
Prevention of Rollover in LNG Tanks
Introduction
This guidance is specifically applicable to LNG ships. It is also applicable to LNG ships acting as floating storage vessels, LNG Regasification Vessels (LNGRV) and Floating Storage and Regasification Units (FSRU) if no countermeasures are in place. For conventional, onshore LNG receiving terminals, the issues are generally well understood and suitable mitigation methods are in place. For LNG ships, the circumstances leading to rollover are quite unusual, but rollover has occurred and therefore this information paper seems appropriate.
Traditionally, bulk LNG is stored in heavily insulated tanks. At shore installations, these may be vertical cylindrical or in-ground tanks, the largest of which have a capacity of up to 250 000 m3 and a working pressure of up to 250 mbar. Spherical or prismatic cargo tanks are used on LNG carriers with individual tank capacities of up to 50 000 m3 and a similar working pressure. Smaller quantities of LNG are normally stored in vacuum insulated tanks (VITs) at pressures of up to 5 bar, although VITs can be produced with capacities of up to 10 000 m3. Heat leaks into the tank through the insulation, warming the cargo, which in turn causes the surface layer to evaporate, resulting in “boil-off”. The boil-off rate depends on the tank type and application, varying from 0,02 % to 0,2 % of tank volume per day.
Although a few onshore LNG facilities and some classes of LNG ships have reliquefaction plants, generally boil-off from storage tanks/cargo tanks is not reliquefied, but treated separately. On LNGCs, boil-off is traditionally used as fuel gas; ashore it may be compressed and exported, sent to a re-condenser and absorbed into the LNG export stream prior to vaporisation, used as fuel gas, or a combination of these, whichever is the most convenient or economic. In all cases, if boil-off is not reliquefied and returned to the storage tank, the lighter fractions evaporate over a period of time and the density of the remaining tank inventory will increase.
“Rollover” refers to the rapid release of LNG vapour that can occur as a result of the spontaneous mixing of layers of different densities of LNG in a storage or cargo tank. A pre-condition for rollover is that stratification has occurred, ie the existence in the tank of two separate layers of LNG of different density. The possibility of a sudden release of large amounts of vapour and the potential over-pressurisation of the tank resulting in possible damage or failure is recognised by the major design codes.
EN 1473 – “The design of onshore LNG terminals” and NFPA 59A – “Standard for the Production, Storage and Handling of LNG” both require this phenomenon to be taken into consideration when sizing relief devices. Whilst the relief valves may prevent damage to the tank, LNG vapour is not only flammable and heavier than air on release, but also a valuable commodity and a potent greenhouse gas, and therefore venting should be avoided whenever possible.
Rollover received considerable attention following a serious venting incident at the LNG receiving terminal at La Spezia, Italy in 1971, which is described in the annex.
In 1981, a GIIGNL technical study group began investigating rollover incidents. 41 incidents occurring at 22 plants were identified in the period 1970 to 1982. The majority of these incidents were attributed to mixing liquids of different densities in one tank, but 4 were attributed to “nitrogen induced rollover”, which is explained in the following section. This study enabled operators of storage facilities to implement procedures to prevent stratification and hence rollover.
Traditionally, the LNG industry has been characterised by long-term contracts with an export terminal, employing dedicated ships, supplying a number of regular receiving terminals with LNG whose composition will only ever vary within a very narrow range. Over the last few years, there has been an increasing tendency to balance supplies with short-term contracts from a range of LNG producers. This trend has made the mixing of different compositions of LNG within the same storage tank more likely and hence the probability of stratification and possible rollover has increased unless suitable precautions are taken to ensure complete mixing.
Looking to the future, if LNG is going to be used extensively as a marine fuel, as is widely predicted, ships loading LNG fuel at different ports must be aware of the possibility, consequences and mitigation methods of LNG fuel stratification and rollover.
Basic Thermodynamics
Picture 1 shows an LNG tank without stratification. Methane evaporates from the surface, which cools due to loss of latent heat, causing the density of the surface layer to increase and the liquid to sink. Heat inleak through the tank bottom and wall insulation is sufficient to warm the lower side layers and a convection current is set up, ensuring mixing of the liquid. The lighter fractions will boil off first, resulting in the density of the remaining liquid gradually increasing, a process known as “weathering” or “aging”.
Rollover can only occur if stratification has taken place in the LNG. Stratification of LNG can occur when an LNG tank is filled with LNG of different densities. Stratification will occur readily if the LNG being introduced into the tank is either denser than that of the “heel” remaining in the tank and filling is at the bottom, or if the LNG introduced is lighter than the heel and filling is into the top of the tank. Studies undertaken in Japan in the late 1970s showed that a density difference of 1 kg/m3 (0,001 tonne/m3) could result in stratification if incoming LNG was introduced at a very slow rate.
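As an illustrative aside (not part of the original guidance), the two stratification conditions just described, together with the roughly 1 kg/m3 density-difference threshold from the Japanese studies, can be expressed as a short check. The Java class and method names below are invented for this sketch:

```java
// Illustrative check of the stratification conditions described above:
// stratification is likely if denser LNG is bottom-filled under a lighter heel,
// or lighter LNG is top-filled over a denser heel, and the density difference
// reaches about 1 kg/m3 (the threshold reported by the Japanese studies).
public class StratificationCheck {
    enum FillPosition { TOP, BOTTOM }

    static boolean stratificationLikely(double heelDensity, double incomingDensity,
                                        FillPosition fill) {
        double diff = incomingDensity - heelDensity;          // kg/m3
        if (Math.abs(diff) < 1.0) return false;               // below ~1 kg/m3
        return (fill == FillPosition.BOTTOM && diff > 0)      // heavy under light
            || (fill == FillPosition.TOP && diff < 0);        // light over heavy
    }

    public static void main(String[] args) {
        // La Spezia-like case: heavier cargo (545,58 kg/m3) bottom-filled
        // under a lighter heel (541,7 kg/m3) -> stratification likely.
        System.out.println(stratificationLikely(541.7, 545.58, FillPosition.BOTTOM)); // true
        System.out.println(stratificationLikely(541.7, 545.58, FillPosition.TOP));    // false
    }
}
```

The example densities are those of the heel and cargo in the La Spezia incident covered in the case histories of this paper.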
Picture 2 shows a tank where stable stratification has taken place caused by filling a storage tank with liquids of different densities, the higher density layer being the lower layer. There is little heat or mass transfer between the layers and each layer establishes its own convection currents. A key indicator that stratification has occurred is a noticeable reduction in the normal boil-off rate.
Heat is lost from the upper layer by evaporation, but because of the density difference and very low thermal conductivity of LNG there is very little heat transfer from the lower layer to the upper layer. Instead, the heat, which is absorbed by the lower layer through the tank wall and floor, causes a rise in temperature and a decrease of density of the lower layer. When the densities are approximately equal, the lower superheated layer will rise through the upper layer, releasing its superheat and thereby generating large volumes of boil-off in a short period; this is rollover. Pictures 3, 4 and 5 show the temperature, density and boil-off rate trends graphically.
Experimental work has been undertaken which shows that a thin intermediate layer may exist between the two stratified layers. This controls the rate of mixing of the stratified layers and thus the volume of vapour generated, but further discussion on this topic is beyond the scope of this document.
Any gas discharged from the tank during rollover will be mainly methane at a temperature of approximately -160 °C and so, initially, it will be denser than air. Therefore, it will tend to disperse around the vent mast outlet whilst mixing with air and forming a flammable cloud, becoming buoyant as it warms above about -110 °C.
The information gathered by the GIIGNL study group, referred to in the introduction, indicated that in about half the incidents recorded the increase in boil-off rate was less than 10 times the normal rate, but in 12 % of cases it was calculated to exceed 20 times normal boil-off rate.
If the LNG contains significant quantities of nitrogen, it has been postulated that auto-stratification may occur, possibly resulting in “nitrogen induced rollover”. Four of the rollover cases in the GIIGNL study were attributed to this. Nitrogen has a boiling point of -196 °C compared with -162 °C for the average LNG. Furthermore, nitrogen has a molecular mass of 28, compared with that of 16 for methane, the main constituent of LNG.
Therefore, as the nitrogen boils off, the density of the remaining LNG will decrease, unlike nitrogen-free LNG where the density will increase as it ages. If there is sufficient nitrogen present (> 1 % according to Chatterjee and Geist) this can result in a layer of low density liquid which can remain on the surface, but will eventually mix with the lower layer resulting in rollover.
However, most LNG plants produce LNG with a nitrogen content significantly lower than 1 %. The production of LNG with a high nitrogen content represents a reduction in plant efficiency hence an increase in operating costs.
Table. Variation in Nitrogen and Methane Content and Density of LNG – GIIGNL
| Source | N2 % | C1 % | Density kg/m3 |
|---|---|---|---|
| Algeria – Arzew | 0,6 | 88 | 464 |
| Algeria – Bethioua | 0,9 | 88,1 | 455 |
| Australia – NWS | 0,4 | 90,1 | 460 |
| Egypt – Idku | 0 | 95,9 | 436 |
| Libya | 0,7 | 81,6 | 485 |
| Snohvit | 0,8 | 91,8 | 451 |
| Oman | 0,4 | 87,9 | 470 |
| Qatar – Qatargas I | 0,1 | 87,4 | 467 |
Detection of Stratification and Prevention of Rollover (Receiving Terminals)
This section briefly describes various rollover management methods applied in receiving terminals. The same principles can be applied to FSRUs that have been built or converted to have similar characteristics to those of a shore-based receiving terminal.
It is noted that LNG carriers are not normally equipped with either top filling connections or internal jet nozzles. If grades of LNG with different compositions are going to be received and stored, the simplest countermeasure option is to store them in separate tanks, if this is possible.
However, stratification and thus rollover can be prevented by mixing LNG of different densities using top and bottom fill procedures and recirculation of the tank inventory through jet nozzles or other mixing devices.
Bottom Filling
If the incoming LNG is lighter than the heel in the tank, a bottom filling operation will generally ensure complete mixing of the two LNG grades, with little or no chance of stratification. The boil-off gas production, generated due to the temperature rise of the LNG during transfer from the LNG carrier to the tank being filled, is limited by the hydrostatic pressure at the bottom of the tank.
Top Filling
If the incoming LNG is heavier than the stored LNG in the tank, a top filling operation will avoid stratification and the risk of subsequent rollover. However, top filling usually results in excessive vapour generation, due to the flashing of the injected LNG into the tank’s vapour space and subsequent increase in tank pressure, which must be managed. A simple solution to this is to reduce the loading rate, but this may not always be commercially acceptable and other means may need to be adopted.
Furthermore, top filling is not generally provided on LNG carriers, unless they have been converted for use as a floating storage and regasification unit (FSRU) in which case they are often provided with top fill connections.
One method of reducing overall vapour generation when top filling is to lower the tank pressure prior to filling the tank; this will create more boil-off and drop the temperature of the heel. Immediately before filling commences, the tank pressure is raised to above normal operating pressure to limit the amount of LNG that flashes off when discharging into the tank’s vapour space. This raised pressure is maintained throughout the loading process and when filling is complete the tank pressure is slowly returned to its normal level.
Jet Nozzles and Other Mixing Devices
A jet nozzle fitted to a fill line located at the bottom of the tank can be very effective in preventing stratification, but there must be sufficient head in the filling line to ensure the jet can reach the surface of the liquid and sufficient time must be allowed to ensure the mixing process takes place throughout the tank. Diffusers at the bottom of the fill line can also aid mixing. Perforated fill lines have also been fitted to some tanks, but these may result in excessive boil-off if any of the perforations are above the liquid surface during the filling operation.
Detection of Stratification and Prevention of Rollover in Shore Tanks
As mentioned earlier, a noticeable reduction in the boil-off rate below normal is a good indication that stratification has occurred. The measurement of temperature and density throughout the liquid column will confirm this, but the accuracy of the measuring instruments is essential, as a ΔT of 0,1 °C and density variations of 0,1 % need to be detected. A reduction of 10 % in the boil-off rate should be taken as a warning of stratification.
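As a rough illustration of these warning thresholds (a boil-off rate more than 10 % below normal, and a density difference of at least 0,1 % between layers), here is a hedged Java sketch; it is not taken from any real tank management system, and all names are invented:

```java
// Sketch of the stratification warning indicators described above.
// Thresholds (10 % boil-off reduction, 0,1 % density resolution) come from
// the text; everything else here is illustrative only.
public class StratificationWarning {
    // Warning if measured boil-off has fallen more than 10 % below normal.
    static boolean boilOffWarning(double normalRate, double measuredRate) {
        return measuredRate < 0.9 * normalRate;
    }

    // Two layer readings differing by at least 0,1 % in density (assuming
    // adequate instrument accuracy) support a stratification diagnosis.
    static boolean densityProfileSuspicious(double upperDensity, double lowerDensity) {
        return Math.abs(lowerDensity - upperDensity) / upperDensity >= 0.001;
    }

    public static void main(String[] args) {
        System.out.println(boilOffWarning(100.0, 85.0));            // true
        System.out.println(densityProfileSuspicious(446.0, 449.0)); // true
    }
}
```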
LNG rollover predictive models are widely used in conjunction with internal tank travelling temperature and density instrumentation to predict and update the behaviour of LNG stratification. The more sophisticated of these models also utilise input information from the construction data of the tank (including volume, aspect ratio, insulation efficiency, filling devices and boil-off handling capacity) and, by measuring density and temperature profiles, tank level, initial LNG composition, boil-off rate and send-out rate, can accurately identify stratification and predict the “time to rollover” and the consequences of the rollover such as maximum tank pressure and volume of gas generated.
Once stratification has been detected, the following means may be used to break up the layers:
• Transfer of the liquid from the tank either by exporting or transferring to another tank if possible.
• Circulation of tank contents through jet nozzles or other mixing devices.
• Recirculation of the liquid through a top fill line. It should be noted that the efficiency of this depends on the flow rate and it can result in high boil-off losses.
If a sophisticated tank management system is provided, the operator will have real time information available to enable break-up of the stratification before rollover occurs.
Recently, intentionally induced density stratification has become routinely used by some operators to reduce high LNG boil-off rates, particularly when top filling is required for heavier LNGs. This means that boil-off gas compressor and pre-heater operating costs can be reduced both during and after unloading LNG tankers. These procedures require careful management, a sophisticated tank management system and a means to break up any stratification as referred to earlier.
Detection of Stratification and Prevention of Rollover (LNG Carriers)
Whilst rollover in receiving terminals has been well studied, the risk of rollover in LNG ships has always been considered low. This is because the dominant trading pattern has involved dedicated trade routes with vessels trading from a single loading port. In this trade, the pre-condition for rollover cannot exist unless there has been a sudden significant increase in the density of the export LNG, since it requires a ‘heavy’ or ‘rich’ cargo to be loaded under a significant heel that is lower density, or ‘lean’.
Through the weathering effect described above, the heel is always richer than the new cargo, and the trade requires minimum heel on arrival, so there are not normally large quantities of heel.
The case study “Rollover on a Moss Type LNG carrier” arose because, unusually, there was a large heel on board and this heel was leaner than the incoming cargo. Another way this could happen is the case of a ship acting as floating storage for an extended period and a decision to top up the tanks with LNG from a different, richer, source. Whilst rollover on an LNG ship is still considered an unusual event, the following comments elaborate on detection and prevention of rollover resulting from these types of incidents.
Because LNG ships do not normally have either the instrumentation to detect stratification or the means to force mix the tank contents, the best management method is to avoid the circumstance arising in the first place.
If faced with either of the circumstances described above, a view should be taken of the density of the LNG on board versus the incoming LNG. If the incoming LNG is likely to be of higher density than the LNG on board, the risk of stratification is high and measures should be taken. If there is uncertainty, detailed calculations should be made to assess the densities of both the onboard and the incoming LNG, representative of the conditions at the time of the new loading operation. Ships are not equipped to conduct these calculations and they will have to be performed by experts ashore.
For the case of a large, lower density heel, the following procedure is suggested to mitigate the potential risk once identified prior to loading:
1. Consolidate the heel into one tank.
2. Partially load a second tank to a level such that there is room to transfer into the tank the entire heel.
3. Close the manifold liquid valves, leaving the vapour manifold open.
4. Transfer the heel into the partially filled tank. This should be done using the ship’s cargo pumps as fast as safely possible, prudence and vapour generation permitting. The reason for speed is to promote as much turbulence as possible in the bottom of the receiving tank to aid mixing.
5. Do not load any further LNG into the tank containing the mixture.
The above procedure is to be carefully discussed between ship and shore before commencement of loading. It should be noted that the transfer and mixing process may generate significant amounts of vapour.
Whilst more complex, an alternative to the above process would be to consolidate the heel into one tank and then start loading into a different tank at a slow rate whilst transferring from the heel tank. This would ensure mixing of the products, but it is important that the mixing occurs throughout the time the tank is being filled in order to prevent stratification between the mixed LNG and that coming from the terminal. For the floating storage case, particularly if none of the methods to detect and mitigate the effects of stratification is installed, if a risk of stratification is identified prior to loading, the cargo is simply unacceptable.
Should a ship load against this advice, and it is subsequently determined that there is a risk of stratification, the following may give some indication that stratification has occurred:
1. Reduction in boil-off gas flow rate below normal.
2. Tank level not decreasing at a normal rate; indeed, it may even increase.
3. A careful examination of the CTS temperature probes shows the lower ones increasing in temperature whilst the upper ones are substantially constant.
These effects are all fairly small and may be masked by other factors. If it is discovered that a ship is at risk of rollover, the only remedy is to discharge all the cargo as soon as possible into a shore receiving tank with the appropriate mixing arrangements. This has very severe commercial and operational implications, which is why it is so important that the risk is carefully assessed before loading.
Risk Factors for LNG Ships
The following trades are deemed to have negligible or low potential risk of stratification:
2. Ships that arrive at the loading port with minimal or zero heel.
3. Ships continually trading within either a rich gas region or a lean gas region.
4. Ships moving from a rich gas trade to a lean gas trade.
5. Ships moving from lean gas trade to rich, providing point 2 above (minimal or zero heel on arrival) is observed.
6. Floating storage vessels topping up with fresh (unweathered) LNG from the same source as the original stock.
The following circumstances could lead to significant risk and careful assessment on a case-by-case basis is needed:
1. Ships with a significant amount (>800 m3) of lean gas heel loading a rich gas cargo.
2. A floating storage vessel with stock originating from a lean producer topping up with rich LNG.
Risk Factors for LPG Ships
There is no reported evidence of rollover occurring with LPG ships, but there are differing risks associated with co-mingling of cargoes on board.
Case Histories
LNG Rollover at La Spezia, Italy
In August 1971, a rollover incident occurred at the La Spezia LNG import terminal, resulting in the release of a large quantity of vapour from a storage tank's relief valves and vent.
The terminal had two vertical cylindrical single containment 9 % Ni storage tanks, each of about 50 000 m3 capacity with a maximum design pressure of 50 millibar. Filling was via a 24 inch side entry bottom connection. A 4 inch top recirculating connection was also provided.
The tank that was filled had a heel of 5 170 tonnes to which was added a cargo of 18 200 tonnes. Details of the heel and cargo are as follows:
| | Heel | Cargo |
|---|---|---|
| Methane, mole % | 63,6 | 62,3 |
| Ethane | 24,2 | 21,8 |
| Propane | 9,4 | 12,7 |
| Butane | 2,3 | 3,1 |
| Pentane+ | 0,2 | 0,1 |
| Nitrogen | 0,3 | <0,1 |
| Temperature °C | -158,9 | -154,3 |
| Density kg/m3 | 541,7 | 545,58 |
Prior to discharging to the shore tank, the LNGC “Esso Brega” had been in La Spezia harbour for more than one month, during which time the cargo had weathered and warmed. When this heavier, warmer LNG was loaded through the bottom side fill, it stayed on the bottom, the lighter, cooler tank heel being displaced upwards with only minimal mixing and the static pressure suppressing vaporisation of the bottom layer. It is not known whether the 4 inch recirculating line was used, but it would probably have been too small to have had any serious effect.
The heel in the storage tank had been boiling off prior to filling, but the rate was seen to increase sharply during the loading period of about 10 hours, during which time about 30 tonnes of boil-off was generated and stratification developed. After loading, there was an ullage of 4 m in the tank. There followed a quiet period during which time the boil-off evolution was at a similar rate to that before the loading commenced.
About 31 hours after the loading had commenced, rollover occurred. The tank relief valves lifted for about 1 hour and 15 minutes and the vent discharged at high rates for a further 2 hours after the relief valves closed. The vapour release rate escalated to an estimated peak of 10 tonnes/hr and it was calculated that, before boil-off relapsed to its rate before loading had commenced, a total of 86 tonnes of vapour had been released.
When the relief valves started to lift, the plant management informed the port authority and local emergency services, who closed local roads, and the “Esso Brega” was moved off the berth. The tank was not damaged by the overpressure, although this rose to about 20 millibar above the design pressure, and no injuries were sustained.
In J. A. Sarsten’s report in “Pipeline & Gas Journal” Vol. 199 in 1972, it was stated that recurrence of this type of incident would be prevented by fitting an angled jet nozzle to promote mixing in the tank. This report also noted that the relief valves on the second tank lifted for a period of about 15 minutes. It is assumed that this tank was over-pressurised through a common vapour header.
Rollover on a Moss Type LNG Carrier
It was believed that rollover on a Moss type LNG carrier was unlikely to occur because the spherical shape of the tank would strengthen the convection current and ensure thorough mixing of the tank inventory, this being aided by the vessel's motion in a seaway.
However, in 2008, a Moss type 125 000 m3 LNG carrier discharged a cargo in the Far East that had been loaded in Trinidad, keeping over 8 500 m3 of LNG as heel in two cargo tanks for the onward voyage to the Mediterranean to load. After 8 days at sea, the vessel received orders to divert to load in a Japanese port, where it arrived 17 days after leaving the discharge port, arriving with a heel of over 5 000 m3 of LNG.
The port where the vessel loaded was a receiving terminal and the loading rate was less than half of what would normally be expected. Also, the vessel had to interrupt loading for several hours to ensure that the cargo tanks were cooled to acceptable limits. Both of these factors may have contributed to the stratification of the tanks' contents. The density of the cargo loaded in Trinidad was 427 kg/m3, that of the 8 500 m3 heel 454 kg/m3 and that loaded in Japan 454 kg/m3. Nitrogen content was negligible.
24 hours after leaving port, the levels were seen to increase in No. 3 and No. 4 tanks, which had contained the heel. After 5 days, whilst the vessel was waiting to berth at the discharge port, the tank pressures were seen to rise, accompanied by a drop in the tank levels in No. 3 and No. 4 tanks as rollover occurred. The crew shut in the vapour valves from No. 1, 2 and 5 tanks to send as much vapour as possible to the boilers from No. 3 and No. 4 tanks, in which the pressure peaked at 200 mbar. Shortly after this, the vessel berthed and was able to send vapour to the shore flare to stabilise the system and enable the opening custody transfer to begin.
This was not considered to be a serious rollover compared with the La Spezia incident, but it demonstrated that LNG carriers can experience stratification and rollover if heavy LNG is loaded under a heel of lighter density. The changes in tank level were more apparent because, when nearly full, a spherical tank will show a greater level change for a given volume change than a prismatic tank. At no time did the tank pressures exceed the design pressure, nor did the cargo tank pressure relief valves lift.
This is the summary of a verbal report given at a SIGTTO Panel Meeting in 2011 by the vessel operator and charterer.
Partington LNG Peak-Shaving Plant 1993
In 1993, a rollover occurred in a 21 000 tonne storage tank at the Partington LNG peak-shaving plant in the UK, resulting in the relief valves lifting and a considerable quantity of vapour being discharged to atmosphere.
The tank had a heel of 17 266 tonnes of LNG and a total of 3 433 tonnes of new product was added over a period of 24 days, ready for the winter period. During the final 13 days of this production run, two significant events occurred. First, a cryogenic distillation plant was commissioned, designed to reduce the heavy hydrocarbon and CO2 content of the feed gas to the liquefaction plant; secondly, due to the Morecambe Bay gas field being shut down, the N2 content of the feed gas to the plant dropped significantly.
68 days after filling ceased, the tank pressure started to rise rapidly and both the process relief valves and the emergency relief valves lifted, resulting in approximately 150 tonnes of vapour being vented to atmosphere from the tank over a 2 hour period. At no time did the pressure in the tank exceed its design pressure. The tank was not damaged and was subsequently returned to service after examination.
Calculations undertaken as part of the investigation into the incident indicated that the density of the tank heel prior to filling was approximately 446 kg/m3, to which 1 533 tonnes of LNG at 449 kg/m3 was initially added, followed by 1 900 tonnes of lighter LNG with a density of 433 kg/m3. The first phase of the run would have been expected to mix with the heel, but the lighter second phase would have stratified. In the first 58 days after filling, approximately 160 tonnes of LNG had boiled off, whereas calculations showed that about 350 tonnes would have been expected.
| | Heel | Phase 1 | Phase 2 |
|---|---|---|---|
| Tonnes | 17 226 | 1 533 | 1 900 |
| Nitrogen % | 0,38 | 1,6 | 0,5 |
| Methane % | 92,6 | 92,7 | 97,5 |
| Ethane % | 6,5 | 5,7 | 2,0 |
| Propane % | 0,46 | – | – |
| Butane % | 0,03 | – | – |
| Density kg/m3 | 446 | 449 | 433 |
Following this investigation, the operator introduced operational procedures at their peak-shaving sites for filling tanks and identifying stratification. These included determination of heel density by analysing export gas, controlling LNG density from the liquefaction plant to ensure it does not differ from the heel by more than 5 kg/m3, limiting N2 concentrations in the tank to less than 0,8 % after filling, and regular analysis of boil-off composition and rates. Should stratification be suspected, the tank is recirculated from bottom to top to mix the contents and release superheat.
Shipboard Rollover in the 1960s
January 20, 2021
## Thursday, January 24, 2013
### Sampling a Normal
The normal distribution is one of the most commonly used probability distributions because it turns out that many probabilities in the real world are approximately normal as a result of the central limit theorem. The standard normal distribution has a probability density function of
$$f(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2}$$
For something so common, its density function is not particularly nice to deal with; its cumulative distribution cannot be expressed in terms of elementary functions. This makes it a bit tricky to sample from as well, since the easiest way to sample a random variable is to sample a value from the uniform distribution on $[0, 1]$ and then invert the cumulative distribution function. Fortunately, people have come up with very clever ways of sampling a normal, and I'll provide a derivation of one, called the Box-Muller transform, here.
Consider the joint distribution of two independent normal variables:
$$f(x, y) = f(x)f(y) = \frac{1}{2\pi} e^{-\frac{1}{2}(x^2+y^2)}$$
This is a distribution across all points in the two-dimensional plane given by their $(x, y)$ coordinates. If we instead consider the polar representation of points, we can transform the given joint distribution via the change of variables formula. Using $x = r\cos{\theta}$ and $y = r\sin{\theta}$ gives (most calculations omitted for brevity)
$$f(r, \theta) = f(x, y) \left| \begin{array}{cc} \cos{\theta} & -r\sin{\theta} \\ \sin{\theta} & r\cos{\theta} \end{array} \right| = f(x, y) \cdot r = \frac{1}{2\pi} r e^{-\frac{1}{2} r^2}$$
Integrating out $\theta$ over the range $[0, 2\pi)$ leaves us with something that we can obtain the cumulative distribution for:
$$f(r) = r e^{-\frac{1}{2} r^2} \Rightarrow F(r) = \int_0^r f(t) dt = \int_0^r t e^{-\frac{1}{2} t^2} dt = 1 - e^{-\frac{1}{2} r^2}$$
Now we can apply the inversion technique to sample $r$; let $U_1$ be a random variable drawn from a uniform distribution on $[0, 1]$. We can compute $r = F^{-1}(1-U_1) = \sqrt{-2\ln{U_1}}$ (we decide to swap $U_1$ with $1-U_1$ to make the result nicer since the two have the same distribution). Notice that we can sample $\theta$ as well because it's actually uniformly distributed on $[0, 2\pi)$, so if $U_2$ is another uniform random variable on $[0, 1]$ then we can take $\theta = 2 \pi U_2$. Putting it all together, we get two (independent) sampled normals
$$x = r\cos{\theta} = \sqrt{-2\ln{U_1}} \cos{(2 \pi U_2)} \\ y = r\sin{\theta} = \sqrt{-2\ln{U_1}} \sin{(2 \pi U_2)}$$
To wrap things up and make it a bit more practical, here's a Java snippet of this in action. | 2018-11-14 03:21:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8587308526039124, "perplexity": 120.30287418806027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741578.24/warc/CC-MAIN-20181114020650-20181114042650-00263.warc.gz"} |
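A minimal sketch of the transform in Java (the `BoxMuller` class name and the sanity check in `main` are illustrative, not from any particular library):

```java
import java.util.Random;

public class BoxMuller {
    // Returns two independent standard normal samples built from two uniforms,
    // following x = sqrt(-2 ln U1) cos(2 pi U2), y = sqrt(-2 ln U1) sin(2 pi U2).
    static double[] sample(Random rng) {
        double u1 = rng.nextDouble();
        // Guard against log(0): nextDouble() can return 0.0 (but never 1.0).
        while (u1 == 0.0) u1 = rng.nextDouble();
        double u2 = rng.nextDouble();
        double r = Math.sqrt(-2.0 * Math.log(u1));
        double theta = 2.0 * Math.PI * u2;
        return new double[] { r * Math.cos(theta), r * Math.sin(theta) };
    }

    public static void main(String[] args) {
        // Sanity check: the sample mean should be near 0 and the variance near 1.
        Random rng = new Random(42);
        int n = 100000;
        double sum = 0, sumSq = 0;
        for (int i = 0; i < n; i++) {
            double x = sample(rng)[0];
            sum += x;
            sumSq += x * x;
        }
        double mean = sum / n;
        System.out.println("mean ~ " + mean + ", var ~ " + (sumSq / n - mean * mean));
    }
}
```

Each call produces two independent samples, so a practical implementation would typically cache the second value rather than discard it.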
https://prepinsta.com/zoho/aptitude/data-sufficiency/quiz-1/ | # Zoho Data sufficiency Quiz 1
Question 1
What will be the total weight of 95 boys, each of the same weight?
Statements:
I. The total weight of 43 boys is 60 kilograms more than the total weight of 42 boys.
II. Three-fifths of the weight of each boy is 36 kg.
Options:
I alone is sufficient while II alone is not sufficient.
II alone is sufficient while I alone is not sufficient.
Either I or II is sufficient.
Neither I nor II is sufficient.
Both I and II together are sufficient.

Question 2
Find the part of the cistern which can be filled in 1 hour.
Statements:
I. A tap fills the cistern in 17 hours.
II. There is only one tap to fill the cistern and there is no leakage.
Options:
I alone is sufficient while II alone is not sufficient.
II alone is sufficient while I alone is not sufficient.
Either I or II is sufficient.
Neither I nor II is sufficient.
Both I and II together are sufficient.

Question 3
What is the length of the bus?
Statements:
I. The bus crosses a 150-meter-long bridge in 18 seconds.
II. The bus crosses a girl running in the opposite direction in 9 seconds.
Options:
I alone is sufficient while II alone is not sufficient.
II alone is sufficient while I alone is not sufficient.
Either I or II is sufficient.
Neither I nor II is sufficient.
Both I and II together are sufficient.

Question 4
What is the speed of the stream?
Statements:
I. The ratio of the downstream speed to the upstream speed is 7 : 5.
II. The boat covers 200 km upstream in 8 hours.
Options:
I alone is sufficient while II alone is not sufficient.
II alone is sufficient while I alone is not sufficient.
Either I or II is sufficient.
Neither I nor II is sufficient.
Both I and II together are sufficient.

Question 5
In how many days can C alone do the work?
Statements:
I. A alone can complete the work in 17 days.
II. A and B together can complete the same work in 8 days.
Options:
I alone is sufficient while II alone is not sufficient.
II alone is sufficient while I alone is not sufficient.
Either I or II is sufficient.
Neither I nor II is sufficient.
Both I and II together are sufficient.

Question 6
Find the unique value of y.
Statements:
I. $y^{2} = 256$
II. $y^{3} = 4096$
Options:
I alone is sufficient while II alone is not sufficient.
II alone is sufficient while I alone is not sufficient.
Either I or II is sufficient.
Neither I nor II is sufficient.
Both I and II together are sufficient.

Question 7
Is y even?
Statements:
I. 3y - 17 = 34
II. 2y + 38 = 72
Options:
I alone is sufficient while II alone is not sufficient.
II alone is sufficient while I alone is not sufficient.
Either I or II is sufficient.
Neither I nor II is sufficient.
Both I and II together are sufficient.

Question 8
What is the distance between Gwalior and Indore?
Statements:
I. The distance between Sagar and Gwalior is 320 km.
II. The distance between Sagar and Indore is 380 km.
Options:
I alone is sufficient while II alone is not sufficient.
II alone is sufficient while I alone is not sufficient.
Either I or II is sufficient.
Neither I nor II is sufficient.
Both I and II together are sufficient.

Question 9
Ashita's salary is 125% of Pawan's salary. What is Pawan's salary?
Statements:
I. Ashita's and Pawan's salaries are in the ratio 5 : 4.
II. Ashita's salary is Rs. 63200.
Options:
Statement 1 alone is sufficient but statement 2 alone is not sufficient to answer the question.
Statement 2 alone is sufficient but statement 1 alone is not sufficient to answer the question.
Both statements 1 and 2 together are sufficient to answer the question but neither statement alone is sufficient.
Each statement alone is sufficient to answer the question.

Question 10
What is Abdul's birth date?
Statements:
I. Abdul's mother was born on 26th February 1965.
II. Abdul is 7 years older than his brother.
Options:
Statement 1 alone is sufficient but statement 2 alone is not sufficient to answer the question.
Statement 2 alone is sufficient but statement 1 alone is not sufficient to answer the question.
Both statements 1 and 2 together are sufficient to answer the question but neither statement alone is sufficient.
Each statement alone is sufficient to answer the question.
https://flatland.aicrowd.com/getting-started/env.html | # Flatland Environment
The goal in Flatland is simple:
We seek to minimize the time it takes to bring all the agents to their respective target.
This raises a number of questions:
## ↔️ Actions
The trains in Flatland have strongly limited movements, as you would expect from a railway simulation. This means that only a few actions are valid in most cases.
Here are the possible actions:
• DO_NOTHING: If the agent is already moving, it continues moving. If it is stopped, it stays stopped. Special case: if the agent is at a dead-end, this action will result in the train turning around.
• MOVE_LEFT: This action is only valid at cells where the agent can change direction towards the left. If chosen, the left transition and a rotation of the agent's orientation to the left are executed. If the agent is stopped, this action will cause it to start moving in any cell where forward or left is allowed!
• MOVE_FORWARD: The agent will move forward. This action will start the agent when stopped. At switches, this will chose the forward direction.
• MOVE_RIGHT: The same as MOVE_LEFT, but for right turns.
• STOP_MOVING: This action causes the agent to stop.
Code reference
The actions are defined in flatland.envs.rail_env.RailEnvActions.
You can refer to the directions in your code using eg RailEnvActions.MOVE_FORWARD, RailEnvActions.MOVE_RIGHT
## 👀 Observations
In Flatland, you have full control over the observations that your agents will work with. Three observations are provided as starting point. However, you are encouraged to implement your own.
The three provided observations are:
• Global grid observation
• Local grid observation
• Tree observation
Global, local and tree: A visual summary of the three provided observations.
🔗 Provided observations
Code reference
The provided observations are defined in envs/observations.py
Each of the provided observation has its strengths and weaknesses. However, it is unlikely that you will be able to solve the problem by using any single one of them directly. Instead you will need to design your own observation, which can be a combination of the existing ones or which could be radically different.
## 🌟 Rewards
At each time step, each agent receives a combined reward which consists of a local and a global reward signal.
Locally, the agent receives $$r_l = -1$$ for each time step, and $$r_l = 0$$ for each time step after it has reached its target location. The global reward signal $$r_g = 0$$ only returns a non-zero value when all agents have reached their targets, in which case it is worth $$r_g = 1$$.
Every agent $$i$$ receives a reward:
$r_i(t) = \alpha r_l(t) + \beta r_g(t)$
$$\alpha$$ and $$\beta$$ are factors for tuning collaborative behavior. This reward creates an objective of finishing the episode as quickly as possible in a collaborative way.
In the NeurIPS 2020 challenge, the values used are: $$\alpha = 1.0$$ and $$\beta = 1.0$$.
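The reward combination above can be sketched as follows (an illustrative translation of the formula, not the library's actual implementation — Flatland itself computes this in Python):

```java
public class RewardSketch {
    // Combines the local and global reward signals for one agent at one step:
    // r_i(t) = alpha * r_l(t) + beta * r_g(t).
    // alpha and beta are the tuning factors from the text (both 1.0 for NeurIPS 2020).
    static double agentReward(boolean atTarget, boolean allAtTarget,
                              double alpha, double beta) {
        double rLocal = atTarget ? 0.0 : -1.0;    // -1 per step until the target is reached
        double rGlobal = allAtTarget ? 1.0 : 0.0; // 1 only once every agent has arrived
        return alpha * rLocal + beta * rGlobal;
    }
}
```

With both factors at 1.0, an agent still en route collects -1 per step, a finished agent collects 0, and everyone collects a +1 bonus on the step when all agents have arrived.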
Code reference
The reward is calculated in envs/rail_env.py
The episodes finish when all the trains have reached their target, or when the maximum number of time steps is reached.
## 🚉 Other concepts
### Custom levels
Going further, you will want to run experiment using a variety of environments. You can create custom levels either using multiple random generators, or design them by hands.
🔗 Generate custom levels
### Stochasticity
An important aspect of these levels will be their stochasticity, which means how often and for how long trains will malfunction. Malfunctions force the agents to reconsider their plans, which can be costly.
http://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-5-review-page-403/9 | ## Algebra: A Combined Approach (4th Edition)
$(3b)^{0}=1$ Recall that $a^{0}=1$, where $a$ is a nonzero real number. | 2018-04-19 18:16:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9565199017524719, "perplexity": 545.5305784524763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937015.7/warc/CC-MAIN-20180419165443-20180419185443-00182.warc.gz"} |
https://www.physicsforums.com/threads/magnetic-field-in-a-cavity.582473/ | # Homework Help: Magnetic field in a cavity
1. Feb 29, 2012
### c299792458
1. The problem statement, all variables and given/known data
We are given an infinitely long cylinder of radius b with an empty cylinder (not coaxial) cut out of it, of radius a. The system carries a steady current (direction along the cylinders) of size I. I am trying to find the magnetic field at a point in the hollow. I am told that the answer is that the magnetic field is uniform throughout the cavity and is proportional to $d\over b^2-a^2$ where $d$ is the distance between the centers of the cylinders.
3. The attempt at a solution
I have found by using Ampere's law that the magnetic field at a point at distance r from the axis in a cylinder of radius R carrying a steady current, I, is given by $\mu_0 I r\over 2\pi R^2$. So I thought I would use superposition. But what I get is ${\mu_0 I \sqrt{(x-d)^2+y^2}\over 2\pi b^2}-{\mu_0 I \sqrt{(x)^2+y^2}\over 2\pi a^2}$. However this is not the given answer!
2. Feb 29, 2012
### M Quack
You are on the right track, but you have to superpose the magnetic field vectors.
3. Feb 29, 2012
### c299792458
@M Quack: Thank you. I don't know how to change these into vectors, could you please kindly give me another nudge? Thanks again.
4. Feb 29, 2012
### M Quack
The magnetic field generated by a long wire goes right around the wire. So it is perpendicular to the radial vector.
If the wire is along (0,0,z) and your point at (x,y,z), you know that B_z=0 and that
B is perpendicular to (x,y,0). What vector has these properties?
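For reference, the vector with those properties is $(-y, x, 0)$ (up to scale), and carrying the superposition through with that form gives the quoted answer. A sketch, taking the cavity center at distance $d$ along the $x$-axis and uniform current density $J = I/\pi(b^2 - a^2)$ over the conducting region:

$$\mathbf{B}_{\text{full}}(x,y) = \frac{\mu_0 J}{2}\,(-y,\; x,\; 0)$$

$$\mathbf{B} = \frac{\mu_0 J}{2}\Big[(-y,\; x,\; 0) - \big({-y},\; x-d,\; 0\big)\Big] = \frac{\mu_0 J d}{2}\,\hat{\mathbf{y}} = \frac{\mu_0 I d}{2\pi(b^2-a^2)}\,\hat{\mathbf{y}}$$

which is uniform throughout the cavity and proportional to $d/(b^2 - a^2)$, as required.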
https://worldbuilding.stackexchange.com/questions/25724/what-good-are-herbivores-in-an-animal-army/25735 | # What good are herbivores in an animal army?
This is a multi-species animal army with predators of all shapes and sizes: foxes, lions, bears, tigers, leopards, wolves, lynx, eagles, falcons and so on.
The herbivores have been excluded from the army because they appear to serve no apparent purpose. They have no talons, no sharp teeth, no ability to pounce on a foe. A few of the more patriotic herbivores have volunteered their services but every time were rebuffed and ridiculed.
This is strictly animal on animal combat. There are no tools or artificial armor. All animals have achieved human level intelligence without tool building. They are able to vocally communicate fluently regardless of their physiology. They feel the same six emotions that humans feel, anger, fear, surprise, disgust, happiness and sadness. Each species has general personality attributes derived from that species' physical characteristics. Eagles tend to be a bit arrogant. Wolves are highly social. Elephants tend toward wisdom and pontification.
Discussion about what the carnivores eat is out of scope. Also, why the herbivores would want to help the carnivores is out of scope. They just do.
How can the herbivores prove that they have a place in the army alongside the carnivores? What special capabilities do they bring to the table that would be valuable in winning a war?
• Have you heard the parable of the Lion and the Mouse? – JDługosz Jun 4 '16 at 5:03
• This (and some of the answers) anticipate the themes in Zootopia (May 2016). – JDługosz Jun 4 '16 at 5:11
• Are you telling me, that a bunch of wolves, who have to put considerable effort into planning the tactics required to take down a lone bison for dinner, can't see the utility in an elephant?! The wolves would surely approach the elephant for help! "Hey, can you help us?" "Fight! I don't know how to fight." "That's the beauty of it, you don't need to. You just run in and try and stand on everything." "Oh!" – inappropriateCode Jun 6 '16 at 16:39
• Take a look at huntercourse.com/blog/2011/11/… Note that of the first five animals listed, only the crocodile eats meat. The elephant, rhino, hippo and cape buffalo are very dangerous herbivores if provoked. – Mark Ripley Sep 17 '16 at 9:37
• Animals like rhinos and elephants are GIANT and have sharp tusks/horns. Plus, you don't need to expend resources for herbivores cause it's easier to grow plants than grow, fatten, and slaughter cattle or other domesticated animals for the carnivores. – WorldCraftTrainee Nov 29 '17 at 7:02
I second the answers that place herbivores in the role of heavy cavalry (or, I would hazard, animal 'pikemen' - big masses of bodies that a few lions or dogs cannot really approach), or as runners, scouts etc.
But consider the social dynamics. Most carnivores are lone hunters (e.g. birds of prey) or hunt in small packs. Two small packs of lions in the same territory will fight each other before they turn their attention to prey. Completely unmanageable.
The strategists and generals of the army may well be intelligent herbivores, since they are used to dealing in large numbers and the movement of any army worthy of the name is necessarily the movement of a herd.
Managing an army is a man-management and foraging task. More soldiers die of disease and malnutrition than enemy action. Persuasive oratory by the general is constantly needed to prevent mutiny when the promised wages inevitably fail to materialise.
Your supreme commander is probably a Pig. He 'gets' the psychology of both plant eaters and meat eaters. He can make speeches that appeal to both.
His Aides de Camp include hares, horses, rats etc as well as carnivores. The army's chief quartermaster is a squirrel (for obvious reasons). The rats are his military police and political commissars (supported by dog packs as enforcers), they are everywhere and constantly on the lookout for sedition and poor morale.
The main body of the army - deployed in the centre - consists of a battle hardened Bovine core. They don't do much, but serve to anchor the centre and while they retain cohesion, are unlikely to be routed by a few mangy carnivores, especially since the former outnumber the latter. The Bovine blocks are probably officered by dogs to move them about effectively. This main core does not charge the enemy until their foes are on the brink of collapse, because once they let rip you ain't getting them back in formation.
The carnivores are deployed on the wings like traditional cavalry - to try and encircle the enemy centre, or as skirmishers ranged along the front to worry the main body of the enemy force, and possibly a few key shock troops kept in reserve - however this small cadre is unlikely to be decisive.
Another possible formation echoes renaissance pike and shot armies. Big blocks of dumb heavy Bovines (pikes) surrounded and supported by faster moving and more long-ranged carnivores (musketeers). When superior enemy numbers threaten, the carnivore sleeves of the animal tercio shelter under the legs of their more bulky comrades.
Substitute the pikemen for cows and the blocks of shot at the corners for carnivores. Then go read this fabulous account of the Battle of Ceresole.
A snippet:
The pike and shot infantry had by this time adopted a system in which arquebusiers and pikemen were intermingled in combined units; both the French and the Imperial infantry contained men with firearms interspersed in the larger columns of pikemen.[42] This combination of pikes and small arms made close-quarters fighting extremely bloody.[43] The mixed infantry was normally placed in separate clusters, with the arquebusiers on the flanks of a central column of pikemen; at Ceresole, however, the French infantry had been arranged with the first rank of pikemen followed immediately by a rank of arquebusiers, who were ordered to hold their fire until the two columns met.[44] Montluc, who claimed to have devised the scheme, wrote that:
In this way we should kill all their captains in the front rank. But we found that they were as ingenious as ourselves, for behind their first line of pikes they had put pistoleers. Neither side fired till we were touching—and then there was a wholesale slaughter: every shot told: the whole front rank on each side went down.[45]
Again, substitute the big bovines for pikes and the big cats for those armed with firearms.
Another bit of Ceresole from the link above that I like, which I have edited to substitute animals for men:
On the first charge, Enghien's wolfpack penetrated a corner of the Imperial Bull-square, pushing through to the rear and losing some of the volunteers from the Black Forest.[54] As the bulls' ranks closed again, the wolfpack turned and made a second charge under constant aerial attack from the hawks circling above the Imperial formations. This was far more costly, and again failed to break the Imperial Bulls. Enghien, now joined by Dampierre's Foxes, made a third charge, which again failed to achieve a decisive result; fewer than a hundred of the wolves remained afterwards.
Enghien believed the battle to be lost—according to Montluc, he intended to stab himself, "which ancient Romans might do, but not good Christians"—when St. Julian, the commander of his own Bulls, arrived from the center of the battlefield and reported that the Imperial Bulls there had broken formation after a long horn to horn tussle with our own, and then been chased from the battlefield by our own skirmishing hounds, which had been held in reserve.
• Brilliant! And a proposed order of battle too! – Green Sep 15 '15 at 20:56
• I added a few more tidbits and a piccy. A mixed force of animals is crying out to be deployed like this! – rumguff Sep 15 '15 at 21:30
• "Your supreme commander is probably a Pig." Because some comrades are more equal than others? Also this bit amused me: "The army's chief quartermaster is a squirrel (for obvious reasons)." In my mind those reasons being a squirrel hopelessly trying to drag a sword to and from their hidden stash of nuts and weapons... kind of like if the IRA's quartermasters happened to be dwarf leprechauns. – inappropriateCode Jun 6 '16 at 16:48
I think the herbivores would argue that the carnivores are highly specialised, and tend not to be very social.
On a battlefield, charging Rhinos, Bisons, Buffalos, Bulls, etc. could break any line of carnivorous animals.
You need to defend a position? Who are you calling? a Hyena or an elephant? Same story in the sea, larger whales are not carnivorous.
But armies have to be made of complementary units and not only fighting units.
• Giraffes are great guards. Their long neck allows them to see any coming enemies from far away.
• Small herbivores can scout the enemy lines quite well: mice, rabbits.
• And you need logistics in your army. Who's to bring food? A horse or a donkey would be much more efficient than tigers. Transport messages on long distances? Not the lions, more antelopes.
• Beavers make ideal military engineers.
And seriously, who would not want to have Koalas in their army? They'd make great spies: so cute that no one would ever doubt them.
• Notes: The giraffes are outclassed by the falcons. You can't easily pack much food on the donkey as there are no saddle bags (they do carry themselve though) and so are outclassed by pelicans (or kangaroos with their arms as their pouches might get irratated). Migratory birds are best for non-stealth message transport and those are either herbivores or carnivores. Owls are best for stealth message transport (wings designed to make little noise). Really though you need many soldiers doing these taskes and a larger recruitment pool is great. – kaine Sep 15 '15 at 20:40
• @kaine, I voluntary left the birds out of the list. May birds are omnivorous, so I don't know if they are included and those eating insects aren't technically carnivorous either. So I wanted to avoid technicalities. When you transport food, a truck does not load itself. Same here, you lay the dead animals on top of horses or donkeys... using an elephant for example. I don't know how much wait can the kangaroos or the pelicans carry, but I doubt they surpass the equidaes. So you're comparing a truck with a car. Both are good. – clem steredenn Sep 15 '15 at 20:47
• Krill are small crustaceans and they are a large part of whalebone whale diets. Meanwhile, toothed whales eat larger animals. Whalebone whales might not actively hunt but their diet contains plenty of animal protein. – kram1032 Sep 15 '15 at 20:47
• That's fair enough. Really, the question quickly becomes very broad if one looks too deeply into it. What really is considered in what category? And what scales are interesting for OP? Are bacteria considered? Probably not. But where is the cutoff? Etc. I definitely see why you want to avoid technicalities. Koalas probably are druggies in this universe considering their liberal usage of eucalyptus. – kram1032 Sep 15 '15 at 21:04
• Good job on pointing out the flaws with the assumption that herbivores couldn't do much in an army, without getting bogged down on all the vagueness of the question's premises and assumptions. – Karen Sep 16 '15 at 13:19
This hippo seeks to disagree with the notion that herbivores lack sharp teeth. Or, at least, she would disagree with that notion were anyone to present it to her in person, but they won't, because they know that this hippo will wreck them.
Well, there was one animal that disagreed with her about the whole 'sharp teeth' thing. "You want some dangerous teeth?" Said the other animal, "I'll show you some dangerous teeth.
There are no carnivores in that picture because they all fled in terror, happy to fight alongside these huge-toothed engines of destruction, if only because it meant not having to fight against them.
• Please, I need context for these photos! – Raystafarian Sep 16 '15 at 16:33
• I found it – Raystafarian Sep 16 '15 at 17:27
• +1 Hippos kill more people every year than lions, elephants, wolves and sharks combined. – Gaurav Sep 17 '15 at 0:31
• Here's my translation of @Raystafarian's link: "A mother's courage sometimes knows no limit! During his travel in the private park of Erindi in Namibia, the photographer, Mr. Ryan van Shalkveyk, was fortunate enough to capture a fight (without casualties, I assure you) so rare I decided to show you! Indeed, that day, an elephant charged towards a group of hippos. The group's female assumed a good position to receive the attack, giving her children enough time to flee. The mother was not gravely injured. Her thick skin protected her and the elephant did not use its tusks." – Nolonar Sep 18 '15 at 18:54
• @mojo AFAIK, the issue is that hippos like to stay submerged in rivers, where they are difficult to spot. This makes it easier for people to inadvertently get inside their "safety zone" than inside the safety zone of elephants or rhinos. – SJuan76 Sep 18 '15 at 21:36
# This question makes the number one mistake made by budding armchair generals.
"You will not find it difficult to prove that battles, campaigns, and even wars have been won or lost primarily because of logistics." - General Dwight D. Eisenhower
Your carnivore army can be defeated without being engaged in a single battle.
Nothing kills in combat like disease and famine.
So let's start with your supposed army of carnivores. If I were in a campaign against your army, the first thing I would do is retreat to my lines, lock myself in my castle/cave/whatever fortified position, and wait for winter.
Within days of the beginning of the siege, your army of carnivores will find themselves stretched beyond their supply lines, and they will lose discipline and resort to in-fighting, eating each other.
The fact of the matter is, supplying an army of carnivores is actually impossible. The dynamics of a realistic army would actually have the herbivores make up the bulk of the fighting force, whilst the carnivores would act as roving skirmishers and raiders.
It's not all bad though. Like modern fighting forces, the roving skirmishers/raiders tend to be given elite status, as they operate deep in enemy lines with little support.
• The truth of Eisenhower's statement is difficult for people who haven't studied the subject (or spent time in the military) to grasp. Battles are won before they are fought, for a hundred reasons. Each side engages either because they must, or (more often) because they think they know the situation and disposition of the other side well enough to be confident -- and we're nearly always wrong about most of what we think we know. Nearly everything about an enemy's situation, disposition and even training traces back to logistics in one form or another. – zxq9 Sep 17 '15 at 5:33
Interestingly, many herbivores are very capable fighters. Have you ever seen videos of deer beating the crap out of an idiot who got too close? They also have antlers which they can use to keep predators at bay.
Elk and Moose are huge animals with a lot of destructive power; Moose have totally trashed snowmobiles with their antlers. What about Buffalo? I would be much more worried about being attacked by a single Bison than any single wolf. Hippos, Rhinos, etc. are also extremely dangerous when threatened. One on one, most large herbivores are much more dangerous than their predatory counterpart.
Now rabbits? Not so much, but they are quick (they could be messengers) and are low to the ground, and thus could bite ankles and such as a distraction.
But just because herbivores don't have 'claws' doesn't make them 'safe'. And plenty of herbivores DO have claws (including rabbits). Hippos and boars have tusks, rhinos and bison have horns, hooves can cause serious injury, and weight is a weapon too. Just because someone doesn't want to fight doesn't mean they can't.
Edit: And I forgot about these! Deer with fangs! And unlike the jackalope, these guys really exist!
• Rabbits don't have claws?! Tell that to all the bleeding scratches I've had from our pet rabbit whenever she panicked while being held - a frantically kicking rabbit can rip your skin to shreds if you get in the way of those long claws. – Monty Wild Sep 16 '15 at 2:42
• @MontyWild ha, that is not what I meant, but that certainly is how it reads. I'll clean up that paragraph... – bowlturner Sep 16 '15 at 2:46
• Now rabbits? not so much, but they are quick and they are immune to anything but the Holy Hand Grenade of Antioch – WernerCD Sep 16 '15 at 15:08
At the moment I can think of 6 things besides unique species characteristics. Please note though that it will help you to consider exactly what species you want to focus on, to see if individual species have more useful unique characteristics. Monkeys have hands; moles (OK, insectivores) burrow; etc.
1) Number: Because plant life is much easier to produce en masse than meat, expect that the biomass of herbivores will greatly exceed that of carnivores. For every lion, you need many zebras. They are very good at stampeding.
2) Size: While not the majority, the most giant beasts tend to be herbivores. Ignoring elephants and rhinos for now, there are large numbers of large grazing mammals such as bulls that can do some serious damage if they put their weight into it.
3) Repurposed defense: Herbivores evolved to survive carnivore attacks. Obvious examples include horns and hooves, but for strategic purposes, poisons or spines could theoretically serve a purpose. If they are good enough to defend, they are usually good enough for attacking. They could also help to slow advancing forces by standing their ground.
4) Repurposed escape: Some herbivores survive not by defending themselves but by getting away. This means they might be good for reconnaissance but are generally outclassed as far as this goes by falcons and what not. They may, however, be able to move quickly and not be seen. A mouse is great at hiding and spying. A sparrow can get in and out quietly and quickly to pass the info on.
5) Incite riots: Finally, you can use the carnivore/herbivore thing to your advantage. While the pure carnivore enemies plan their war, the more allied forces sneak into enemy territory and speak to the other herbivores. "Hey dude, I'm like you but it is better over where I'm at. Help us out and it will be better here." Let the monkey and hare then release all the cattle. The rooster then calls loud enough to signal the other camp to go forward and all hell breaks loose.
6) Easier food supply for managing the army: How are creatures that can't use tools going to store food for an extended conflict? Squirrels are smart enough to store provisions. I think human-intelligence-level cows can find a way to place a few haystacks in sheltered areas and have sheep resupply the pile as they can. I have no idea how a wolf could do that with meat if they can't freeze or salt it. Wars are won by logistics as much as, if not more so than, tactics. A large army of herbivores definitely stands a better chance on the logistics side as long as they can protect their dead (and admittedly feed them to allied carnivores). Before long, the enemy army will bring potential future allies (see incite riots) to just behind the front lines for food. Otherwise, internal conflict and consumption will ensue. The carnivore army can best survive if it is advancing (pillaging) or being attacked. If it is besieged or forced to retreat it cannot recover as easily. The only advantage they have logistically is that their food can walk. (Note this idea works better if someone is smart enough to find a way to carry hay well, such as tying it somehow to the sheep's fleece.)
• Putting this here. Several people discuss communication. If a team has elephants, they are the perfect communication hub. They produce infrasonic tones which can communicate to other elephants 4km away. Call them old and wise and they can encode this further. – kaine Sep 15 '15 at 19:37
• @Green Indeed, there are two things humans are uniquely best at: thinking and running. – PyRulez Sep 15 '15 at 22:37
Herbivores can be extremely dangerous:
Sharks, Lions, and Wolves combined have a fraction of the body count of Hippos.
Dogs are predators and are way ahead, but that's because of their comparatively high population. I did find the Croc number interesting.
However, obviously your predators should also get out of the army and make way for the legions of deadly insects.
• Ah, but the actual killers here for most are the protozoans and trypanosomes etc. They are killer microorganisms... maybe they can gain human-level sentience... then what? (note: a joke) – kaine Sep 15 '15 at 19:21
• Giving microorganisms human level sentience would be interesting, given that they don't have the sensory capabilities to comprehend a human-level world, or really to manipulate their environment, for the most part. Trophozoites, for example, would have the ability to glide about blindly, consume their immediate surroundings, and perhaps sense some chemical properties of their surroundings. The existence of larger organisms, or even other cells, would be out of the ability of their sensory organs to determine, even if backed (somehow) by a human-strength mind. – ckersch Sep 16 '15 at 1:14
• @ckersch: A micro-organism that can see through the host's sensory apparatus can make for an interesting story. See Robert Sawyer's novel, End of an Era. sfwriter.com/exer.htm – Peter Cordes Sep 16 '15 at 15:46
• @zxq9: Keep in mind this is worldwide, I'm sure the first world numbers are quite different. it also obviously includes diseases spread by animals - otherwise mosquito wouldn't be on the list - so that might be distorting at least the dog numbers as well. – Dan Smolinske Sep 17 '15 at 14:40
• @DanSmolinske Other figures in that same article contradict figures in the image above, though. Not that pravda.ru is a particularly reliable source (lots of circular citation going around the web). Might be a fun topic for Snopes or skeptics.se. – zxq9 Sep 17 '15 at 16:25
• Elephant.
Their size is a definitive advantage. Nobody would try to approach them without caution. Their tusks can throw enemies around and they can also impale them. Nobody wants to be charged by an elephant.
• Bison, it's a lot faster than an elephant.
Rhinos and elephants charging in formation would make some formidable "heavy cavalry". Also, hippos kill more people than any other large animal (pigs or larger) per year in Africa. Smaller animals tend to be disease carriers rather than direct killers of humans, but I'm not sure that's relevant. Generally, larger and slower herbivores have mechanisms to defend themselves from predators, and can be extremely dangerous. Then there are also non combat roles. Giraffes could communicate over distances on battle field due to their height, birds for reconnaissance, small fast herbivores as forward scouts. Generally herbivores that rely on speed as a defensive mechanism can either straight out out-pace predators, or otherwise out-distance them. There are reasons wolves hunt in packs. Cheetahs tire quickly.
TL;DR: Strength in numbers and disease.
The premise of this question is flawed; an animal does not need claws, talons, or pouncing ability to kill; sheer size can be a threat. Herbivores have horns that in a ramming blow can easily break ribs. Plus herbivores have vastly larger numbers. After all, there is a reason wolves don't go head-on into herds and lions don't attack groups of elephants. Also, speaking of elephants, their tusks are a major threat as well. I would think that the primary force would be herbivores used as infantry, while carnivores would be more of a cavalry unit. Keep in mind that strategy plays a large part in this as well.
Also, while I'm here, let's talk about disease. Rats make up over a third of all mammals and bats make up another quarter; that's 55% already taken by just 2 orders! Bats have guano that is poisonous to most animals (I'm thinking of fruit bats, if insect-eating counts as carnivore in your plan). Rats are also carriers of disease as they feed on carrion. Send thousands upon thousands of both and the enemy is screwed.
Edit: This next part assumes insectivores are not counted as carnivores and are treated the same as herbivores. I could easily see massive swarms of flies, bees, ants, and grasshoppers suffocating the enemies by sheer numbers.
Herbivores would make great spies. "That rabbit...who cares about a rabbit...he's just food. Just ignore him...".
Herbivores are easier to feed. They eat plants, which grow everywhere. How does the army feed itself? Better an army that can forage than one which must take time to hunt.
Rodents are great diggers, and they would be better fighters than moles. Beavers can build infrastructure. Mice can nibble their way through substances that would blunt and break a carnivore's teeth. They could be sent on rescue missions.
Because of their skeletal structure they would be better able to carry heavy loads. That's why you can ride a horse for long periods of time but you can't do the same to a big cat. They are also built to travel long distances better than a carnivore, which is basically made for dash-and-strike. And because they are foragers, they don't get sluggish after a meal.
They would also be better at teamwork. Very few carnivores are by nature equipped to work in teams.
And what about omnivores? You don't mention those. A skunk would make a great weapon to use against carnivores with sensitive noses. And most of the animals with higher intelligence are going to be omnivorous.
• Humans are the only predator I know of that runs its prey to death. Some predators have impressively large ranges that they cover daily or weekly but that's at a lope instead of sprint. – Green Sep 15 '15 at 21:51
• Actually, he does mention omnivores (at least Bears) but he classifies them as carnivores. – krowe Sep 16 '15 at 3:54
• -1 "Herbivores would make great spies." This is nonsense. Anyone who has ever seen any nature program will have seen "carnivore x sneaking up to herd of herbivore y". I have yet to see a nature program of "herbivore y sneaking up to plant z". – Aron Sep 16 '15 at 13:25
• Don't count out rabbits; recently this video came out of a momma rabbit saving at least one of her three babies by biting and kicking the crap out of a snake who was strangling them. – CR Drost Sep 16 '15 at 22:30
• @ChrisDrost That's mighty-momma power. Inherent to most mammals. All bets are off if you face any army off against an army of panicked-for-their-children mammalian mothers. – zxq9 Sep 17 '15 at 16:43
I'll second the emphasis on logistics. Herbivores are ten times more efficient when you compare the calories they eat to the calories you need to fatten up animals for carnivores to eat. Herbivores can probably live off the land. Carnivores probably have to raid. This might, however, get back to the mystery of how the carnivores eat.
Primarily, though, the issue isn't even which individual herbivores can outfight carnivores one on one or what specialized roles some species can play. (Although remember: in the real world, horses were the main way to haul things and send messages until the middle of the 20th century.) The question is whether herbivores can contribute at all, even as auxiliaries.
Analogies this brings to mind are the racial desegregation of the US armed forces and the gradual expansion of the role of women in combat, in many countries, including the arguments people made against them.
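The "ten times more efficient" figure above can be turned into back-of-the-envelope arithmetic. This sketch uses the common ~10% trophic-efficiency rule of thumb; the daily ration is an assumed round number, not data from the answer.

```python
# Back-of-the-envelope logistics: the ~10% trophic-efficiency rule of thumb
# means one extra step in the food chain multiplies plant input by ~10x.
# All figures here are assumed round numbers for illustration.

TROPHIC_EFFICIENCY = 0.10  # fraction of eaten calories that becomes prey biomass

def plant_calories_needed(soldier_calories, is_carnivore):
    """Plant-equivalent calories required to deliver soldier_calories of food."""
    if is_carnivore:
        # plants -> prey herbivore -> carnivore: one extra trophic step
        return soldier_calories / TROPHIC_EFFICIENCY
    return soldier_calories

daily_ration = 20_000  # kcal per animal soldier per day (assumed)
herbivore_cost = plant_calories_needed(daily_ration, is_carnivore=False)
carnivore_cost = plant_calories_needed(daily_ration, is_carnivore=True)

print(round(carnivore_cost / herbivore_cost))  # the "ten times" in the text
```

Feeding an equal-sized carnivore formation thus ties up an order of magnitude more land and forage, which is the logistical point several answers make.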
Combat is violence, but that is pointless without a strategy, and strategy is the heart of war.
Carnivores tend to have a hard time ranging solo (covering lots of distance per day, day after day), surviving in various terrain (or even negotiating it sometimes), carrying loads, etc. Once out of their element carnivores are helpless.
Herbivores, on the other hand, tend to have athletic endurance, be fleet of foot, range very well, survive in harsh terrain, adapt well to drastic weather changes, etc. Herbivores are your scouts, support formations, signal corps, etc. If these animals can talk then they are probably also your builders and engineers, and a formation of engineers can easily be worth ten times their equivalent number in infantry because they are force multipliers (what they do leverages what violence you can already bring).
The outcome of battles is generally determined before they are fought. Most of the time each side thinks it has the upper hand, and is keeping as much about its own nature and disposition as secret as possible. All of the set-up work that determines whether a fight will be a win or a loss happens before the engagement starts, and is the outcome of the employment of these herbivore formations.
War is movement, not toe-to-toe slugging it out against a perfectly matched foe (in war that is "stupidity", in a different context it is "regulated sport"). What does it matter when you are on a march to X when the enemy has already moved to Y? If the enemy is faster than your combat formations you must either locate them, or locate a target that will force them to assemble in its defense before they can pre-empt your move. This is all about battlefield intelligence, and you can probably only get that by employing swift, long-ranging animals that survive well alone or in small groups -- and these are usually herbivores.
Also, all that other stuff people said about herbivores not being wimps -- that stuff too. Having encountered cougars, leopards, cape buffalo and elephants in the wild, I have to say I am much more afraid of bull elephants and cape buffalo than pretty much any other land animal (aside from humans). Actually, it is hard to imagine any formation of cat- or wolf-like predators surviving a head-on formation battle against a determined formation of cape buffalo or elephants. That's why these predators target the weak, sick, deformed, crippled, injured and young.
Herbivores have a place in the army as many others have stated for many reasons:
1) As previously stated many times, many of the largest animals are not only herbivores, but also have very tough skin (a natural armor). This attribute was not specifically pointed out, though it was alluded to when it was mentioned that they are good for holding a line. The elephant may go down, but it will kill or maim tens or hundreds before doing so, and survive long enough to give the opportunity for other animals (mongooses, snakes, other death-blow-dealing animals) to slip in unnoticed and slay the already engaged enemy.
2) Though the falcon, owl, or eagle makes a great scout, they are highly visible (the owl much less so, but it's hard to hide when there is nothing to blend against). This can pose a problem if the scouting action is to remain unnoticed. Small ground-based herbivores such as a rabbit or squirrel make excellent scouts since they can move fast and quiet, and are incredibly nimble. Squirrels have the added advantage of being able to reach high perches with relative ease while still blending in with the surrounding environment, as well as being able to jump from tree to tree to avoid the carnivores.
3) The already-several-times-answered issue of beans, bullets, band-aids. It is much easier to find your food growing, absent of thought and stationary, than moving and intelligently avoiding the predator. This reason alone is enough to warrant herbivore integration. Herbivores can be sustained fairly easily, and can also take advantage of the healing benefits of herbs and other plant-based medicines.
4) Herbivores tend to mass-produce and live in colonies, adding chaos to chaos when this nature is applied in a battle environment. It is much easier to track the individual lions in the pack than it is to track the individual zebras in a herd. This idea stands to capitalize on the confusion of details, and can be exploited to send messages (can't kill the messenger if you can't find him), as well as to mask troop movements (can't see past the cloud of dust the herd is kicking up, or at least can't tell the direction of movement, since the sheer numbers make it hard to track individual movement).
5) Some herbivores are extremely dangerous when provoked. Many primates are herbivores, some have enormous strength, as well as dexterity close to or even exceeding a human. When this combination is applied with human level intellect, this creature can be multitudes more deadly than the carnivore equipped with just claws and fangs, regardless of intellect. Think of a gorilla that has mastered martial arts.
There are many more reasons, I'm sure; I will add them as I think of them. Here is a list of dangerous herbivores that surely fit the standard of a fighting force. I did not include humans in this consideration since they are the standard to which all other animals have been compared, as well as having a mastery of tool building; and it's not easy to classify a human as herbivore or carnivore since it's more culture-based than species-based.
http://listverse.com/2010/01/10/top-10-herbivores-you-probably-want-to-avoid/
• Hi Hephaestus. Welcome to Worldbuilding SE. Nice starting answer! – clem steredenn Sep 18 '15 at 7:18
• Thank you! I'm still relatively new to SE in general, but it seems that it has everything the mind can think of! – Hephaestus Sep 18 '15 at 11:54
• Yes, your reputation gives you away. Don't hesitate to check the help center, or otherwise ask questions on Worldbuilding Meta or Worldbuilding Chat. – clem steredenn Sep 18 '15 at 11:56
http://mathhelpforum.com/pre-calculus/113608-find-difference-quotient-simplify-answer.html

# Thread: Find the difference quotient and simplify answer
1. ## Find the difference quotient and simplify answer
f(x) = x^3 - 5x^2 + x, [f(x+h) - f(x)]/h, h ≠ 0
so far i got this
(3x^2h + 3xh^2 + h^3 - 10xh - 5h^2 + h)/h
I just need the correct answer to see what I did or did not do.
2. Now factorise h out of the top to get h(....) and then cancel the h's.
3. OK, now if I factor out h then I get
h(3x^2+3xh+h^2-10x-5h)/h
which = 3x^2+3xh+h^2-10x-5h
For some reason this doesn't look right to me. What did I do wrong?
4. you've lost a "+1" at the end
5. You're not helping! What is the answer? I'm so stuck. I know I lost the +1 at the end. What's next?
6. I actually think I am helping - I'm just not doing it for you ....there's a difference!
What is the actual question...because you have "found the difference quotient and simplified the answer". Are you supposed to now go on and take the limit as h approaches 0?
7. Originally Posted by Debsta
I actually think I am helping - I'm just not doing it for you ....there's a difference!
What is the actual question...because you have "found the difference quotient and simplified the answer". Are you supposed to now go on and take the limit as h approaches 0?
I'm sorry, you are helping; it's just that I'm too freaking dumb to understand this stuff.
8. Apology accepted. Don't put yourself down. What you have done so far is right (except for losing the +1 at the end, which is easy to do). Can you please quote the question word for word?
9. Originally Posted by Debsta
Apology accepted. Don't put yourself down. What you have done so far is right (except for losing the +1 at the end, which is easy to do). Can you please quote the question word for word?
Find the difference quotient and simplify your answer.
f(x) = x^3 - 5x^2 + x, [f(x+h) - f(x)]/h, h ≠ 0
I tried my best, but the answer I got looks so different from the examples in the book. Or maybe the answer is right but just looks different?
10. Can you show me the answer in the book - so I can see if it is just in a different form?
11. Originally Posted by Debsta
Can you show me the answer in the book - so I can see if it is just in a different form?
The book does not give the answer to this problem because it's an even-numbered problem; the book only gives answers to odd-numbered problems. But it's OK, I think this is correct. Thank you so much.
12. Glad to help!
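As a postscript to the thread: the simplified answer (with the "+1" restored) can be spot-checked against the raw difference quotient in a few lines of plain Python.

```python
# Spot-check the thread's algebra numerically: the raw difference quotient
# and the simplified form (with the "+1" restored) should agree everywhere.
def f(x):
    return x**3 - 5*x**2 + x

def difference_quotient(x, h):
    return (f(x + h) - f(x)) / h

def simplified(x, h):
    return 3*x**2 + 3*x*h + h**2 - 10*x - 5*h + 1

for x in (0.0, 1.5, -2.0):
    for h in (0.5, 0.01):
        assert abs(difference_quotient(x, h) - simplified(x, h)) < 1e-8
print("algebra checks out")
```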
http://physics.stackexchange.com/tags/frequency/hot

# Tag Info
123
Why is mains frequency 50Hz and not 500 or 5? Engine efficiency, rotational stress, flicker, the skin effect, and the limitations of 19th century material engineering. 50Hz corresponds to 3000 RPM. That range is a convenient, efficient speed for the steam turbine engines which power most generators and thus avoids a lot of extra gearing. 3000 RPM is ...
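The 50 Hz to 3000 RPM link comes from the standard synchronous-machine relation rpm = 120 * f / poles; a minimal sketch:

```python
# Synchronous generator speed follows rpm = 120 * f / poles; a 2-pole
# machine at 50 Hz spins at the 3000 RPM mentioned above.
def synchronous_rpm(freq_hz, poles):
    return 120 * freq_hz / poles

print(synchronous_rpm(50, 2))  # -> 3000.0
print(synchronous_rpm(60, 2))  # -> 3600.0 (60 Hz mains, same 2-pole machine)
print(synchronous_rpm(50, 4))  # -> 1500.0 (more pole pairs, slower shaft)
```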
61
This is a really interesting question. It turns out that your body is reasonably conductive (think salt water, more on that in the answer to this question), and that it can couple to RF sources capacitively. Referring to the Wikipedia article on keyless entry systems; they typically operate at an RF frequency of $315\text{ MHz}$, the wavelength of which is ...
49
I'm not going to address the production mechanism,1 just the nature of the "sound" in this case. What you think of as the hard vacuum of outer space could just as well be seen as a very, very, very diffuse, somewhat ionized gas. That gas can support sound waves as long as the wavelength is considerably longer than the mean free path of the atoms on the ...
42
The limitation you're hearing has been part of the phone network since long before digital sampling had any part in the telephone system. It is related to the fact that the connection from a land-line phone in your house or office back to the "central office" of the phone company is essentially a continuous connection through a pair of wires. There's ...
34
As the other guys have already covered most of the topic, I'd like to quote some things. Light can't escape only from the inside of the event horizon because it has already fallen into it. But after reading the article now, we could indicate some points. The article specifically says a "supermassive blackhole". They're way too bulky in size when compared to ...
33
Colour is defined by the eye, and only indirectly from physical properties like wavelength and frequency. Since this interaction happens in a medium of fixed index of refraction (the vitreous humour of your eye), the frequency/wavelength relation inside your eye is fixed. Outside your eye, the frequency stays constant, and the wavelength changes according ...
26
Do low frequencies carry farther than high frequencies? Yes. The reason has to do with what's stopping the sound. If it weren't for attenuation (absorption) sound would follow an inverse square law. Remember, sound is a pressure wave vibration of molecules. Whenever you give molecules a "push" you're going to lose some energy to heat. Because of this, ...
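A toy model of the effect: spherical spreading is the same for all frequencies, while absorption grows steeply with frequency. The two absorption coefficients below are assumed illustrative values, not measured data.

```python
import math

# Toy propagation model: spherical spreading plus exponential absorption,
#   I(r) = I0 * exp(-alpha * r) / r**2.
# Atmospheric absorption rises steeply with frequency; the alpha values
# below are assumed illustrative coefficients, not measured data.

def relative_intensity(r, alpha):
    return math.exp(-alpha * r) / r**2

alpha_low = 1e-4   # 1/m, assumed for a low-frequency tone
alpha_high = 1e-2  # 1/m, assumed for a high-frequency tone

r = 1000.0  # metres
low = relative_intensity(r, alpha_low)
high = relative_intensity(r, alpha_high)

# The 1/r**2 spreading loss is identical for both tones; absorption alone
# makes the high tone roughly exp(9.9) ~ 2e4 times weaker at this range.
print(low / high)
```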
26
We can consider four aspects of your question: Why do most events generate sound? What sounds get propagated? What does it take for sound to be detected? Has evolution got anything to do with this? 1 - generating sound Most of the sounds you describe are "broad band". Remember that a delta pulse (short sharp shock) is basically "all frequencies", ...
25
In the end, the choice of a single specific number comes from the necessity to standardize. However, we can make some physical observations to understand why that final choice had to fall in a certain range. Frequency Why a standard? First of all, why do we even need a standard? Can't individual appliances convert the incoming electricity to whatever ...
23
According to Wikipedia the frequency range of the plain old telephone service is 300Hz to 3.4kHz. So any music you listen to will be missing the low frequencies and missing the high frequencies. If you remember back to the last time you heard hold music on the phone you'll probably remember that it sounded a bit muffled, but I have to say that it's still ...
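For context, classic digital telephony (the G.711 codec) samples voice at 8 kHz, and the Nyquist criterion then caps representable content at 4 kHz, which is why the 300 Hz to 3.4 kHz band fits with a little guard room. A quick sketch:

```python
# Classic digital telephony (e.g. the G.711 codec) samples voice at 8 kHz;
# the Nyquist criterion limits representable content to half that rate.
sample_rate = 8000         # Hz
nyquist = sample_rate / 2  # 4000 Hz

pots_band = (300, 3400)    # Hz, the passband quoted above
assert pots_band[1] < nyquist  # the whole voice band fits under the limit

# Anything above ~4 kHz (cymbals, sibilance, "air") simply cannot survive
# the trip, which is why hold music sounds muffled.
print(nyquist)  # -> 4000.0
```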
22
For almost all detectors, it is actually the energy of the photon that is the attribute that is detected and the energy is not changed by a refractive medium. So the "color" is unchanged by the medium...
21
It is an ångström, a unit of length commonly used in chemistry to measure things like atomic radii and bond lengths. Although not an official SI unit, it has a simple relationship to the metric units of length: $$1\:\mathrm{ångström} = 1\:\mathrm{Å} = 10^{-10}\:\mathrm{m} = 0.1\:\mathrm{nm} = 100\:\mathrm{pm}.$$
20
The electric and magnetic fields have to remain continuous at the refractive index boundary. If the frequency changed, the light at each side of the boundary would be continuously changing it's relative phase and there would be no way to match the fields.
19
As FrankH said, it's actually energy that determines color. The reason, in summary, is that color is a psychological phenomenon that the brain constructs based on the signals it receives from cone cells on the eye's retina. Those signals, in turn, are generated when photons interact with proteins called photopsins. The proteins have different energy levels ...
18
As previous answers have stated, the wavelength (or frequency) and intensity of the beam are important, as well as the type and amount of impurities in the air. The beam must be of a wavelength that is visible to humans, and fog or dust scatters the light very strongly so that you can see it. However, even in pure, clean air, you will be able to see a laser ...
16
Lorentz came with a nice model for light-matter interaction that describes dispersion quite effectively. If we assume that an electron oscillates around some equilibrium position and is driven by an external electric field $\mathbf{E}$ (i.e., light), its movement can be described by the equation $$\ldots$$
16
I've found some sources. Mathematical: To start with, as for the mathematical notion of "beats", it seems that one Ibn Yunus (c. 950-1009) was responsible for first demonstrating the trigonometric identity $$\cos a \cos b = \frac 12 \left( \cos (a + b) + \cos (a - b) \right)$$ quoting A History of Mathematics by Carl B. Boyer, Uta C. Merzbach. At ...
16
As promised in the comments to my answer, I went out and measured the effect in a number of different configurations (a couple of days later than promised :-)). For those of you who just want the conclusions, here they are: the remote seems to work better when held to the head, though the improvement isn't as marked as one might have expected from a google ...
16
Have a look into the Nyquist theorem. The sampling frequency needs to be at least double the sampled frequency; that's why the human ear can hear up to ca. 20kHz and the CD samples at 44.1kHz. (Wikipedia: Nyquist-Shannon Theorem.) What do we hear instead if we do listen to (originally) 5 Hz to 20 kHz music through the phone? Is everything ...
14
The wavelengths that stimulate vitamin D production are between 280nm and 320nm, which is called UVB. You would need to use a detector capable of measuring light in this wavelength. However there is no need, because normal windows are made from soda-lime glass and this transmits no wavelengths shorter than about 350nm. Some Googling will find you the ...
13
Do keep in mind that the frequency of light is reference frame dependent. So, for example, the cosmic background microwave radiation would appear as a concentrated gamma radiation source 'in front' to an observer with ultra-relativistic speed relative to the CMB. In other words, light emitted from a body of a particular frequency in that body's frame of ...
13
There seem to be a lot of human body mechanical models, such as this one: As for applications, I have heard that sub-audio frequency vibrations have been considered as nonlethal weapons for riot control.
13
The speed of light in vacuum is constant and does not depend on characteristics of the wave (e.g. its frequency, polarization, etc). In other words, in vacuum blue and red colored light travel at the same speed c. The propagation of light in a medium involves complex interactions between the wave and the material through which it travels. This makes the ...
12
There's no contradiction, because your assumption that at some finite time the pulsar will stop is false. We can solve the equation for f: $$\frac{dT}{dt} = \frac{d(1/f)}{dt} = -\frac1{f^2}\frac{df}{dt} = C$$ which is equivalent to $$\frac{df}{dt} = -Cf^2$$ The solution is $$f(t) = \frac{1}{Ct+1/f_0}$$ Of course, this is the same as taking the ...
The two other answers address the frequency issue. The voltage issue is much simpler. If the voltage is too high, you run the risk of arcs between conductors. The minimum distance between conductors before an arc appears is proportional to voltage. At 240V, you arc at a distance of a few millimeters in air, depending on humidity. More voltage gets clearly ...
Because the frequency of a sound wave is defined as "the number of waves per second." If you had a sound source emitting, say, 200 waves per second, and your ear (inside a different medium) received only 150 waves per second, the remaining 50 waves per second would have to pile up somewhere — presumably, at the interface between the two media. ...
So, we need data from ears. An audible sound has a minimum intensity of $I_0\approx10^{-12}W/m^2$. This shows how sensitive our ears really are. A way to see it is to use that intensity to calculate the total variation of air displacement. If you do that, you will get about $\Delta u\approx1.1\cdot 10^{−11}m$. This is $0.11$ angstroms! This is smaller ...
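The quoted $\Delta u\approx1.1\cdot 10^{-11}m$ can be reproduced from the plane-wave displacement amplitude $s = \frac{1}{\omega}\sqrt{2I/(\rho c)}$. The air properties and the 1 kHz reference tone below are my assumptions, not stated in the answer:

```python
import math

# Plane-wave displacement amplitude: s = (1/omega) * sqrt(2 I / (rho c))
I   = 1e-12     # threshold-of-hearing intensity, W/m^2 (from the answer)
rho = 1.2       # density of air, kg/m^3 (assumed)
c   = 343.0     # speed of sound in air, m/s (assumed)
f   = 1000.0    # reference frequency, Hz (assumed)

s = math.sqrt(2 * I / (rho * c)) / (2 * math.pi * f)
print(f"{s:.2e} m")   # about 1.1e-11 m, i.e. roughly 0.1 angstrom
```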
A human eye may only distinguish thousands or millions of colors – obviously, one can't give a precise figure because colors that are too close may be mistakenly identified, or the same colors may be mistakenly said to be different, and so on. The RGB colors of generic modern PC monitors, written with 24 bits like #003322, distinguish $2^{24}\sim$ ...
First, for additional references, there is the original press release. Also, a similar report from a different black hole is here. It seems the Chandra people like this sort of thing. It is also worth noting that as far as I can tell, there are only press releases and no published scientific articles on this phenomenon. Now to address the questions. ...
I am going to speculate on a production mechanism to complement @dmckee's answer. It is true that light cannot escape from a black hole, except that it can lose energy through Hawking radiation. Suppose the black hole oscillates, i.e. vibrates, so that compression waves exist within it; this would of course be another way of losing energy. This is because the radius of the ...
https://dbfin.com/logic/enderton/chapter-3/section-3-1-natural-numbers-with-successor/problem-2-solution/ | # Section 3.1: Problem 2 Solution
Working problems is a crucial part of learning mathematics. No one can learn... merely by poring over the definitions, theorems, and examples that are worked out in the text. One must work part of it out for oneself. To provide that opportunity is the purpose of the exercises.
James R. Munkres
Complete the proof of Theorem 31F. Suggestion: Use induction.
Let us recall Theorem 31F.
Assume that for every formula $\phi$ of the form $\exists x\,(\alpha_{1}\wedge\dots\wedge\alpha_{n})$, where each $\alpha_{i}$ is an atomic formula or the negation of an atomic formula, there is a quantifier-free formula $\psi$ such that $T\vDash(\phi\leftrightarrow\psi)$. Then $T$ admits elimination of quantifiers.
The proof in the text has already shown that, given the assumption of the theorem, one can find a quantifier-free equivalent for any formula of the form $\exists x\theta$ where $\theta$ is quantifier-free. It follows that $\forall x\theta=\neg\exists x\neg\theta$ has a quantifier-free equivalent as well. Now, we can use the prenex normal form (Section 2.6) to argue that every wff is logically equivalent to a wff in prenex normal form, in which we can eliminate quantifiers one by one, starting from the innermost.
https://admin.clutchprep.com/organic-chemistry/practice-problems/16184/write-structural-formulas-of-the-type-indicated-c-condensed-structural-formulas- | # Problem: Write structural formulas of the type indicated: (c) condensed structural formulas for four constitutional isomers with the formula C 3H9N;
###### Problem Details
Write structural formulas of the type indicated:
(c) condensed structural formulas for four constitutional isomers with the formula C3H9N;
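For reference, the four constitutional isomers of C3H9N (standard chemistry, not taken from this page's solution video) are:

```latex
\mathrm{CH_3CH_2CH_2NH_2}\ \text{(propan-1-amine)},\qquad
\mathrm{(CH_3)_2CHNH_2}\ \text{(propan-2-amine)},
\\
\mathrm{CH_3CH_2NHCH_3}\ \text{(N-methylethanamine)},\qquad
\mathrm{(CH_3)_3N}\ \text{(trimethylamine)}.
```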
This problem, or a close variation, appears in Organic Chemistry - Solomons, 10th Edition.
http://math.stanford.edu/geomsem/ | Stanford University
Department of Mathematics
# Geometry Seminar Spring 2014
Organizers: Richard Bamler (rbamler@math.*) and Yi Wang (wangyi@math.*)
Time: Wednesday at 4 PM
Location: 383N
(*=stanford.edu)
## Next Seminar
2 April
Speaker: Setsuro Fujiie (Ritsumeikan University) (Analysis and PDE Seminar)
Title: Semiclassical distribution of resonances created by homoclinic trajectories
Abstract: We consider the Schrödinger operator $$-h^2 \Delta + V(x)$$ in the multidimensional Euclidean space with a semiclassical parameter h and a smooth potential decaying at infinity. Assuming that the underlying classical mechanics on the energy surface of a fixed positive energy has a trapped set consisting of homoclinic trajectories, we describe the precise asymptotic distribution of resonances in a complex neighborhood of this energy. The method is based on the microlocal study of solutions near the trapped set, especially near the hyperbolic fixed point. This is a joint work with Jean-Francois Bony, Thierry Ramond and Maher Zerzeri.
## Spring Quarter
2 April
Speaker: Setsuro Fujiie (Ritsumeikan University) (Analysis and PDE Seminar)
Title: Semiclassical distribution of resonances created by homoclinic trajectories
Abstract: We consider the Schrödinger operator $$-h^2 \Delta + V(x)$$ in the multidimensional Euclidean space with a semiclassical parameter h and a smooth potential decaying at infinity. Assuming that the underlying classical mechanics on the energy surface of a fixed positive energy has a trapped set consisting of homoclinic trajectories, we describe the precise asymptotic distribution of resonances in a complex neighborhood of this energy. The method is based on the microlocal study of solutions near the trapped set, especially near the hyperbolic fixed point. This is a joint work with Jean-Francois Bony, Thierry Ramond and Maher Zerzeri.
9 April
Speaker: David Maxwell (Fairbanks)
Title: Initial Data in General Relativity Described by Expansion and Conformal Deformation
Abstract: Initial data for the vacuum Cauchy problem in general relativity satisfy a system of nonlinear PDEs known as the Einstein constraint equations. These equations are underdetermined, and it has been a long-standing problem to naturally parameterized the solution space. In particular, although the set of constant mean curvature solutions is fully understood, the far-from-CMC regime is not. In this talk we describe how the two most popular competing strategies for constructing non-CMC solutions (the conformal method and the conformal thin-sandwich method) are in fact the same, and we present some examples illustrating deficiencies these methods have in constructing far-from-CMC solutions. From this analysis, we propose adjustments to the conformal method that have the potential to better describe far-from-CMC initial data.
16 April
Speaker: TBA
Title: TBA
Abstract: TBA
23 April
Speaker: Yi Wang (Stanford)
Title: Isoperimetric inequality and Q-curvature
Abstract: TBA
30 April
Speaker: TBA
Title: TBA
Abstract: TBA
9 May, Friday (Note special day and time: 11-11:50am. Regular location: 383N)
Speaker: Sun-Yung Alice Chang (Princeton)
Title: TBA
Abstract: TBA
14 May
Speaker: TBA
Title: TBA
Abstract: TBA
19 May
MONDAY
(Note special seminar)
Speaker: TBA
Title: TBA
Abstract: TBA
28 May
Speaker: Peter Hintz (Stanford)
Title: Nonlinear wave equations on de Sitter and Kerr-de Sitter spaces
Abstract: TBA
4 June
Speaker: TBA
Title: TBA
Abstract: TBA
## Past Quarters
Schedules for past quarters, from Fall 2009 through Winter 2014, are archived on separate pages.
http://mathhelpforum.com/differential-equations/94245-please-help-me-find-solutions-differential-equation.html | # Math Help - Please help me to find the solutions to this differential equation
1. ## Please help me to find the solutions to this differential equation
2. Since it is a homogeneous equation, substitute $y=vx$
$\frac{dy}{dx}=v+x\frac{dv}{dx}$
$\frac{dy}{dx}=-\frac{3xy}{x^2+y^2}$
substituting $y=vx$, we get
$v+x\frac{dv}{dx}=-\frac{3v}{1+v^2}$
Can you continue from here? | 2014-07-25 10:48:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40861594676971436, "perplexity": 254.64619712388642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894151.32/warc/CC-MAIN-20140722025814-00000-ip-10-33-131-23.ec2.internal.warc.gz"} |
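One way to finish, for readers checking their work (this continuation is mine, not from the original thread): note that $y=vx$ gives $\frac{dy}{dx}=v+x\frac{dv}{dx}$, so after moving $v$ to the right-hand side the variables separate.

```latex
x\frac{dv}{dx} = -\frac{3v}{1+v^2} - v = -\frac{v^3+4v}{1+v^2}
\quad\Longrightarrow\quad
\int\frac{1+v^2}{v(v^2+4)}\,dv = -\int\frac{dx}{x}.
% Partial fractions: \frac{1+v^2}{v(v^2+4)} = \frac{1}{4v} + \frac{3v}{4(v^2+4)}
\frac14\ln|v| + \frac38\ln(v^2+4) = -\ln|x| + C
\quad\Longrightarrow\quad
v^2(v^2+4)^3 = \frac{C}{x^8}.
% Substituting back v = y/x:
y^2\,(y^2+4x^2)^3 = C.
```

One can check this answer by implicit differentiation: differentiating $y^2(y^2+4x^2)^3 = C$ and solving for $\frac{dy}{dx}$ recovers $-\frac{3xy}{x^2+y^2}$.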
http://www.ques10.com/p/32349/a-if-y-x-xfrac12-1m-prove-that-x2-1y_n2-2n1xy_n1-n/ | Question: a) If y = $(x + (x)^{\frac{1}{2}}-1)^m , Prove that (x^2-1)y_(n+2)+ (2n+1)xy_(n+1) + (n^2-m^2)y_n= 0$
Subject:- Applied Mathematics
Marks:- 3
Mumbai University > FE > Sem1 > Applied Maths 1
$y = \left(x + \sqrt{x^2-1}\right)^m$, taking the + sign before the radical.

$y_1 = m\left(x + \sqrt{x^2-1}\right)^{m-1}\left[1 + \frac{x}{\sqrt{x^2-1}}\right] = m\left(x + \sqrt{x^2-1}\right)^{m-1}\cdot\frac{x + \sqrt{x^2-1}}{\sqrt{x^2-1}} = \frac{my}{\sqrt{x^2-1}}$

$\sqrt{x^2-1}\,y_1 = my$

Differentiating again w.r.t. $x$,

$\sqrt{x^2-1}\,y_2 + \frac{x}{\sqrt{x^2-1}}\,y_1 = my_1$

Multiplying through by $\sqrt{x^2-1}$,

$(x^2-1)y_2 + xy_1 = m\sqrt{x^2-1}\,y_1 = m\cdot my = m^2y$

$(x^2-1)y_2 + xy_1 - m^2y = 0$

Differentiating $n$ times by Leibniz's theorem,

$(x^2-1)y_{n+2} + 2nx\,y_{n+1} + n(n-1)y_n + xy_{n+1} + ny_n - m^2y_n = 0$

$(x^2-1)y_{n+2} + (2n+1)x\,y_{n+1} + (n^2-m^2)y_n = 0$
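As a numerical sanity check (mine, not part of the posted solution), the second-order identity $(x^2-1)y_2 + xy_1 - m^2y = 0$ can be verified with central differences for a sample exponent:

```python
import math

# Check (x^2 - 1) y'' + x y' - m^2 y = 0 numerically for y = (x + sqrt(x^2-1))^m.
m = 3          # sample exponent; the identity holds for any m
x, h = 2.0, 1e-4

def y(t):
    return (t + math.sqrt(t * t - 1.0)) ** m

y0 = y(x)
y1 = (y(x + h) - y(x - h)) / (2 * h)            # central-difference first derivative
y2 = (y(x + h) - 2 * y0 + y(x - h)) / (h * h)   # central-difference second derivative

residual = (x * x - 1.0) * y2 + x * y1 - m * m * y0
print(abs(residual) < 1e-3)   # prints True
```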
https://im.kendallhunt.com/MS_ACC/teachers/2/1/8/preparation.html | # Lesson 8
Moves in Parallel
### Lesson Narrative
The previous lesson examines the impact of rotations on line segments and polygons. This lesson focuses on the effects of rigid transformations on lines. In particular, students see that parallel lines are taken to parallel lines and that a $$180^\circ$$ rotation about a point on the line takes the line to itself. In grade 7, students found that vertical angles have the same measure, and they justify that here using a $$180^\circ$$ rotation.
As they investigate how $$180^\circ$$ rotations influence parallel lines and intersecting lines, students are looking at specific examples but their conclusions hold for all pairs of parallel or intersecting lines. No special properties of the two intersecting lines are used so the $$180^\circ$$ rotation will show that vertical angles have the same measure for any pair of vertical angles.
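The first claim, that a $$180^\circ$$ rotation about a point on a line takes the line to itself, is easy to spot-check with coordinates, since rotating $$Q$$ by $$180^\circ$$ about $$P$$ sends it to $$2P-Q$$. The sketch below is mine, not part of the published curriculum materials:

```python
# A 180-degree rotation about point p sends q to 2p - q.
def rot180(p, q):
    return (2 * p[0] - q[0], 2 * p[1] - q[1])

def on_line(pt, a, b):
    # pt lies on line ab iff the cross product of a->b and a->pt vanishes
    return abs((b[0] - a[0]) * (pt[1] - a[1]) - (b[1] - a[1]) * (pt[0] - a[0])) < 1e-9

a, b = (0.0, 1.0), (2.0, 5.0)   # two points on the line y = 2x + 1
p = (1.0, 3.0)                  # center of rotation, also on that line
for q in [(-3.0, -5.0), (0.5, 2.0), (10.0, 21.0)]:
    assert on_line(q, a, b)
    assert on_line(rot180(p, q), a, b)   # the image stays on the same line
print("ok")
```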
Teacher Notes for IM 6–8 Math Accelerated
This lesson is the first time students see the term vertical angles in IM 6–8 Math Accelerated. They learn that vertical angles are two angles with the same measure formed by two intersecting lines and use transformations to understand why these pairs of angles have the same measure. If students need additional practice identifying vertical angles, use the lesson synthesis to display this image and ask students to identify four pairs of vertical angles. In particular, students may have trouble seeing that angles $$FJI$$ and $$HJG$$ are vertical angles.
### Learning Goals
Teacher Facing
• Comprehend that a rotation by 180 degrees about a point of two intersecting lines moves each angle to the angle that is vertical to it.
• Describe (orally and in writing) observations of lines and parallel lines under rigid transformations, including lines that are taken to lines and parallel lines that are taken to parallel lines.
• Draw and label rigid transformations of a line and explain the relationship between a line and its image under the transformation.
• Generalize (orally) that “vertical angles” are congruent using informal arguments about 180 degree rotations of lines.
### Student Facing
Let’s transform some lines.
### Student Facing
• I can describe the effects of a rigid transformation on a pair of parallel lines.
• If I have a pair of vertical angles and know the angle measure of one of them, I can find the angle measure of the other.
Building On
### Glossary Entries
• vertical angles
Vertical angles are opposite angles that share the same vertex. They are formed by a pair of intersecting lines. Their angle measures are equal.
For example, angles $$AEC$$ and $$DEB$$ are vertical angles. If angle $$AEC$$ measures $$120^\circ$$, then angle $$DEB$$ must also measure $$120^\circ$$.
Angles $$AED$$ and $$BEC$$ are another pair of vertical angles. | 2022-07-07 04:33:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6527951955795288, "perplexity": 605.2200607207187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683683.99/warc/CC-MAIN-20220707033101-20220707063101-00189.warc.gz"} |
https://certfhewiki.certsign.ro/wiki/Homomorphic_encryption | # Homomorphic encryption
## Intuitive idea
Suppose one would like to delegate the ability of processing its data without giving away access to it. This type of situation becomes more and more frequent with the widespread use of cloud computing. To store unencrypted data in the cloud is very risky and, for some types of data such as medical records, can even be illegal.
On the other hand, at first thought encrypting data seems to cancel out the possible benefits of cloud computing unless one gives the cloud the secret decryption key, sacrificing privacy. Fortunately, there are methods of encrypting data in a malleable way, such that the encryption can be manipulated without decrypting the data.
To explain the ideas in a tangible manner, we are going to use a physical analogy: Alice, who owns a jewellery store and wants her workers to process raw precious materials into jewellery pieces. Alice is constantly concerned about giving her workers complete access to the materials in order to minimise the possibility of theft. The analogy was coined by Gentry [1] and we follow the presentation in his paper.
Alice's plan
Use a transparent impenetrable glovebox secured by a lock for which only Alice has the key. Using the gloves, a worker can assemble pieces of jewellery using the materials that were previously locked inside the box by Alice. When the pieces are finished, she unlocks the box with her key and extracts them.
The locked glovebox with the raw precious materials inside is an analogy for an encryption of some data ${\displaystyle m_{1},\dots ,m_{t}}$ which can be accessed only using the decryption key. The gloves should be regarded as the malleability or the homomorphic property of the encryption. The finished piece of jewellery in the box can be thought of as the encryption of ${\displaystyle f(m_{1},\dots ,m_{t})}$, a desired computation using the initial data. The lack of physical access to the raw precious materials in the box is an analogy for the fact that knowing encryptions of ${\displaystyle m_{1},\dots ,m_{t}}$ or ${\displaystyle f(m_{1},\dots ,m_{t})}$ does not give any information about ${\displaystyle m_{1},\dots ,m_{t}}$ or ${\displaystyle f(m_{1},\dots ,m_{t})}$, without the knowledge of the decryption key.
Of course, Alice's jewellery store, like any analogy, does not represent some aspect of homomorphic encryption very well and one does not have to take it too literally. Some flaws of this analogy are discussed in Section 4 of Gentry's aforementioned article.
## Definition
Every encryption scheme ${\displaystyle {\mathcal {E}}}$ is composed of three algorithms: ${\displaystyle KeyGen,Encrypt}$ and ${\displaystyle Decrypt}$ and two sets ${\displaystyle {\mathcal {P}}}$ (the plaintext space) and ${\displaystyle {\mathcal {C}}}$ (the ciphertext space). All of the algorithms must be efficient, in the sense that they must run in polynomial time with respect to an a priori fixed security parameter ${\displaystyle \lambda }$. Encryption schemes can be symmetric or asymmetric. We will focus here on the asymmetric case (commonly known as public key encryption).
Basically, given a security parameter ${\displaystyle \lambda }$, one generates using KeyGen a pair ${\displaystyle (sk,pk)}$. The next two algorithms describe how to associate to a plaintext ${\displaystyle m\in {\mathcal {P}}}$ a ciphertext ${\displaystyle c=Encrypt(m,pk)\in {\mathcal {C}}}$ using the public key ${\displaystyle pk}$, and vice versa, how to associate to a ciphertext ${\displaystyle c\in {\mathcal {C}}}$ a plaintext ${\displaystyle m=Decrypt(c,sk)}$ using the secret key ${\displaystyle sk}$, such that ${\displaystyle Decrypt(Encrypt(m,pk),sk)=m}$.
A homomorphic encryption scheme has a fourth algorithm ${\displaystyle Evaluate}$, which is associated to a set ${\displaystyle {\mathcal {F}}}$ of permitted functions. For any function ${\displaystyle f\in {\mathcal {F}}}$ and any ciphertexts ${\displaystyle c_{1},\dots ,c_{t}\in {\mathcal {C}}}$ with ${\displaystyle c_{i}=Encrypt(m_{i},pk)}$, the algorithm ${\displaystyle Evaluate(f,c_{1},\dots ,c_{t},pk)}$ outputs a ciphertext ${\displaystyle c}$ that encrypts ${\displaystyle f(m_{1},\dots ,m_{t})}$. In other words, we want that ${\displaystyle Decrypt(c,sk)=f(m_{1},\dots ,m_{t})}$. As a shorthand we say that ${\displaystyle {\mathcal {E}}}$ can handle functions in ${\displaystyle {\mathcal {F}}}$. For a function ${\displaystyle g\not \in {\mathcal {F}}}$, ${\displaystyle Evaluate(g,c_{1},\dots ,c_{t},pk)}$ is not guaranteed to output anything meaningful.
As described so far, it is trivial to construct an encryption scheme that can handle all functions. We can just define ${\displaystyle Evaluate(f,c_{1},\dots ,c_{t},pk)}$ to output ${\displaystyle (f,c_{1},\dots ,c_{t})}$ without processing the ciphertexts ${\displaystyle c_{i}}$ at all. Then, we modify ${\displaystyle Decrypt}$ slightly. To decrypt ${\displaystyle (f,c_{1},\dots ,c_{t})}$ first decrypt ${\displaystyle c_{1},\dots ,c_{t}}$ to obtain ${\displaystyle m_{1},\dots ,m_{t}}$ and then apply ${\displaystyle f}$ to them. But this does not fit the purpose of delegating the processing of information. In the jewellery store analogy, this is as if the worker sends the box back to Alice without doing any work on the raw precious materials. Then Alice has to assemble the jewellery herself.
The purpose of delegating computation is to reduce one's workload. In terms of running times, in a practical encryption scheme, decrypting ${\displaystyle c=Evaluate(f,c_{1},\dots ,c_{t},pk)}$ should require the same amount of computation as decrypting ${\displaystyle c_{1}}$, or any of the ciphertexts ${\displaystyle c_{i}}$ for that matter. Some schemes require additionally that ${\displaystyle c}$ is of the same size as ${\displaystyle c_{1}}$. This property is called compactness; its precise definition can be found in [2] (Definition 3.4). Also, in a practical encryption scheme, the algorithms ${\displaystyle KeyGen}$, ${\displaystyle Encrypt}$ and ${\displaystyle Decrypt}$ should be effectively computable. In terms of complexity, one usually requires that these algorithms be polynomial in a security parameter ${\displaystyle \lambda }$.
An encryption scheme is fully homomorphic (FHE) if it can handle all functions, is compact, and its ${\displaystyle Evaluate}$ algorithm is efficient. The trivial solution presented above is not fully homomorphic, since the size of the ciphertexts output by ${\displaystyle Evaluate}$ depends on the function being evaluated. Moreover, in the trivial example the time needed to decrypt such a ciphertext depends on the evaluated function as well.
## Examples
Below we list a few examples of homomorphic encryption schemes. We hope that presenting just the public key together with the ${\displaystyle Encrypt}$ algorithm is enough to give the reader a clear picture of the whole scheme.
In the ElGamal cryptosystem, in a cyclic group ${\displaystyle G}$ of order ${\displaystyle q}$ with generator ${\displaystyle g}$, if the public key is ${\displaystyle (G,q,g,h)}$, where ${\displaystyle h=g^{x}}$, and ${\displaystyle x}$ is the secret key, then the encryption of a message ${\displaystyle m}$ is ${\displaystyle {\mathcal {E}}(m)=(g^{r},m\cdot h^{r})}$, for some random ${\displaystyle r\in \{0,\ldots ,q-1\}}$. The homomorphic property is then
{\displaystyle {\begin{aligned}{\mathcal {E}}(m_{1})\cdot {\mathcal {E}}(m_{2})&=(g^{r_{1}},m_{1}\cdot h^{r_{1}})(g^{r_{2}},m_{2}\cdot h^{r_{2}})\\[6pt]&=(g^{r_{1}+r_{2}},(m_{1}\cdot m_{2})h^{r_{1}+r_{2}})\\[6pt]&={\mathcal {E}}(m_{1}\cdot m_{2}).\end{aligned}}}
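As a concrete illustration, the multiplicative homomorphism above can be exercised in a few lines of Python. This is a toy sketch of my own; the parameters ${\displaystyle p=23}$, ${\displaystyle g=5}$ are far too small for any real security:

```python
import random

# Toy ElGamal in the group Z_23^* with generator g = 5 (order q = 22).
# These parameters are illustrative only and offer no security.
p, q, g = 23, 22, 5
x = 6                  # secret key
h = pow(g, x, p)       # public key component h = g^x mod p

def encrypt(m, r=None):
    r = random.randrange(1, q) if r is None else r
    return (pow(g, r, p), m * pow(h, r, p) % p)

def decrypt(c):
    c1, c2 = c
    return c2 * pow(c1, -x, p) % p   # pow(c1, -x, p) is the modular inverse of c1^x

ca, cb = encrypt(7), encrypt(5)
prod = (ca[0] * cb[0] % p, ca[1] * cb[1] % p)   # componentwise product of ciphertexts
print(decrypt(prod) == (7 * 5) % p)             # prints True
```

Decrypting the componentwise product of two ciphertexts yields the product of the two plaintexts modulo ${\displaystyle p}$, without the evaluator ever learning ${\displaystyle x}$.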
Goldwasser–Micali
In the Goldwasser–Micali cryptosystem, if the public key is the modulus ${\displaystyle n}$ and quadratic non-residue ${\displaystyle x}$, then the encryption of a bit ${\displaystyle b}$ is ${\displaystyle {\mathcal {E}}(b)=x^{b}r^{2}\;{\bmod {\;}}n}$, for some random ${\displaystyle r\in \{0,\ldots ,n-1\}}$. The homomorphic property is then
{\displaystyle {\begin{aligned}{\mathcal {E}}(b_{1})\cdot {\mathcal {E}}(b_{2})&=x^{b_{1}}r_{1}^{2}x^{b_{2}}r_{2}^{2}\;{\bmod {\;}}n\\[6pt]&=x^{b_{1}+b_{2}}(r_{1}r_{2})^{2}\;{\bmod {\;}}n\\[6pt]&={\mathcal {E}}(b_{1}\oplus b_{2}).\end{aligned}}}
where ${\displaystyle \oplus }$ denotes addition modulo 2, (i.e. Exclusive disjunction).
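Similarly, here is a toy Goldwasser–Micali round trip of my own, again with absurdly small parameters chosen only for illustration (the quadratic non-residue ${\displaystyle x}$ from the text is the variable `x` below):

```python
import math
import random

# Toy Goldwasser-Micali over n = 7 * 11 = 77. Real deployments use n = pq
# with large secret primes; these tiny parameters are for illustration only.
p, q = 7, 11
n = p * q
x = 6       # quadratic non-residue mod n with Jacobi symbol +1

def encrypt(b):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:       # r must be a unit mod n
        r = random.randrange(2, n)
    return (x**b * r * r) % n

def decrypt(c):
    # c encrypts 0 iff c is a quadratic residue mod p (Euler's criterion).
    return 0 if pow(c, (p - 1) // 2, p) == 1 else 1

b1, b2 = 1, 1
c = encrypt(b1) * encrypt(b2) % n    # multiply ciphertexts ...
print(decrypt(c) == (b1 ^ b2))       # ... to XOR the plaintext bits: prints True
```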
Other examples include the RSA, Paillier, and Benaloh encryption schemes.
## References
1. C. Gentry. Computing arbitrary functions of encrypted data. In "Communications of the ACM", 2010.
2. Brakerski, Z., Vaikuntanathan, V.: Efficient fully homomorphic encryption from (standard) LWE, R. Ostrovsky editor, IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS 2011, Palm Springs 2011, pp. 97 - 106 | 2022-08-12 06:58:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 71, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6372959613800049, "perplexity": 653.7763664534086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00148.warc.gz"} |
http://physics.stackexchange.com/questions/72447/why-this-perpetuum-mobile-cant-be-possible | # Why this perpetuum mobile can't be possible? [duplicate]
I know that this won't work but I'm asking Why?
Because from the vehicle's point of view, there is a force which drags it to the right.
Doesn't $F=ma$ apply here? What is it that I'm missing?
## marked as duplicate by Qmechanic♦ Jul 27 '13 at 17:44
There is no reason to downvote. Even if you know the answer, many people out there actually believe it will work, or don't know the reason why it won't. This is NOT a bad question. – mikhailcazi Jul 27 '13 at 14:41
@mikhailcazi agree, but I don't tend to educate people about helping other people raise their knowledge. – Royi Namir Jul 27 '13 at 14:42
I just updated my answer again. Did it help you understand? :) – mikhailcazi Jul 27 '13 at 14:44
@mikhailcazi yes indeed thank you. – Royi Namir Jul 27 '13 at 14:44
duplicate of Why does the "Troll-Mobile" not work? – EnergyNumbers Jul 27 '13 at 16:54
This is because however much force the magnet applies on the iron (not just any metal), it is opposed by an equal force applied by the iron on the magnet! And since the magnet is part of the vehicle, this force cancels out the force on the iron from the magnet. Magnetic force acts both ways!
Therefore, there is no NET force on the whole vehicle, which is why it won't move. What will happen is that the magnet and iron will be pulled to each other, and stick. If the rod connecting the vehicle to the magnet is rigid, and can withstand the magnetic force, nothing will happen.
You should remember that internal forces can never produce a change in the momentum of the system as a whole.
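This can be checked with a tiny numerical integration — a minimal sketch with made-up masses and a made-up magnetic force, treating the iron and the cart-plus-magnet as the two halves of the system:

```python
# Internal equal-and-opposite forces cannot change total momentum:
# simple Euler integration of the two halves of the system.
# All numbers are invented for illustration.
m1, m2 = 2.0, 10.0   # kg: iron bar, cart+magnet
v1, v2 = 0.0, 0.0    # m/s
F = 5.0              # N: magnet pulls iron forward (+), iron pulls magnet back (-)
dt = 0.01            # s

for _ in range(1000):
    v1 += (+F / m1) * dt   # iron accelerates toward the magnet
    v2 += (-F / m2) * dt   # cart (carrying the magnet) accelerates backward

p_total = m1 * v1 + m2 * v2
# p_total stays zero (to floating-point accuracy): the parts move
# toward each other, but the centre of mass never goes anywhere.
```

The two parts gain equal and opposite momenta, which is exactly why the iron and magnet end up stuck together rather than the cart rolling forward.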
Another example is if you try to push a car while sitting in it. No doubt, the car will receive a force pushing it in one direction, but that force will be cancelled out by the force you are unknowingly applying to the car in the exact opposite direction. This force is applied by your feet as they stop themselves from slipping.
## Dependence of "g"
tanvirab
Posts: 446
Joined: Tue Dec 07, 2010 2:08 am
### Re: Dependence of "g"
$g$ does not depend on $m$, in any theory!
Whatever $m$ is, the value of $g$ is the same, $9.8 ms^{-2}$. It is a constant independent of $m$.
We can write $2 = \frac{a}{b}$. That does not mean $2$ depends on $b$.
Dipan
Posts: 158
Joined: Wed Dec 08, 2010 5:36 pm
### Re: Dependence of "g"
OK, according to the updated definition of a function, we can define a function with a dependent and an independent variable. In $y = 2x$, $y$ depends on the value of $x$. So why do you want to say that in $2 = ab$, $2$ does not depend on $b$?
tanvirab
### Re: Dependence of "g"
The definition of functions has nothing to do with this.
Does $2$ depend on $b$? You tell me.
Dipan
### Re: Dependence of "g"
If $2 = ab$, you can get $2$ after using the value of $b$ on the right side... so...
tanvirab
### Re: Dependence of "g"
That does not mean $2$ depends on $b$. Whatever $b$ is, $2$ is always $2$, it does not depend on anything.
Dipan
### Re: Dependence of "g"
tanvirab wrote:That does not mean $2$ depends on $b$. Whatever $b$ is, $2$ is always $2$, it does not depend on anything.
As far as I know, the left side of an equation always depends on the right side. Yes, you have used $2$ here, which is a constant, but we were talking about $g$, which is also a constant. There are two kinds of constants: one is an arbitrary (chosen) constant (I don't know the right English term), and it is different from a constant like $2$. Look: if $2 = ab$ and $a = 1$, then the equation will be satisfied if and only if $b = 1$, so why are you saying…
tanvirab
### Re: Dependence of "g"
There are no such things.
A value $x$ is dependent on $y$ if and only if when you change the value of $y$ the value of $x$ also changes. If the value of $x$ does not change when you change the value of $y$, then $x$ is independent of $y$.
Dipan
### Re: Dependence of "g"
OK, in the equation $g = F/m$ we can use different values of $F$ and $m$. If I take 4116 as the value of $F$, I have to use 420 as the value of $m$; I can't use any other value of $m$. So...
tanvirab
### Re: Dependence of "g"
so what?
Zzzz
Posts: 172
Joined: Tue Dec 07, 2010 6:28 am
Location: 22° 48' 0" N / 89° 33' 0" E
### Re: Dependence of "g"
If $A = xyz$, then $A$ can be said to depend on $x$ only if, keeping $y$ and $z$ unchanged, a change in $x$ alone produces a change in $A$.
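The dependence criterion discussed in this thread can be checked numerically — an illustrative sketch using the numbers quoted earlier ($F = 4116$, $m = 420$): however $m$ is varied, $F = mg$ varies with it, so the ratio $g = F/m$ never changes.

```python
# g = F/m does not depend on m: the weight F = m*g scales with m,
# so the ratio is the same constant for every body.
g = 9.8  # m/s^2

assert abs(4116.0 / 420.0 - g) < 1e-9   # the example quoted in the thread

for m in (1.0, 420.0, 1e6):
    F = m * g                      # weight of a body of mass m
    assert abs(F / m - g) < 1e-9   # same g, whatever m is
```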
Every logical solution to a problem has its own beauty.
# Socio-demographic correlates of wildlife consumption during early stages of the COVID-19 pandemic
## Abstract
To inform efforts at preventing future pandemics, we assessed how socio-demographic attributes correlated with wildlife consumption as COVID-19 (coronavirus disease 2019) first spread across Asia. Self-reported wildlife consumption was most strongly related to COVID-19 awareness; those with greater awareness were 11–24% less likely to buy wildlife products. A hypothetical intervention targeting increased awareness, support for wildlife market closures and reduced medical impacts of COVID-19 could halve future wildlife consumption rates across several countries and demographics.
## Main
The global COVID-19 pandemic has killed over four million people around the world and caused trillions of dollars of economic damage, but it did not arise unexpectedly. Indeed, experts had warned of this type of large-scale outbreak in the wake of other recent emerging zoonotic diseases [1]. While uncertainty remains regarding the specific origin of COVID-19 [2], a key driving force of emerging infectious diseases of zoonotic origin is the trade and consumption of wildlife, in particular of high-risk taxa [3], or of species sold in high-risk market conditions [4]. While the global costs of pandemics such as COVID-19 drastically exceed the benefits of the global wildlife trade [5], it has nevertheless proven difficult to address large-scale wildlife consumption at local or regional scales. This is especially true in certain Asian countries where demand for wildlife used in various traditional, cultural and economic contexts is high [6], and where attempts to curb illegal trade are sometimes hampered by weak wildlife trade laws, low enforcement rates and/or corruption [7].
The global conservation community is debating the best long-term response to COVID-19, in particular on how to reduce wildlife consumption and habitat destruction so that the probability of future pandemic emergence is reduced [8,9,10]. Regulatory approaches such as the closing of wildlife markets—especially those deemed high-risk—are a popular demand [8]; however, previous examples have shown that rendering the consumption of certain goods illegal (for example, alcohol, recreational drugs) can drive existing demand underground to black markets [11]. Closing markets or otherwise restricting access to wildlife in situations where trade is highly localized, and/or where wildlife use is imperative for livelihoods or subsistence, also poses ethical dilemmas and trade-offs that are not easily answered [8,12].
A complement to regulatory approaches are demand reduction efforts, which seek to influence consumer preferences so that demand for wildlife is reduced, leading to lower consumption rates. Reducing consumer demand may be a more comprehensive approach to lessening wildlife consumption [13], but is beset by many complications, including limited investment in research to understand what drives individuals to consume wildlife [14]. Non-governmental organizations and academics are increasingly cognizant of the need for a solid research foundation to feed into behaviour change campaigns to reduce demand. Recent studies have made advances in identifying motivations for wildlife purchasing, as well as in developing consumer surveys that can help target specific groups of interest rather than whole populations [15,16]. The increasing popularity of demand reduction campaigns [13,17] can be usefully bolstered by empirical studies that provide evidence-based justification for targeting and messaging strategies [18,19], which would ultimately allow these interventions to realize their full potential within a comprehensive ‘One Health’ approach to zoonotic disease regulation [20].
To address this empirical aspect of wildlife demand reduction efforts, we surveyed a total of 5,000 respondents among the general public in five countries and territories in Asia (Hong Kong SAR, Japan, Myanmar, Thailand and Vietnam), eliciting their self-reported wildlife consumption patterns, their awareness of and attitudes towards wildlife markets and COVID-19, and a variety of socio-demographic information (Methods). We built Bayesian hierarchical regression models on the basis of respondent socio-demographic attributes for (1) self-reported wildlife consumption in the previous 12 months, (2) change in consumption as a result of COVID-19 and (3) anticipated future wildlife consumption (Methods and Fig. 1a). Wildlife consumption in our case referred specifically to the purchase of terrestrial wild animals or their derived products in open, in-country markets such as ‘wet’ markets (see Supplementary Methods for all questions used in our modelling). We then used insights from these models to develop a simulated behaviour change intervention and assessed the impact this intervention could have on future wildlife consumption.
Our models of recent wildlife-purchasing behaviour and COVID-related changes in wildlife consumption had excellent in-sample goodness-of-fit, with areas under the receiver operating curve, using posterior predictive probability of models, equal to 0.84 and 0.83, respectively [21] (Supplementary Fig. 1). The area under the receiver operating curve for the model for future wildlife product purchases was lower at 0.76, but still at a level considered to provide acceptable classification performance [21]. The model containing all independent variables had the highest predictive power for recent self-reported wildlife consumption, and was statistically indistinguishable from the best reduced-form models for future wildlife consumption and for COVID-related changes in wildlife consumption (Supplementary Table 2). As has been suggested, we therefore retained the model containing all predictor variables for inference and subsequent predictive modelling across all three response variables [22].
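The AUC figures above are computed from posterior predictive probabilities; as a reminder of what the statistic measures, here is a minimal rank-based (Mann–Whitney) sketch — not the study's code, and the data below are invented:

```python
# Minimal AUC in its rank (Mann-Whitney) form: the fraction of
# (positive, negative) pairs the model's scores rank correctly,
# counting ties as half. Toy data, not the study's.
def auc(y_true, y_score):
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if ps > ns else 0.5 if ps == ns else 0.0
               for ps in pos for ns in neg)
    return wins / (len(pos) * len(neg))

y     = [1, 1, 0, 0, 1, 0]              # reported consumption (invented)
p_hat = [0.9, 0.7, 0.4, 0.2, 0.6, 0.8]  # posterior predictive probabilities
# auc(y, p_hat) == 7/9, i.e. about 0.78
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is the sense in which 0.84 and 0.83 are described as excellent and 0.76 as acceptable.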
For all five countries/territories, awareness of COVID-19 was the strongest predictor of whether someone responded positively to any of the three questions regarding self-reported wildlife consumption (that is, current, future and changes as a result of COVID-19; Fig. 1b–d). For all three questions and across all countries/territories, there was strong evidence for negative associations between the highest level of awareness of COVID-19 and the probability of respondents saying they or someone they know would purchase wildlife. There was also strong evidence of a negative association between having some awareness of COVID-19 and the probability of a respondent reporting yes to each consumption question. The exceptions to this were respondents in Vietnam to the question on changes in wildlife consumption as a result of COVID-19 and in Myanmar to the question on the probability of being a future buyer.
Questions related to potential wildlife market closures had variable associations with wildlife consumption. Respondents in Thailand who viewed wildlife market closures as effective against future pandemics were less likely to say they would consume wildlife in the future. In all countries and territories except Myanmar, respondents who thought wildlife closures would be very effective in preventing future pandemics were actually more likely to have reported wildlife purchases among their social circle in the last 12 months. This may be explained by the fact that the people most familiar with these markets and the conditions wildlife are kept in may also be best placed to understand how closing them may protect public health. Those who were very likely to support government closures of wildlife markets were less likely to say they would consume wildlife in the future in all countries except for Vietnam, where those who were extremely worried about a future pandemic were more likely to have increased their wildlife consumption as a result of COVID-19.
We simulated the impacts of a hypothetical intervention package that simultaneously targeted several socio-demographic variables, assessing how future wildlife-purchasing behaviour might change compared to baseline expectations of a population with similar attributes to the one we sampled. The intervention included information provisioning to raise awareness on COVID-19, as well as a hypothetical elimination of medical impacts associated with the pandemic and the achievement of universal support for wildlife market closures. There is strong evidence that this hypothetical intervention would result in substantial reductions in the probability of future buying across simulated populations in Myanmar (mean frequency of future buying reduced from 15.5% to 7.3%) and Japan (mean frequency of future buying reduced from 10% to 4.5%). There was also strong evidence for reductions in future wildlife consumption among specific demographic groups in all countries/territories (Fig. 2 and Supplementary Table 3). For example, exposing simulated individuals aged 21–25 in Thailand to the hypothetical intervention resulted in a reduction in the mean probability of future buying from 24.1% to 13.5% (a nearly 50% reduction). And in Hong Kong SAR, our models suggest that targeting wealthier individuals (those earning >US$135,000 per year) would reduce the mean probability of future buying in that group from 16% to 7% (Fig. 2).
Our results provide clues on how to best approach potential interventions that focus on the demand side of wildlife consumption in parts of Asia, and are particularly relevant for consumption that occurs in high-risk markets where live and/or freshly butchered wildlife and their derived products may be sold for luxury consumption, medicinal use, ornaments or as pets. They show the importance of identifying target groups and target messages before conducting demand reduction campaigns, as results may vary among demographically distinct groups or in different regions. They also suggest areas for follow-up work that should build on the survey we report here. These include further investigation on the drivers of consumer demand for wildlife in Myanmar, Thailand and Vietnam (where consumption levels were highest), as well as surveys in additional countries of importance (for example, China). The opinion poll results we present could also be usefully complemented with experimental survey techniques that address how to elicit information and trade-offs on sensitive topics such as wildlife consumption [23,24,25] as well as the psychosocial motivations that may not surface during a traditional survey. Ultimately, basing potential behaviour-change interventions on the best available data and analytical approaches reduces the chance of unintended negative consequences when making policy decisions on wildlife consumption [8], and could greatly increase the effectiveness and efficiency of these campaigns [26] within a ‘One Health’ approach to confronting zoonotic disease emergence.
## Methods
We focused our research on countries/territories in Asia (specifically, Hong Kong SAR, Japan, Myanmar, Thailand and Vietnam) because COVID-19 had not spread much outside Asia at the time of data collection and the global effects were predominantly concentrated in East and Southeast Asia. Our five survey countries/territories were chosen because they all have relatively high levels of wildlife trade but also represent very different forms of trade (for example, the pet trade in Japan versus the wild-meat trade in Vietnam). Surveying respondents from markets with these different forms of trade thus allowed an examination of how the full variety of wildlife consumption types may be impacted by perceived disease risk. Budgetary constraints precluded the inclusion of further countries, although we believe those that were surveyed provide a valid snapshot of the main regional issues and patterns. The exception to this may be the exclusion of China, a key global player in the wildlife trade and the possible origin of the COVID-19 virus. Conducting research in China requires an extensive process to obtain permission that was not consistent with the opportunistic nature of our survey, which was mobilized quickly to target opinions from a snapshot view of an (at that time) emerging disease. Given the time-sensitive nature of the research, we were therefore unable to wait for the necessary permissions to include China in this survey.
Our online survey was conducted between March 3–11, 2020 and surveyed 1,000 respondents in each of the five target countries/territories. We designed and translated our questionnaires with local experts to ensure questions were culturally appropriate, understandable and relevant. The survey was a quantitative data collection instrument that comprised 32 questions, lasted on average 8 minutes, and respondents were offered an incentive for participating. Respondents aged 18+ were invited via email from an online panel of over 2.5 million people in the target countries/territories, and could answer on any internet-capable device (for example smartphone, tablet, laptop) at their convenience. Only respondents aged 18 and over were eligible to take the survey, which was entirely voluntary. Any respondents working in advertising, public relations, marketing, market research or media industries were screened out to prevent possible bias. The email invite that was sent to participants did not specify the exact nature of the survey to avoid skewing the participants towards those that believed they know about the topic. Instead, the invite indicated that the questions would be about ‘consumption and shopping habits’. The panel is maintained by Toluna (https://tolunacorporate.com/), an online data collection group focused on providing high-quality market research data to clients in various business and non-business sectors. Toluna builds and maintains large online consumer panels to collect these data while adhering to stringent global and local guidelines for panel management and data quality, and is a member of the European Society for Opinion and Market Research (https://www.esomar.org).
Toluna respects privacy and is committed to protecting personal data. Their privacy policy (https://tolunacorporate.com/legal/privacy-policy/) provides information on how Toluna collects and processes personal data, explains privacy rights and gives an overview of applicable legislation protecting the handling of personal information. Toluna only uses personal data when the law allows the data to be used.
Respondents were asked demographic questions, and quotas based on the most recent census data for each country/territory were used to ensure the final sample profile was nationally representative of age and gender, except in Myanmar where internet access skewed online panel members to a younger male demographic. Specifically, participants were excluded once quotas on age and gender were filled, and again, participants working in advertising/public relations, marketing research or media were excluded from the survey as we believed these jobs could influence responses. Respondents were asked about societal, economic and environmental concerns, their perception of COVID-19 and their attitudes towards wildlife and wildlife consumption (Supplementary Methods). We also excluded respondents who stated that they were unsure whether they or anyone in their social circle had recently purchased wildlife products (n = 421), as well as an additional n = 39 respondents who were unable to answer survey questions that were later included as covariates in our models.
Because of the potentially sensitive nature of wildlife consumption, we asked about past wildlife purchases indirectly, questioning respondents on whether anyone within their social circle, including themselves, had recently purchased wildlife products. Indirect questions can improve answer rates for questions that people may feel uncomfortable about answering honestly [27]. During the pandemic, respondents may have felt uncomfortable about revealing wildlife purchases, given links between wildlife consumption and COVID-19. Additionally, although most wildlife consumption is legal (with restrictions) in the markets surveyed, some is not, and researchers can be perceived as having interests contrary to that of the respondent. For less-sensitive questions on future wildlife consumption and changes in consumption resulting from COVID-19, we asked respondents for their own response rather than that of their social group.
Previous studies have found a high correlation between an individual’s admission of using a wildlife product and their likelihood of being within a network of individuals who buy such products [28], and suggested that this is linked to homophily in social networks, especially in Southeast Asia. The homophily principle states that people’s personal networks are homogeneous with regard to many socio-demographic, behavioural and intrapersonal characteristics [29]. Research on wildlife consumption in other Southeast Asian contexts suggests that social groups can be a motivator to begin or maintain consumption of wildlife products [28,30]. Our own previous research supports this, indicating a strong correlation between one’s own tiger and ivory purchases and knowing someone within one’s social circle who has purchased such products. Additionally and recognizing the homophily principle, behaviour change campaigns targeted at social networks rather than individuals per se are likely to achieve better results than non-targeted campaigns. Changing perceptions of acceptability is a key aspect of social marketing and is used in the social mobilization domain of social and behaviour change communications, which has become a popular framework for reducing demand for illegally traded wildlife products [31]. Influencing people within a wildlife consumer’s social network may therefore have a higher rate of efficacy than attempting to influence the perceptions of individuals who do not know any consumers of wildlife.
We used hierarchical Bayesian regression models to assess relationships between socio-demographic explanators and our three response variables: (1) self-reported recent wildlife consumption, (2) change in wildlife consumption as a result of COVID-19 and (3) anticipated future wildlife consumption. Explanatory variables included 22 non-collinear variables in six categories: basic demographics, awareness and level of worry of COVID-19, COVID-19 personal impacts, support for and effectiveness of wildlife market closures, international travel habits and general attitudes towards global issues (Supplementary Table 1). Aside from household income (measured in US dollars per year), age (midpoint of year categories from the survey question) and education (ordinal, reflecting increasing level of schooling), all other variables were categorical; those with more than two categories were collapsed into dummy variables. Income, age and education were standardized and included to investigate whether a person’s general socio-economic status affects wildlife consumption. General attitudes towards global issues were expected to reflect aspects of respondents’ political tendencies, while travel habits were included to test the hypothesis that those who travel internationally more habitually are, and will be, more frequent consumers of wildlife. Questions regarding awareness and impacts of COVID-19, and concern about future disease epidemics, were asked to determine how the pandemic may be shaping wildlife consumption. Finally, support and perceived effectiveness of wildlife market closures were included as predictor variables since this measure has been suggested as a strong policy lever to reduce wildlife consumption.
The general structure of all three models was as follows:
$$y_{ij} \sim \mathrm{Bernoulli}\left(\theta_{ij}\right) \tag{1}$$
$$\mathrm{logit}(\theta) = \alpha + u_1 + \beta\mathbf{X} + u_2\mathbf{Z} \tag{2}$$
This model allowed both coefficients and intercepts to vary across countries (that is, a ‘random-slope random-intercept’ model). In equation (1), yij is whether or not individual i in country j reported wildlife consumption, modelled as a Bernoulli trial with probability θij. The logit transformation of θ (equation 2) is a linear function of parameters α and u1 (the fixed intercept term and a vector of the country-specific intercept terms, respectively), as well as a vector of fixed regression coefficients β and a vector of country-specific regression coefficients u2, with X and Z being the corresponding design matrices [32]. For α and β, we used an improper flat prior over the real numbers, while the group level parameters u1 and u2 were assumed to arise from a multivariate normal distribution with mean 0 and unknown covariance matrix. The covariance matrix was parameterized by a correlation matrix having a Lewandowski–Kurowicka–Joe prior, and a standard deviation with half-Student t prior with three degrees of freedom [32].
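To make equations (1)–(2) concrete, here is a sketch (not the authors' R/brms code) of how the linear predictor combines a fixed intercept, a country-specific intercept deviation, fixed slopes and country-specific slope deviations; for simplicity the fixed- and group-level design matrices coincide, and all numbers are hypothetical:

```python
# Sketch of the random-slope random-intercept logit model in
# equations (1)-(2). Every coefficient below is invented.
import math

def inv_logit(z):
    return 1.0 / (1.0 + math.exp(-z))

alpha = -1.2                                 # fixed intercept (alpha)
u1 = {"TH": 0.4, "JP": -0.3}                 # country-specific intercepts (u1)
beta = [0.8, -0.5]                           # fixed coefficients (beta)
u2 = {"TH": [0.1, 0.0], "JP": [-0.2, 0.1]}   # country-specific slopes (u2)

def theta(country, x):
    """Probability of reported consumption for covariate row x."""
    z = alpha + u1[country] + sum(
        (b + u) * xi for b, u, xi in zip(beta, u2[country], x))
    return inv_logit(z)

# e.g. a respondent in country "TH" with covariates x = [1, 0]:
# theta("TH", [1, 0]) is about 0.525
```

The response y is then a Bernoulli draw with this probability; in the fitted model the u terms additionally share a multivariate-normal prior across countries.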
For the three dependent variables, we evaluated the predictive power of a model containing all 22 variables, as well as six subset models, using Watanabe–Akaike Information Criterion and leave-one-out cross-validation [33]. Each of these six subset models contained all explanatory variables except for those within one of the six categories described above (for example, all explanatory variables except those relating to international travel habits, all explanatory variables except those relating to support for wildlife market closures). We used this model-comparison approach to test whether any of these categories of explanatory variable were more or less important in explaining wildlife consumption; if particular categories of variable are stronger predictors of wildlife consumption, this could help inform where future conservation interventions should focus. Watanabe–Akaike Information Criterion and leave-one-out cross-validation are both measures of model predictive accuracy (both use log predictive density as the utility function or comparison metric) and have been suggested as useful metrics for Bayesian model selection [33]. We interpreted variable coefficients whose 95% Bayesian credible intervals did not contain 0 as providing strong evidence for the impact of that variable on the outcome in each of the three models for self-reported wildlife consumption (that is, recent, future and changes due to COVID-19). Models were estimated using the R statistical computing software [34], in particular the package brms [32], with four chains of 1,000 iterations each, a 500-iteration warm-up period, and with successful convergence verified by confirming that R-hat statistical values were less than or equal to 1.01 (ref. 22).
We used the Bayesian hierarchical model of anticipated future wildlife consumption and generated predicted probabilities of future consumption for our sample population (Fig. 2, grey bars). We then predicted future consumption probabilities for a hypothetical behaviour-change intervention (Fig. 2, coloured bars). This intervention was simulated by setting the ‘medical impact’ variable to zero for all individuals, and by assigning all individuals into the ‘aware lots’ and ‘support very likely’ categories for questions related to level of awareness of COVID-19 and level of support for government closure of domestic wildlife markets, respectively. All other variables for individuals were held at the levels recorded in the surveys. We considered the difference between these two predicted probabilities as the impact of the hypothetical behaviour-change intervention, which we examined at the level of the country/territory and within education, age, income and gender demographic classes. Strong evidence for the effectiveness of this hypothetical intervention among countries and demographic classes was suggested where Bayesian credible intervals around the mean predicted difference were less than zero (Supplementary Table 3).
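The counterfactual exercise can be illustrated with a toy version — not the fitted model: predict each respondent's future-buying probability as surveyed, then again with awareness and market-closure support set to their highest levels and medical impact set to zero, and compare the means. All coefficients and respondents below are invented.

```python
# Toy counterfactual "intervention": flip three covariates to their
# target levels for everyone and compare mean predicted probabilities.
# Coefficients and respondent rows are invented for illustration.
import math

def inv_logit(z):
    return 1.0 / (1.0 + math.exp(-z))

intercept = -1.0
# columns: [aware_lots, support_very_likely, medical_impact]
coef = [-1.5, -0.9, 0.7]
respondents = [[0, 0, 1], [1, 0, 0], [0, 1, 1], [0, 0, 0]]

def p_buy(x):
    return inv_logit(intercept + sum(c * xi for c, xi in zip(coef, x)))

baseline = sum(p_buy(x) for x in respondents) / len(respondents)
treated = sum(p_buy([1, 1, 0]) for _ in respondents) / len(respondents)
effect = baseline - treated   # > 0 means the intervention reduces buying
```

In the study the same comparison is made per posterior draw, so the difference carries a credible interval rather than a single point estimate, and it is summarized within countries and demographic classes.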
### Reporting Summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
## Data availability
The data analysed in this study are available via the Open Science Framework at https://osf.io/z8kbd/.
## Acknowledgements
We thank A. Nicolas for research support.
## Author information
### Contributions
J.V. and D.B. conceived the study; D.B. collected the data; R.N. analysed the data; R.N., D.B. and J.V. wrote the paper.
### Corresponding author
Correspondence to Robin Naidoo.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information Nature Ecology & Evolution thanks Jarno Vanhatalo and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Supplementary information
### Supplementary Information
Supplementary Tables 1–3, Fig. 1 and Methods.
Naidoo, R., Bergin, D. & Vertefeuille, J. Socio-demographic correlates of wildlife consumption during early stages of the COVID-19 pandemic. Nat Ecol Evol 5, 1361–1366 (2021). https://doi.org/10.1038/s41559-021-01546-5
• DOI: https://doi.org/10.1038/s41559-021-01546-5
https://noa.gwlb.de/receive/cop_mods_00049805

# Analysis of spatiotemporal variations in middle-tropospheric to upper-tropospheric methane during the Wenchuan $M_s = 8.0$ earthquake by three indices
This research studied the spatiotemporal variation in methane in the middle to upper troposphere during the Wenchuan earthquake (12 May 2008) using AIRS retrieval data and discussed the methane anomaly mechanism. Three indices were proposed and used for analysis. Our results show that the methane concentration increased significantly in 2008, with an average increase of $5.12 \times 10^{-8}$, compared to the average increase of $1.18 \times 10^{-8}$ in the previous 5 years. The absolute local index of change of the environment (ALICE) and differential value (diff) indices can be used to identify methane concentration anomalies. The two indices showed that the methane concentration distribution before and after the earthquake broke the distribution features of the background field. As the earthquake approached, areas of high methane concentration gradually converged towards the west side of the epicenter from both ends of the Longmenshan fault zone. Moreover, a large anomalous area was centered at the epicenter 8 d before the earthquake occurred, and a trend of strengthening, weakening and strengthening appeared over time. The gradient index showed that the vertical direction obviously increased before the main earthquake and that the value was positive. The gradient value is negative during coseismic or post-seismic events. The gradient index reflects the gas emission characteristics to some extent. We also determined that the methane release was connected with the deep crust–mantle stress state, as well as micro-fracture generation and expansion. However, due to the lack of any technical means to accurately identify the source and content of methane in the atmosphere before the earthquake, an in-depth discussion has not been conducted, and further studies on this issue may be needed.
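For illustration, the "diff" style of index above can be sketched numerically. This is a hypothetical Python sketch, assuming diff is simply the departure of the current concentration from a multi-year background mean; the abstract does not give the exact formula, and all numbers below are invented.

```python
import numpy as np

# Invented background concentrations (volume mixing ratio) for 2003-2007
background = np.array([1.78e-6, 1.79e-6, 1.80e-6, 1.80e-6, 1.81e-6])
current = 1.86e-6  # invented 2008 value for the same grid cell

# Hypothetical "diff" index: departure from the multi-year background mean
diff_index = current - background.mean()
print(f"diff index: {diff_index:.2e}")  # positive => anomalous enrichment
```

A positive departure on the order of $10^{-8}$, as in this toy case, would match the scale of the anomalous 2008 increase reported in the abstract.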
### Cite
Citation format:
Cui, Jing / Shen, Xuhui / Zhang, Jingfa / et al: Analysis of spatiotemporal variations in middle-tropospheric to upper-tropospheric methane during the Wenchuan $M_s = 8.0$ earthquake by three indices. 2019. Copernicus Publications.
### Rights
Rights holder: Jing Cui et al.
Use and reproduction:
https://mathoverflow.net/questions/239470/uniform-approximation-of-a-continuous-flow-by-a-mathcalc1-flow | # Uniform approximation of a continuous flow by a $\mathcal{C}^1$ flow
Setup: Consider a (smooth) compact Riemannian manifold $M$, whose distance is denoted by $d$. Let $\Phi$ be a continuous flow, namely a continuous map from $\mathbb{R} \times M$ to $M$ satisfying:
• $\forall t \in \mathbb{R}, \Phi(t,\cdot) \in \text{Homeo}(M)$
• $\forall t,s \in \mathbb{R}, \Phi(t+s,\cdot) = \Phi(t,\Phi(s,\cdot))$
We consider the following $\mathcal{C}^0$ metric on continuous flows: $$\delta(\Phi,\Psi) = \sup_{t \in [0,1],x \in M} d(\Phi(t,x),\Psi(t,x))$$
Question: Is it possible to approximate a $\mathcal{C}^0$ flow by a $\mathcal{C}^1$ flow (or even $\mathcal{C}^{\infty}$) in the $\mathcal{C}^0$ topology? In other words, given $\varepsilon > 0$, is there a $\mathcal{C}^1$ flow $\Psi$ such that $\delta(\Phi,\Psi) < \varepsilon$?
I know (from http://arxiv.org/abs/0901.1002) that this result is far from trivial for homeomorphisms. It is true in dimension $\leq 3$ (any homeomorphism can be uniformly approximated by a diffeomorphism) and in dimension $\geq 5$ if and only if the homeomorphism is isotopic to a diffeomorphism. Apparently, it is still open in dimension $4$. In particular, this shows that any element $\Phi^t$ of a continuous flow can be individually uniformly approximated by a diffeomorphism, but this doesn't answer the question.
I have tried to look for a reference in the literature, but I couldn't find any. Does anyone have an idea and/or a reference on this question?
https://socratic.org/questions/a-customer-in-a-computer-store-can-choose-one-of-four-monitors-one-of-three-keyb | # A customer in a computer store can choose one of four monitors, one of three keyboards, and one of five computers. How do you determine the number of possible system configurations?
Dec 31, 2017
60
#### Explanation:
Given:
Monitors: 1 of 4
Keyboards: 1 of 3
Computers: 1 of 5
Notice the pattern:
I can pair the 4 monitors with the first keyboard, then the same 4 monitors with the second keyboard, and again with the third keyboard.
That is $4 + 4 + 4 = 12$, or Monitors $\times$ Keyboards $= 4 \times 3 = 12$.
If we include the computers in the pattern, we get:
Monitors $\times$ Keyboards $\times$ Computers $= 4 \times 3 \times 5 = 60$
We have 60 different ways of choosing a monitor, a keyboard, and a computer.
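The counting above is an instance of the multiplication principle, and it can be verified by enumerating every (monitor, keyboard, computer) triple with Python's `itertools`:

```python
from itertools import product

monitors = range(4)
keyboards = range(3)
computers = range(5)

# Each triple is one distinct system configuration
configs = list(product(monitors, keyboards, computers))
print(len(configs))  # 4 * 3 * 5 = 60
```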